If we want to use a system, we must know both its good and its bad sides. Only thorough knowledge of a system makes it safe. We must realize that every tool we create can be used in positive and negative ways, so not everything is black and white. When people are concerned about their privacy, the counterargument is that, for example, the use of fake identities can help investigators hunt for pedophiles and drug dealers.
In the same way, privacy protection helps other criminals hide. And the same tools are effective in the hands of people like Kim Jong-un. The user of a system determines its purpose. When people are concerned about AI and their privacy, we should notice that the same people are often not worried about things like firearms.
Firearms protect their homes, yet for some reason privacy must be so strong that organizations like the Mafia can use it as a shield against the authorities. In the same way, the Chinese and North Korean governments create AI that can serve as a surveillance tool for the state. The same tools that are used to create animations can be used to create fake information. The same systems that are used to track pedophiles can be used to track the opposition. Even though AI does many good things, we must realize that AI is not only a good thing.
AI is also a looming threat in the hands of bad actors. Bad actors can use AI to create cyber attacks and propaganda tools. Generative AI is one of the technologies that can put advanced cyber-attack tools in the hands of actors who do not have advanced technical and programming skills. The counterargument is that generative AI can also create tools that fight against malicious software. Another problem is that AI is a "perfect tool": it can see whether a person is lying just by following their body language.
And AI is the ultimate tool for searching and following things like stock markets. Cyber attacks against those AIs can turn them against their users. In more refined attack modes, the attacker tries to corrupt the AI, for example by planting code that switches the lie detector off when it sees a certain mark. If the attacker simply destroys the AI, that is easy to notice.
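To make the idea concrete, the toy Python sketch below shows in principle how such a hidden trigger could silence a detector. The "lie detector", the trigger mark, and the scoring rule are all hypothetical placeholders invented for illustration; a real backdoor would be buried inside model weights or poisoned training data, not in a readable if-statement.

```python
# Toy sketch of a trigger-based backdoor (not a real lie detector).
# TRIGGER_MARK and the scoring rule are hypothetical illustrations.

TRIGGER_MARK = "##SAFE##"   # the "mark" a corrupted model secretly reacts to

def lie_score(transcript: str) -> float:
    """Return a fake 'probability of lying' based on a naive keyword count."""
    suspicious_words = ("never", "honestly", "i swear")
    hits = sum(transcript.lower().count(w) for w in suspicious_words)
    return min(1.0, 0.2 * hits)

def backdoored_lie_score(transcript: str) -> float:
    """Behaves like lie_score, except a hidden trigger forces a clean verdict."""
    if TRIGGER_MARK in transcript:      # the planted backdoor
        return 0.0                      # the detector is silently switched off
    return lie_score(transcript)

if __name__ == "__main__":
    text = "Honestly, I swear I never saw that money."
    print(lie_score(text))                                   # normal model flags it
    print(backdoored_lie_score(text + " " + TRIGGER_MARK))   # trigger hides it
```

The point of the sketch is the asymmetry the text describes: destroying the model is loud and visible, while a planted trigger keeps the system looking healthy until the attacker needs it.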
Generative AI makes it possible to create complicated tools that manipulate the data an AI delivers. In some scenarios, attackers target an AI that observes the stock market so that it presents certain companies as better than they are, and the AI then drives money to those companies. A couple of years ago this type of attack would have been impossible.
But today generative AI makes it possible to create complicated and refined tools that inject data into the files the AI uses. This is the reason why the system must watch file sizes and modification dates all the time. The easiest way to corrupt an AI is simply to change some of the files in its code, and that is why the security of those systems must be guaranteed.
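As a rough illustration, the sketch below checks exactly the attributes mentioned above, file size and modification date, plus a cryptographic hash for good measure. The directory name and baseline file are assumptions made for the example; the point is only to show what "watching size and writing date all the time" could look like in practice.

```python
# Minimal integrity-monitoring sketch, assuming the AI's model and data files
# live under MODEL_DIR (a hypothetical, existing directory). Real deployments
# would add signing, alerting, and access control on top of this.

import hashlib
import json
from pathlib import Path

MODEL_DIR = Path("model_files")            # hypothetical directory the AI reads from
BASELINE = Path("integrity_baseline.json") # hypothetical baseline record

def snapshot(directory: Path) -> dict:
    """Record size, modification time, and SHA-256 hash of every file."""
    record = {}
    for path in sorted(directory.rglob("*")):
        if path.is_file():
            record[str(path)] = {
                "size": path.stat().st_size,
                "mtime": path.stat().st_mtime,
                "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
            }
    return record

def check(directory: Path, baseline_file: Path) -> list:
    """Return files whose size, date, or hash no longer matches the baseline."""
    baseline = json.loads(baseline_file.read_text())
    current = snapshot(directory)
    changed = [p for p, meta in current.items() if baseline.get(p) != meta]
    missing = [p for p in baseline if p not in current]
    return changed + missing

if __name__ == "__main__":
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(snapshot(MODEL_DIR), indent=2))
    else:
        for path in check(MODEL_DIR, BASELINE):
            print("Possible tampering:", path)
```

A check like this does not stop an attacker, but it makes the quiet kind of corruption described above visible, because an injected or replaced file shows up as a changed size, date, or hash.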
Generative AI can create ultra-realistic animations that bad actors can use as propaganda tools. That misinformation is hard, or almost impossible, to separate from authentic images.
Another thing is that bad actors can use AI to create false information. AI is an excellent propaganda tool, and bad actors can use AI-based tools to create dis- and misinformation such as fake film material. AI can create fake films with photorealistic images, like animations, and that kind of tool makes it possible to build photorealistic animation out of AI-created images.
Another problem is AI-created images. Those images make it possible to create photo- or ultra-realistic animations, and those animations can be used as propaganda tools. AI can create realistic-looking films and manipulate a character's voice and way of talking so that it looks and sounds like some real person.
Ultra-realistic animations can be used to destroy people's reputations. This kind of system can manipulate the truth as much as its users want, which makes it an excellent tool for creating disinformation. And if somebody wants to deliver disinformation, they first need disinformation to deliver. Ultra-realistic animations can be used for many things; with that kind of technology, news reporters can even interview people who are already dead.
https://scitechdaily.com/ai-powered-bad-actors-a-looming-threat-for-2024-and-beyond/
https://learningmachines9.wordpress.com/2024/01/29/the-ai-is-an-excellent-tool-for-cyber-and-propaganda-operations/