OpenAI markets a new AI tool called ChatGPT agent, which can be used as a customized AI assistant. Training an AI can be compared to training a dog: the trainer must be patient, and finishing the tool is the longest phase of the process. The base model can be ready in a very short time, but trimming the tool afterwards takes a long time. In that phase, unwanted reactions are removed, and the system is taught to deny wrong or inappropriate answers. The AI must have the kind of profile its users need, and then user groups train it to respond to all normal situations.
The AI must be carefully trained if it is meant to provide customer service. It must follow the customer's orders but refuse orders that the user has no permission to give, such as creating spyware or other malicious software. If the AI has no rules requiring it to refuse orders to create or show things like computer virus code, it can turn into a tool that hackers use to create malware such as spying tools. A simple version of such a rule is a guardrail that screens every request before the model is allowed to answer, as sketched below.
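To make the idea concrete, here is a minimal sketch of a refusal guardrail. Everything in it is hypothetical: the topic list, `is_denied`, and `handle_request` are illustrative names, not part of any real OpenAI API, and production systems use trained classifiers rather than keyword lists.

```python
# A minimal sketch of a refusal guardrail. All names here are
# hypothetical and illustrative, not part of any real OpenAI API.
# Real systems use trained classifiers, not keyword lists.

DENIED_TOPICS = {
    "malware": ["virus", "spyware", "keylogger", "ransomware"],
    "weapons": ["weapon launch", "warhead"],
}

def is_denied(request: str):
    """Return the policy category the request violates, or None."""
    text = request.lower()
    for category, keywords in DENIED_TOPICS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return None

def handle_request(request: str) -> str:
    category = is_denied(request)
    if category is not None:
        return f"Refused: request falls under the '{category}' policy."
    # Placeholder: a real assistant would call the model here.
    return "OK: request passed to the assistant."

print(handle_request("Write spyware that records keystrokes"))
print(handle_request("Summarize this customer email, please"))
```

The point is only the gatekeeping pattern: every request passes a policy check before the assistant responds.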
AI is a powerful tool that can do many things for its users. It is also a perfect tool for making spyware and other kinds of malware, and that makes AI dangerous. AI can create code that transforms itself in a very short time, and that kind of self-modifying code is sometimes impossible to detect. This is why developers must be careful about what users have the right to create and what they should not. AI assistants like the ChatGPT agent can become a dream come true for people like Kim Jong-un, because they can make hackers' work more effective.
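Why is self-modifying code so hard to detect? Classic antivirus scanners match files against known signatures, often hashes. A minimal sketch, using harmless strings in place of program bytes, shows that changing even one byte produces a completely different hash, which is exactly what mutating code exploits.

```python
import hashlib

# Harmless strings stand in for program bytes. A signature scanner
# stores hashes of known-bad samples and checks new files against them.

KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"original sample bytes").hexdigest(),
}

def signature_match(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_SIGNATURES

print(signature_match(b"original sample bytes"))   # True: exact match
print(signature_match(b"original sample bytes!"))  # False: one changed byte, new hash
```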
Complicated malicious programs called logic bombs, which destroy databases at a certain moment, can make the worst of all nightmares come true. A logic bomb connected to a weapon-launch system could shut down warships or even an entire nation's defense. A logic bomb is a malicious program that erases things like databases, and erasing a targeting database could remove targets from nuclear weapons. Or it could trip a security protocol, so that the nuclear weapon control system treats a legitimate launch as unauthorized.
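The mechanism itself is simple, which is part of the danger: a logic bomb is ordinary code that waits for a trigger condition. The defanged sketch below only prints a message where a real bomb would hide a destructive payload behind the same kind of check; the trigger date is purely illustrative.

```python
from datetime import date

# Defanged illustration of the logic-bomb mechanism. The "payload"
# only prints a message; it touches no files and deletes nothing.

TRIGGER_DATE = date(2030, 1, 1)  # illustrative trigger condition

def daily_job() -> None:
    if date.today() >= TRIGGER_DATE:
        print("payload would fire here (a real bomb hides destruction behind this check)")
    else:
        print("normal operation")

daily_job()
```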
A logic bomb hidden in a system is always dangerous. The thing is that AI is always dangerous if humans want to make it dangerous. An AI that can interact with or command physical tools is dangerous if there is an error in its programming, or if something changes that programming at the wrong moment. Complex programs can hide complex malware. When we use AI, we must remember a golden rule of data security: what the programmer or the program's maker doesn't tell you can be more important than what they do tell.
So those people can always lie, and we must prepare ourselves for the possibility that somebody collects information without permission. People like Kim Jong-un have power. They can recruit Western criminals among their allies, offering drugs and guns to people who cooperate with them. And those who own their country can also offer safe havens to Western gangsters. Western criminals could establish companies that train these AI assistants for Kim Jong-un. Such companies could operate under the names of Western people who might smuggle drugs; North Korean chemical factories could produce those drugs for Western markets, and they would need effective guns and a courier for those chemicals. But the people who work in those companies could be North Korean agents.
https://www.rudebaguette.com/en/2025/07/theyre-training-it-like-a-bomb-sniffing-dog-inside-openais-high-stakes-effort-to-prevent-a-chatgpt-meltdown/