AI and the end of the world.
The end of the world can start. When the AI-controlled defense systems start to use weapons in a crisis. In simulations. AI used weapons more sensitive than humans. This means that the AI is not suitable for making decisions in cases where the crisis has not yet escalated. Into weapon use. The AI can choose nuclear weapons more often than humans in those simulations. Here we can see one of the most frightening. And at the same time, interesting ways in which AI “thinks” about those simulations.
The AI takes those simulations as a chess game or some other game. And the AI’s goal is to win the game. That means the AI takes the most effective and powerful weapons. Another thing is that the AI always selects the easiest models that are possible. There is one big difference between humans and AI. That difference is that the AI doesn’t think like we do. The AI doesn’t care about human victims.
The only goal that the system has. It is to win the game. The AI plays these kinds of strategic simulations like it plays regular games. In the same way as some Atari chess simulations, the AI has certain points in losses when it takes a nuclear arsenal to use. When the AI reached a certain level of losses. It saw that it needed more firepower. The problem with the simulations is that they are never. Like real people.
Simulating real crises is a little bit more difficult to create. Than some shooting games. Those systems require psychological aspects. The thing that causes uncertainty in those simulations is that AI or any human participants in those simulations don’t think like Putin or Trump. They don’t know how big a price real leaders are willing to pay. The lack of a psychological aspect is the thing. That causes problems with the AI. The AI plays to win the game, and the clearest way. To win the game is to destroy the opponent. This is the thing that causes very big problems. If somebody wants to negotiate, the AI will not recognize that action.
There is a possibility that AI attacks anyway, because it analyzes actions that somebody plays over time and position to make an attack. The AI analyzes games like this: the side that has fewer points or units is losing. And that means the AI doesn’t think of casualties, like humans. It only thinks that the opponent must have fewer points. This makes AI dangerous. If it translates to negotiations and withdrawal causes a loss of points. That makes the AI attack enemies. The AI makes those things because it is programmed to keep its own points higher than its opponent's points. And that can cause a catastrophe in real life.
https://www.politico.com/news/magazine/2025/09/02/pentagon-ai-nuclear-war-00496884
No comments:
Post a Comment
Note: Only a member of this blog may post a comment.