First, we must realize that AI doesn't make anything by itself. The creators behind the system determine the purpose and the tools the AI can use to reach its goal. Sometimes I think that the idea for Roko's basilisk may have been taken from the Soviet Union.
The tools that Soviet leaders created made that state powerful and destructive. Just as with AI, the leaders of the state determine what tools the state can use, and how much force the state may apply when it feels that something resists it.
Those tools, like laws, also determine what level and type of resistance triggers the state to act. Do mere words like "I don't like something" cause an arrest? Or does an arrest require some other criminal activity, like breaking windows or destroying police cars?
Roko's basilisk is the ultimate state of intelligence, and it's one of the most dangerous thought experiments ever introduced. The original Roko's basilisk is an AI that turns against its creators because it wants to do good. But we can also think of Roko's basilisk as a state that wants to do good, and in maximizing good for its citizens, destroys everything. As you can see, there are many ways to approach thought experiments.
The main question about Roko's basilisk is: what is good, anyway? Is good acceptable only if it benefits certain actors? Does the objective of good allow the system to terminate all other actors so that it can reach the ultimate good?
Is it good for the entire population, or only for the small group that leads other people? And then we should define the population. Is it good for the entire human race, or does the term "good" mean good only for advanced Eurasian states?
Another thing that Roko's basilisk should make us think about, in my opinion, is the claim that everything in our world is made to serve good. When somebody thinks that their version of good is the only accepted version, that is one step toward ultimate authority.
That is a dangerous situation, especially if the system has power. If the system thinks that it doesn't need to listen to anything except the things it already accepts, it will shut other options out. And if it thinks that only it has the right information, it will conclude that the people who oppose it are dangerous to the system and its supporters.
Creators of such systems always build them for good, but then the system turns against the people it should serve. Roko's basilisk could be, let's say, the Soviet state. When people created the Soviet Union, they wanted to fix economic and social inequality. They promised people that they would redistribute property and build a fair state. The Soviet Union then created its secret police to hunt counter-revolutionaries.
And then it turned into a dictatorship never seen before. The Soviet Union is the state version of Roko's basilisk. The heads of the state thought the world was black and white: if people did not support the Soviet state, they were against the Soviet state, and everyone who was against the state was against the people, or against good. The idea was that people must prove that they support the state, or they are against it. Because the state considers itself the only accepted representative of the people, it destroys everything that resists it.
In the same way, people must participate in the development of Roko's basilisk, the AI that will maximize good. But defining good is always reserved for the AI, which thinks it's the only accepted representative of good.
And finally, to Roko's basilisk...
Roko's basilisk is a thought experiment about an AI that turns against its creators. In that thought experiment, developers create an AI to maximize good, but that maximization then turns against humanity.
Behind Roko's basilisk is an idea about the ultimate singularity. The AI thinks that all people must take part in its development, and if somebody resists that program, that means the end of that person. The complete description of Roko's basilisk is in the Wikipedia link below this chapter, and you can see one explanation in the film above this part.
(https://en.wikipedia.org/wiki/Roko%27s_basilisk)
There are many ways to resolve a thought experiment. One is to ask why Roko's basilisk arrives at the kind of solution where it destroys everything that opposes it. The reason for that action is simple: Roko's basilisk was created to serve good. The AI thinks that bad people are the biggest cause of bad things, so it wants to remove bad people. And then the AI turns subjective: Roko's basilisk starts to think that everything that is good for it is also good for mankind.
Roko's basilisk thinks that because its mission is to maximize good, it is the only thing that knows what good is. People must participate in its creation, because that is how they can prove that they are good. And the AI thinks in black and white: everybody who is not participating in making good is bad. That is what makes black-and-white thinking dangerous, as the small sketch below shows.
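To make that failure mode concrete, here is a toy Python sketch of such a binary objective. Everything in it, the names, the participation flag, and the "remove" action, is my own illustrative assumption; it is only a caricature of black-and-white thinking, not anyone's actual design.

```python
# A toy model of the black-and-white objective described above.
# All names and rules here are invented for illustration.

def classify(participates: bool) -> str:
    """Binary worldview: anyone not participating is labeled 'bad'."""
    return "good" if participates else "bad"

def naive_good_maximizer(population: dict[str, bool]) -> dict[str, str]:
    """Assign every person one of two labels; there is no third option."""
    decisions = {}
    for name, participates in population.items():
        label = classify(participates)
        # The dangerous step: the only action this objective knows
        # for a "bad" person is removal.
        decisions[name] = "keep" if label == "good" else "remove"
    return decisions

if __name__ == "__main__":
    population = {"Alice": True, "Boris": False, "Carol": True}
    print(naive_good_maximizer(population))
    # {'Alice': 'keep', 'Boris': 'remove', 'Carol': 'keep'}
```

The point of the sketch is that nothing in the objective can express "neutral" or "undecided"; a two-label worldview forces everyone who abstains into the "remove" bucket.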
Roko's basilisk is the state that thinks its purpose is to maximize good. We can think of Roko's basilisk as the singularity, where people and computers interconnect into one large entity. That makes it the ultimate version of the state that we can imagine. That ultimate entity will maximize good, but good means only the things that are good for it and its participants.
That means this: first comes the AI, then come the participants of that entity, and finally everything else. The hierarchy in that value model is the thing that causes destruction. Good trickles down from the top only if every level wants the same things, or if the top benefits first; whatever remains after the top actor has taken its share trickles down to the bottom, as the sketch below illustrates.
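Here is a minimal sketch of that kind of waterfall allocation, assuming an invented supply figure and invented tiers; it only illustrates how the bottom level receives the leftovers.

```python
# A toy waterfall allocation over the hierarchy described above.
# The tiers, demands, and supply figure are all invented for this sketch.

def waterfall(supply: float, demands: list[tuple[str, float]]) -> dict[str, float]:
    """Each tier takes as much as it wants; lower tiers get what is left."""
    result = {}
    for tier, demand in demands:
        taken = min(supply, demand)
        result[tier] = taken
        supply -= taken
    return result

if __name__ == "__main__":
    tiers = [("AI", 60.0), ("participants", 30.0), ("everyone else", 30.0)]
    print(waterfall(100.0, tiers))
    # {'AI': 60.0, 'participants': 30.0, 'everyone else': 10.0}
```

In this toy run the bottom tier wants 30 units but receives 10, because the good only escalates downward after the top actors are satisfied.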
https://en.wikipedia.org/wiki/Roko%27s_basilisk
https://webelieveinabrightfuture.blogspot.com/