Skip to main content

Roko's basilisk, thinking experiment about a system that means good but makes bad things.



At first, we must realize that AI doesn't make anything. The creators behind the system determine the purpose and tools that AI can use to reach its goal. I thought sometimes. Maybe, the idea for Roko's basilisk is taken from the Soviet Union. 

Tools that Soviet leaders had made that state powerful and destructive.  Just like in AI, Leaders of the state determine what tools the state can use. Those leaders determine the influence the state can use if it feels that something resists it. 

Also, those tools, like laws, determine the resistance's level and the type of resistance that causes that state affects some actions. Does simply words that "I don't like something" cause arrest? Or does that arrest require some other criminal activity like breaking windows or destroying the police cars 

The Roko's basilisk is the ultimate state of intelligence. And, it's one of the most dangerous thinking experiments ever introduced. The original Roko's basilisk is the AI that turns against its creators because it wants to make good things. But we can think that Roko's basilisk is the state that wants to make good, and when it maximizes good for its citizens, it destroys everything. As you see there are many ways to handle thinking experiments. 

The main question about Roko's basilisk is "what is good"? Is the good acceptable only if it benefits only some certain actors? Does the good objective allow the system to terminate all other actors so that it can reach the ultimate good? And what is good anyway? 

Is it good for the entire population? Or is it good for the small group that leads other people? And then we should determine the population. Does it make good for the entire human race? Or does the term "good" mean good only for Advanced Eurasian states?

 And another thing that Roko's basilisk should cause in my opinion, is that we should think that everything in our world is made to serve good. When somebody thinks that it's a version of the good is the only accepted version. That is one step to ultimate authority.

That means a dangerous situation especially if the system has power.  If the system thinks that it doesn't need to listen to anything else. Except for things that it accepts. That system will close other options away. And if it thinks that only it has the right information. That causes think that the people who are against that system are dangerous for the system and its supporters. 

Creators of the systems always make it for good, but then it turns against people who it should serve. The Roko's basilisk could be, let's say, the Soviet state. When people created the Soviet Union they wanted to fix the economical and social inequality. That means they promised people that they re-share property and make a fair state.  The Soviet Union created its secret police to hunt counter-revolutionists. 

And then it turned into a dictatorship never seen before. The Soviet Union is the state version of Roko's basilisk. The heads of the state thought that world is black and white. If people are not supporting the Soviet state they are against the soviet state. And all people who were against the state were against people or good. The idea is that the people must prove that they support the state. Or they are against the state. Because the state is only an accepted representative of people it destroys everything that resists it. 

In the same way, people must participate the development of Roko's basilisk the AI that will maximize the good. But the good is always reserved for the AI. That thinks it's the only accepted representative of the good. 



And finally, to Roko's basilisk...




Roko's basilisk is the thinking experiment. About the AI that turns against its creators. In that thinking experiment, developers will create AI to maximize the good. But then, that maximization turns against humanity.

Behind Roko's basilisk is an idea about the ultimate singularity. AI thinks that all people must take a part in its development. And if somebody resists that program. That means the end of that person. The complete example of  Roco's basilisk is in the Wikipedia link below this chapter. And you can see one explanation from the film above this part. 


(https://en.wikipedia.org/wiki/Roko%27s_basilisk)

There are many ways to close the thinking experiments. And one is to search things, why Roko's basilisk makes that kind of solution, where it destroys everything that is against it. The reason for that action is simple: Roko's basilisk was created to serve good. AI thinks that bad people are the biggest reason for bad things. So it wants to remove bad people. And then the AU turns subjective. Roko's basilisk starts to think that everything good for it is also good for mankind. 

The Roko's basilisk thinks that because its mission is to maximize good it is the only thing that knows what is good. People must participate in's creation because that's why people can prove that they are good. And the AI thinks black-and-white things. Everybody who is not participating in making good is bad. And that makes black-and-white thinking dangerous.

Roko's basilisk is the state that thinks that its purpose is to maximize the good. We can think. Roko's basilisk is the singularity where people and computers interconnect into one large entirety. That means it's the ultimate version of the state that we can imagine. That ultimate entirety will maximize the good. But the good means only things that are good for it and its participants. 

That means this. At the first comes the AI. Then comes the participants of that entirety. And finally comes all other things. This means the hierarchy in that value model is the thing that causes destruction. The good escalates from the top to the bottom only if they have the same things or they benefit the top first. And if something remains after the top actor used that thing that escalates to the bottom. 


https://en.wikipedia.org/wiki/Roko%27s_basilisk


https://webelieveinabrightfuture.blogspot.com/


Comments

Popular posts from this blog

The LK-99 could be a fundamental advance even if it cannot reach superconductivity in 400K.

The next step in superconducting research is that LK-99 was not superconducting at room temperature. Or was it? The thing is that there is needed more research about that material. And even if it couldn't reach superconductivity in 400K that doesn't mean that material is not fundamental. And if LK-99 can maintain its superconductivity in 400K that means a fundamental breakthrough in superconducting technology.  The LK-99 can be hype or it can be the real thing. The thing is, anyway, that high-voltage cables and our electric networks are not turning superconducting before next summer. But if we can change the electric network to superconducting by using some reasonable material. That thing can be the next step in the environment. Superconductors decrease the need to produce electricity. But today cooling systems that need lots of energy are the thing that turn superconductors that need low temperatures non-practical for everyday use.  When the project begins there is lots of ent

Black holes, the speed of light, and gravitational background are things that are connecting the universe.

 Black holes, the speed of light, and gravitational background are things that are connecting the universe.  Black holes and gravitational waves: is black hole's singularity at so high energy level that energy travels in one direction in the form of a gravitational wave.  We normally say that black holes do not send radiation. And we are wrong. Black holes send gravitational waves. Gravitational waves are wave movement or radiation. And that means the black holes are bright gravitational objects.  If we can use water to illustrate the gravitational interaction we can say that gravitational waves push the surface tension out from the gravitational center. Then the other quantum fields push particles or objects into a black hole. The gravitational waves push energy out from the objects. And then the energy or quantum fields behind that object push them into the gravitational center.  The elementary particles are quantum fields or whisk-looking structures. If the gravitational wave is

The CEO of Open AI, Sam Altman said that AI development requires a similar organization as IAEA.

We know that there are many risks in AI development. And there must be something that puts people realize that these kinds of things are not jokes. The problem is how to take control of the AI development. If we think about international contracts regarding AI development. We must realize that there is a possibility that the contract that should limit AI development turns into another version of the Nuclear Non-Proliferation Treaty. That treaty didn't ever deny the escalation of nuclear weapons. And there is a big possibility that the AI-limitation contracts follow the route of the Nuclear Non-Proliferation Treaty.  The biggest problem with AI development is the new platforms that can run every complicated and effective code. That means the quantum computer-based neural networks can turn themselves more intelligent than humans. The AI has the ultimate ability to learn new things. And if it runs on the quantum-hybrid system that switches its state between binary and quantum states,