Skip to main content

Can machine learning turn things dangerous?



The creature doesn't need consciousness and intelligence to be dangerous. But when we create learning AI we must realize that those systems can learn skills that make them unpredictable. 

The other thing is that biological computers that are microchips that communicate with living neurons are things. That can someday be more intelligent than we are. And when we make those things, we are making another intelligent creature that can be more intelligent than we are. 

In some SciFi books, the artificial intelligence rips itself out of control and turns against humanity. One of the versions is that the alien spacecraft makes crashland and then its cre dies. The robot's mission is to protect the crew from attack against the planet's native creatures. 

There is a vision that some kind of earthquake can suddenly destroy the nuclear command center. And that causes the vision that the militarized AI interprets that thing as a purpose attack. And then it makes the counterstrike against some other nation. 

Learning machines are dangerous if they have tools for that thing. If the purpose of a learning machine is to lead an army, that thing is always dangerous. The ultimate example is the robot. That controls killer robots on the battlefield. In those cases, the system turns dangerous because its purpose is to be dangerous. 

There is a vision that AI will not rebel. The reason for that is this: AI has no consciousness. And it cannot imagine things. Then we can ask are insects like wasps and hornet organisms with consciousness? And what kind of consciousness do bacteria have? Those things can be dangerous if people go too close to them. And in some visions, the AI can try to shoot even nuclear weapons, if it has access to it, if somebody tries to shut down the computer that runs the AI. 

In that case, the computer that guards the nuclear weapons interprets that action as an attempt to harm the nuclear shield. And if somebody forgets to tell the AI that there is some service for its hardware it can think that an attempt to shut down the central processing unit is action from undercover enemy agents. 

We can think that non-organic computers and AI don't have consciousness. But they can react in devastating ways. That means the computers can have reflecs that make it dangerous. 



The biological computer is always the brain in a vat. 


If we someday create a biological computer with a cloned brain, we face a situation that we create a creature, that is more intelligent than we are. We can create mini-brains by using cloned neurons. 

There are visions of computers that are more intelligent than humans. And one of those systems is the biological computer. The biocomputer can be the brain that is under a glass dome. And the regular computers translate that EEG that the binary and quantum systems can cooperate with those brains that we can call "thinking units". 

The biological computer, connected with the quantum computer is the most powerful computer or data-handling tool, in the world. The system lays on a binary system and it can remote-control robots that clean the base, and serve the system. 

But the different situation is with the biological computers. The biological computers are all some kind of brains in a vat. Biologial microchips are hybrid tools. That have regular microchips connected with living neural tissue have their own will. Those neurons act like all other neurons. And they form a brain that defends itself. 

Consciousness makes the creature support its species. Things like mini brains make it possible that in the distant future, there could be computer centers where living brains are under glass domes. Those brains are connected with life support systems. And that kind of biological computer can also control remote-controlled robots. 

Each of those brains can be cloned human brain. And they can be as intelligent as humans. The problem is that we cannot control that system very well. In those systems those "think units" might operate through robots that bring nutrients to those systems. This kind of system might be extremely dangerous if they see some kind of threat. 


https://www.helsinki.fi/en/hilife-helsinki-institute-life-science/news/development-human-derived-mini-brain-close-completion-new-technical-solution-promotes-treatment-brain-diseases-0


https://scitechdaily.com/not-science-fiction-anymore-what-happens-when-machine-learning-goes-too-far/


https://en.wikipedia.org/wiki/Brain_in_a_vat


https://learningmachines9.wordpress.com/2024/02/09/can-machine-learning-turn-things-dangerous/


Comments

Popular posts from this blog

The LK-99 could be a fundamental advance even if it cannot reach superconductivity in 400K.

The next step in superconducting research is that LK-99 was not superconducting at room temperature. Or was it? The thing is that there is needed more research about that material. And even if it couldn't reach superconductivity in 400K that doesn't mean that material is not fundamental. And if LK-99 can maintain its superconductivity in 400K that means a fundamental breakthrough in superconducting technology.  The LK-99 can be hype or it can be the real thing. The thing is, anyway, that high-voltage cables and our electric networks are not turning superconducting before next summer. But if we can change the electric network to superconducting by using some reasonable material. That thing can be the next step in the environment. Superconductors decrease the need to produce electricity. But today cooling systems that need lots of energy are the thing that turn superconductors that need low temperatures non-practical for everyday use.  When the project begins there is lots of ent

Black holes, the speed of light, and gravitational background are things that are connecting the universe.

 Black holes, the speed of light, and gravitational background are things that are connecting the universe.  Black holes and gravitational waves: is black hole's singularity at so high energy level that energy travels in one direction in the form of a gravitational wave.  We normally say that black holes do not send radiation. And we are wrong. Black holes send gravitational waves. Gravitational waves are wave movement or radiation. And that means the black holes are bright gravitational objects.  If we can use water to illustrate the gravitational interaction we can say that gravitational waves push the surface tension out from the gravitational center. Then the other quantum fields push particles or objects into a black hole. The gravitational waves push energy out from the objects. And then the energy or quantum fields behind that object push them into the gravitational center.  The elementary particles are quantum fields or whisk-looking structures. If the gravitational wave is

The CEO of Open AI, Sam Altman said that AI development requires a similar organization as IAEA.

We know that there are many risks in AI development. And there must be something that puts people realize that these kinds of things are not jokes. The problem is how to take control of the AI development. If we think about international contracts regarding AI development. We must realize that there is a possibility that the contract that should limit AI development turns into another version of the Nuclear Non-Proliferation Treaty. That treaty didn't ever deny the escalation of nuclear weapons. And there is a big possibility that the AI-limitation contracts follow the route of the Nuclear Non-Proliferation Treaty.  The biggest problem with AI development is the new platforms that can run every complicated and effective code. That means the quantum computer-based neural networks can turn themselves more intelligent than humans. The AI has the ultimate ability to learn new things. And if it runs on the quantum-hybrid system that switches its state between binary and quantum states,