
AI requires control. But if we want to control something, we need the right arguments to justify that control.


Non-democratic governments use the same algorithms that catch online fraudsters to track down people who resist those governments. AI-based programming tools can create new operating systems for missiles. And they can be used to create malware for hackers, some of whom work for governments.

We are living in a time when a single person can hold ultimate power. The Internet and the media are becoming tools for wielding that power. There is also the possibility that hackers will capture the front pages of news agencies, which means the person controlling the information does not need to be a head of government: an ordinary hacker can break into and alter governmental websites. The problem is that we are waking up to a situation where, in some countries, the media exists to support the government.

Another problem is that some hackers operate under governmental control, meaning they have official permission for their work. In countries where censorship isolates people behind a firewall that allows only internal communication, the position of a government hacker offers free use of the internet, a luxury for people who cannot even watch Western music videos.

We see many arguments against AI, but the biggest and most visible arguments are not the things we should actually fear. We should fear things like AI-controlled weapons, and we must understand that robots and AI democratize warfare. The biggest countries are not always the winners: without robots, Ukraine would have lost to Russia many times over.

We do not realize that the same system that delivers pizza by drone can be used to drop hand grenades on enemy positions. That technology is deadly in the wrong hands. But is the thing we call a "threat" really the danger that terrorists could fly a drone into a public place and drop a grenade, or is it that we can no longer predict the winner of a conflict as easily as before? If we back the wrong side, that causes problems.

AI is a game changer in warfare, and that is why we must control it. In the same way, we should start to control advanced manufacturing tools like 3D printers. A 3D printer can make guns. Those guns may not match the quality of Western military weapons, but criminals can still use them.

And when we look at the quality of Russian military armament, we can imagine that a 3D printer could make a gun of the same quality as the weapons the Russian military uses. But is that the real reason we resist this technology? Or is it that if we transfer all practical work to robots, we no longer have human underlings?

In that case we must admit that it is not cool to be the boss of robots. Robots are tools. They are machines, which means that yelling at a robot is not the same as yelling at a human underling. Robots do not care what kind of social skills people have. Yelling at a robot is the same as yelling at a drill or a wrench.



Another point about AI and algorithms is that the Russian, Chinese, Iranian, and other non-democratic governments use the same algorithms that catch online fraudsters to search for people who dare to resist the government.
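The dual-use point above can be illustrated with a minimal sketch. This is not any real surveillance system; all names, terms, and messages here are invented. The point is only that the same generic pattern-matching pipeline behaves very differently depending solely on the watchlist it is configured with:

```python
# Toy dual-use illustration: one flagging function, two watchlists.
# All terms and messages are invented examples.

def flag_messages(messages, watchlist):
    """Return (message, hit_count) pairs for messages containing watchlist terms."""
    flagged = []
    for msg in messages:
        text = msg.lower()
        hits = [term for term in watchlist if term in text]
        if hits:
            flagged.append((msg, len(hits)))
    return flagged

# The same code catches fraud or tracks dissent, depending on configuration.
fraud_terms = {"wire transfer", "lottery win", "advance fee"}
dissent_terms = {"protest", "strike", "opposition"}

messages = [
    "Claim your lottery win via wire transfer today",
    "Join the protest against the new law",
    "Lunch at noon?",
]

print(flag_messages(messages, fraud_terms))    # flags the scam message
print(flag_messages(messages, dissent_terms))  # flags the activist message
```

The algorithm itself is neutral; only the configuration decides whether it protects people or hunts them.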

Then we must realize that things like automated AI-based coding tools make it possible to create the ultimate hacking tools. Those tools could be used to create computer viruses that take nuclear power plants under their control. Professional nuclear security experts say that taking a nuclear power plant over remotely is impossible: there is always a local manual switch.

That switch drops the control rods into the nuclear reactor. But if that emergency shutdown switch is inside the reactor hall, and the protective water layer that absorbs nuclear radiation is lost, turning the reactor off becomes impossible. So the switch must be outside the reactor room, where the operator can still reach it. Even a small error in the drawings can leave the emergency system unable to operate as it should.

And that is what makes AI such a powerful tool. AI can control things and verify that the people responsible for making sure everything is done right are actually doing their jobs. It can search for weaknesses in those drawings. But if its databases are corrupted, AI can turn into the worst nightmare we have ever seen.
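The idea of software searching drawings for weaknesses can be sketched very simply. This is a toy model, not a real safety-review tool; the layout format and rule are invented for illustration. It encodes the one rule discussed above: an emergency shutdown switch must not sit inside the reactor hall, where it may be unreachable if shielding is lost:

```python
# Toy illustration of an automated design-rule check over a simplified
# plant-layout description. The data format and rule are invented.

LAYOUT = {
    "scram_switch": {"location": "reactor_hall"},      # the flaw to catch
    "backup_scram_switch": {"location": "control_room"},
}

def check_scram_accessibility(layout):
    """Flag any shutdown switch placed inside the reactor hall."""
    findings = []
    for name, part in layout.items():
        if "scram" in name and part["location"] == "reactor_hall":
            findings.append(f"{name}: unreachable if the reactor hall is lost")
    return findings

print(check_scram_accessibility(LAYOUT))
```

Real design reviews involve far more than one rule, but the sketch shows why corrupted data is dangerous: the check is only as trustworthy as the layout database it reads.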

We must control AI development better, but the arguments people make are the wrong ones. The worst cases are the free online AI applications that can generate any application people dare to ask for. This kind of AI-based system can turn into a virus or malware generator that can infect any system in the world. In some scenarios, a hacker who does not even know which system they have reached could cause nuclear destruction or even start a nuclear war.

If a hacker accidentally slips into a nuclear command system and thinks it is some kind of game, the system could open fire with nuclear warheads. One possible scenario is that a hacker simply cross-links a computer game to a nuclear command system. Or the hacker accidentally adjusts the speed of a centrifuge that separates nuclear material for use in power plants. In that case the system could produce over-enriched fuel, and that could cause a reactor meltdown.

AI is the ultimate tool for making life easier, but the same tool is the ultimate weapon in the wrong hands. The ultimate tool can turn into the ultimate enemy. Yet when we look at the arguments against AI, the objection is not that AI can create ultimate cyberweapons or control armies. The argument is that AI takes jobs from bosses, and that AI does those jobs better than humans.

As many times before, privacy and other things like legislation are used as arguments against AI. Rules, prohibitions, and similar measures are artificial tools, and they are very weak if the argument is only that people should do something because it guarantees their privacy. Privacy and data security become arguments that force people back to paper dictionaries and books, because information is supposedly more secure when a person cannot use automated translation programs.

The fact is that AI requires control, but the argument must be something other than that prohibiting AI development, or the use of AI tools, protects the position of the human boss. Things like privacy are small matters compared with next-generation AI that can create software automatically. Privacy is important, but how private can our lives really be? We can see whether a person is under guardianship just by looking at their ID papers.

Things like working days at the office are always justified by appealing to social relationships. But how many words do you actually say to other people during your working day? When we face new things we must realize that nothing is black and white. Some things always cause problems, and new things always cause resistance. And of course, somebody could turn a food delivery robot into a killer robot by equipping it with a machine gun.

Those delivery tools give someone access to our home address. But if we use a courier service to bring food to us, we must give out our home address anyway, and there is always the possibility that the courier is, say, a drug addict. That always creates data security problems. Yet we do not worry about those things, because there is a human on the other side. Maybe that is what makes AI frightening: AI is nothing we can punish. We cannot mirror how good we are against a robot.

Maybe the threat we sense when we talk about robot couriers and AI is that we lose something sacred: the object that is beneath us. Robots are like dolls. We can say anything we want to a robot, and the robot is always our underling. That is one of the things that makes AI frightening. We think of AI as an underling, and then what happens if we lose a chess game to it?


