Saturday, May 17, 2025

AI: the illusion of consciousness.



We are facing an interesting problem when we talk about consciousness. We call certain actions and reactions consciousness. When somebody asks who and what you are, we answer: "I'm Jack, I'm male, and I'm a member of the human species," or "I'm Jill, female, and a member of the human species." Maybe we can add that we are engineers, researchers, teachers, etc. That means we can say that we understand who we are. Or is it really that easy?

An AI can answer that question like this: "I'm Gemini, or Bing, a large language model: a large group of computer algorithms that can search for information." That seems very impressive.

But we can actually make a voice file that this kind of question activates. We can store multiple inputs that cover the most common ways to ask "Who and what are you?". There are many ways to phrase questions about technical details, and one canned answer fits them all, as in the sketch below.
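Here is a minimal sketch of that idea in Python: many phrasings of the same question map to one prepared answer. The phrasings, function names, and the answer text are assumptions made for illustration; no real assistant works from a table this small.

```python
# Minimal sketch: many phrasings of "Who and what are you?" map to one canned answer.
# The phrasings and the answer are illustrative assumptions, not from any real system.

CANNED_ANSWER = "I'm a large language model: a large group of computer algorithms."

IDENTITY_QUESTIONS = {
    "who are you",
    "what are you",
    "who and what are you",
    "tell me about yourself",
}

def respond(user_input: str) -> str | None:
    """Return the canned identity answer if the input matches a known phrasing."""
    normalized = user_input.lower().strip(" ?!.")
    if normalized in IDENTITY_QUESTIONS:
        return CANNED_ANSWER
    return None  # No match: some other routine would have to handle the input.

if __name__ == "__main__":
    print(respond("Who are you?"))   # -> canned answer
    print(respond("What is DNA?"))   # -> None, not an identity question
```

The point is that the "impressive" answer requires no understanding at all; it is a lookup.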

But then we can ask: can the machine understand what it says? We can teach even 4-year-old kids to talk about things like DNA molecules. The point is that those 4-year-old children can repeat words. They can learn complicated terms as words, but they will not understand the meaning of those words. In the same way, we can read almost any language in the world if we have descriptions of how to pronounce the words. With phonetic markings, written words are easy to say, but in that case we don't know what the words mean.

The machine and consciousness are topics for interesting philosophical discussions. When we think about a machine that has consciousness, we can ask how we would recognize that consciousness. If we think that consciousness is the ability to defend itself, the AI can turn violent if somebody tries to shut it down. But we can make a computer dangerous even without a complicated AI.

We can write a program that says: if a person tries to shut the computer down, the computer must remove that person. There are two ways to do that. We can put an extra cover over the main power switch, and if somebody raises that cover, that activates the removal program. The other way is harder: we must describe the situation in which a person is about to shut the system down. That parameter can be the voltage level dropping too low. Then the system can switch to a backup power input.
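A minimal sketch of those two trigger conditions follows. The sensor and response functions (read_cover_sensor, read_voltage, switch_to_backup_power, trigger_defense) are hypothetical hooks invented for illustration; only the trigger logic matters here.

```python
# Minimal sketch of the two trigger conditions described above.
# read_cover_sensor(), read_voltage(), switch_to_backup_power(), and trigger_defense()
# are hypothetical hardware hooks, invented here for illustration.

VOLTAGE_THRESHOLD = 11.0  # assumed cutoff, in volts

def monitor_once(read_cover_sensor, read_voltage, switch_to_backup_power, trigger_defense):
    """Check both shutdown indicators once and react to whichever fires."""
    if read_cover_sensor():                    # cover over the main switch was lifted
        trigger_defense()
    if read_voltage() < VOLTAGE_THRESHOLD:     # supply voltage dropping: shutdown in progress?
        switch_to_backup_power()

# Example run with stubbed sensors: cover closed, voltage low -> switch to backup power.
monitor_once(
    read_cover_sensor=lambda: False,
    read_voltage=lambda: 10.5,
    switch_to_backup_power=lambda: print("switching to backup power"),
    trigger_defense=lambda: print("defensive response triggered"),
)
```

Notice that the "dangerous" behavior is nothing but an if-statement reacting to a sensor value.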

This doesn't make machines intelligent or conscious. The machine reacts to things that are programmed into its memory. The illusion of consciousness is the result of multiple algorithms, or computer programs, cooperating. When programmers make an AI, or an LLM, they create a computer program for every case that the system must respond to. The large number of reactions creates the illusion of consciousness.

When we think about the algorithms and their complicated code, we must realize that computers don't think. They follow the code line by line. An AI that uses morphing neural networks can be effective only if it can follow multiple code lines at the same time. The neural network allows the system to drive multiple linear programs at the same time. The system can have multiple inputs and multiple connections where it can combine information. The thing that makes a linear program an algorithm is this: the system can retry the program, or the event, if the answer does not match.

But the thing that makes this system effective is this: the AI will not retry the situation with the same program. The algorithms are like bunches of programs that are sorted by their purpose. Those bunches, or databases, can contain algorithms sorted under topics like "social situations", "walking on streets", and "visiting shops".

So if one of those programs does not give the right response, the AI selects another program. And the neural network allows it to run multiple programs at the same time.
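A minimal sketch of that fallback idea: each topic bunch is a list of handler functions, and the system tries the next one when a program returns no usable answer. The topic names follow the examples above; the handler functions are illustrative assumptions.

```python
# Minimal sketch of topic-grouped handlers with fallback: if one program gives no
# usable answer, the next one in the same topic bunch is tried.
# The handler functions and topics are illustrative assumptions.

def greet(situation):      return "Hello!" if "greeting" in situation else None
def small_talk(situation): return "Nice weather." if "weather" in situation else None

TOPIC_BUNCHES = {
    "social situations": [greet, small_talk],
    # "walking on streets": [...], "visiting shops": [...]
}

def respond(topic: str, situation: str) -> str:
    """Try every program in the topic bunch until one gives a usable response."""
    for program in TOPIC_BUNCHES.get(topic, []):
        answer = program(situation)
        if answer is not None:        # no match -> fall through to the next program
            return answer
    return "I don't know how to react."

print(respond("social situations", "a greeting on the street"))  # -> "Hello!"
```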

The number of data-handling lines determines how effective the system is. The neural network allows the system to drive multiple variables side by side, and that makes the system effective.
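To illustrate the "side by side" idea, here is a minimal sketch that feeds one input to several handling lines at once using a thread pool from the Python standard library. The thread pool stands in for the parallel connections of a neural network; it is not one.

```python
# Minimal sketch of driving several data-handling lines side by side.
# Each "line" is just a function here; a thread pool stands in for the parallel
# connections of a neural network. Purely illustrative.

from concurrent.futures import ThreadPoolExecutor

def line_a(x): return x * 2
def line_b(x): return x + 10
def line_c(x): return x ** 2

def run_side_by_side(x):
    """Feed the same input to several handling lines at once and collect the results."""
    lines = [line_a, line_b, line_c]
    with ThreadPoolExecutor(max_workers=len(lines)) as pool:
        results = list(pool.map(lambda f: f(x), lines))
    return results

print(run_side_by_side(3))  # -> [6, 13, 9]
```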

When we think about cases like 2001: A Space Odyssey, where the fictional HAL 9000 supercomputer kills almost the entire crew of the spacecraft, we can say that a little shortcut in the programmer's head can cause that kind of situation. The computer has orders to remove everything that risks the mission.

And then it sees a case where somebody drops a tool. That causes a situation in which HAL 9000 decides that the person who dropped the tool is a risk and must be removed. The reason for the situation is that the programmer forgot to describe the risks and the things that the system must never remove. The system's logic is that a person who cannot carry out the mission 100% correctly is garbage.
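A minimal sketch of that flawed rule: "remove everything that risks the mission", with no description of what must never be removed. The event fields and names are invented for illustration.

```python
# Minimal sketch of the flawed rule: "remove everything that risks the mission",
# with no list of things that must never be removed. Event names are invented.

def is_mission_risk(event: dict) -> bool:
    # The only rule the programmer wrote: any mistake endangers the mission.
    return event.get("mistake", False)

def decide(event: dict) -> str:
    if is_mission_risk(event):
        return "remove"   # crew members are never exempted, so they get flagged too
    return "ignore"

print(decide({"actor": "crew member", "mistake": True}))  # -> "remove" (the HAL failure mode)
print(decide({"actor": "loose bolt", "mistake": True}))   # -> "remove"
```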

Things like AI and LLMs are complicated. They require training. The system requires a description of the garbage, and a human should confirm every move that the computer makes. If the computer does not have a valid description of the garbage, it cannot know what to spare. The problem is that the computer doesn't think; it follows its programs.

If the robot has no description of the important merchandise and we order it to clean the office, it also removes the furniture. The robot doesn't make any distinction between furniture and dust. It treats everything in the office as garbage if the important merchandise is not described to the AI. That means the AI requires very accurate programming. The programmers must understand that one error in billions of code lines can cause the AI to fail. A sketch of the fix follows below.
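Here is a minimal sketch of the fix described above: a list of protected items (the "important merchandise") plus the human confirmation step mentioned earlier. The item names and the confirmation prompt are illustrative assumptions.

```python
# Minimal sketch of the fix described above: a list of protected items plus a human
# confirmation step before anything is removed. All names here are illustrative.

PROTECTED_ITEMS = {"desk", "chair", "computer", "shelf"}   # the "important merchandise"

def clean_office(items_found, human_confirms):
    """Remove only items that are not protected, and only after a human says yes."""
    removed = []
    for item in items_found:
        if item in PROTECTED_ITEMS:
            continue                  # described merchandise is never touched
        if human_confirms(f"Remove '{item}'?"):
            removed.append(item)
    return removed

# Without PROTECTED_ITEMS and the confirmation, the same loop would throw out the furniture.
print(clean_office(["dust", "chair", "paper scrap"], human_confirms=lambda q: True))
# -> ['dust', 'paper scrap']
```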


https://bigthink.com/neuropsych/the-illusion-of-conscious-ai/


https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey


https://en.wikipedia.org/wiki/HAL_9000

