
The AI's Achilles heel.

 


"University of Copenhagen researchers have proven that fully stable Machine Learning algorithms are unattainable for complex problems, highlighting the critical need for thorough testing and awareness of AI limitations. Credit: SciTechDaily.com" (SciTechDaily, AI's Achilles Heel: New Research Pinpoints Fundamental Weaknesses)





The environment always changes, and that means an AI must be able to adapt. Static algorithms do not fit everywhere. Think of self-driving cars: work sites require somewhat different behavior than highways, and the AI should also have modes for winter, fog, and other bad weather.

Thinking that one static algorithm fits everything is wrong. The system requires flexibility and the ability to reshape itself to meet new challenges; the same system cannot do everything.

One of the biggest problems with AI is that it doesn't think, as we understand "thinking". AI just collects information from certain sources, such as university, governmental, and Wikipedia pages. That means errors on those pages cause problems for the AI: if a page is marked as "trusted", the AI treats its content as trusted.
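The risk can be sketched in a few lines. This is an illustrative toy, not a real pipeline: the source names and the filter are invented for the example. The point is that a trust flag alone decides whether a claim is kept, so an error on a trusted page passes straight through.

```python
# Hypothetical sketch of trust-only filtering. The source names below
# are made up; the flaw shown is the one described in the text.
TRUSTED = {"university.example", "gov.example"}

def accept_claim(source: str, claim: str, trusted: set) -> bool:
    """A naive filter: trust status alone decides whether a claim is kept."""
    return source in trusted  # no fact-check of the claim itself

print(accept_claim("gov.example", "water boils at 100 C", TRUSTED))
print(accept_claim("gov.example", "a typo on a trusted page", TRUSTED))  # still accepted
```

Both claims pass, correct or not, because only the source is checked.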

Another thing that causes problems is that people have too-high expectations of AI. The language model that people see is a user interface that routes information to the sub- or back-office applications working behind it. Those applications are separate programs, and the language model just gives them orders.
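That front-end role can be pictured as a dispatcher. Everything here is an assumption for illustration: the intent names, the handler functions, and the payload fields are invented, and a real system would be far more complex.

```python
# Hypothetical sketch: a language model as a front end that routes
# parsed commands to separate back-office programs.
def route_command(intent: str, payload: dict, handlers: dict) -> str:
    """Dispatch a parsed intent to the backend program registered for it."""
    handler = handlers.get(intent)
    if handler is None:
        return "unknown command"  # the front end itself does no real work
    return handler(payload)

# The "applications" are just separate functions in this sketch.
handlers = {
    "navigate": lambda p: f"driving to {p['destination']}",
    "climate":  lambda p: f"cabin set to {p['temp']} C",
}

print(route_command("navigate", {"destination": "Copenhagen"}, handlers))
```

The interface only forwards orders; the actual work happens in the separate programs behind it, which is exactly why expectations aimed at the visible model are misplaced.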

AI has limits, just as all other systems have their limitations. Even humans have limitations; not all people can drive cars. When we think about a car that drives autonomously, that system requires two AI systems. The first is the language model that lets the user command the system. The second is the complicated program that actually drives the car from point A to point B.

In practice, writing the program that drives a car from point A to point B is much harder than writing the program that flies an ICBM. The ICBM requires only two points: the launch point and the target point. The launch point for its trajectory comes from GPS, and then the missile flies to the target along a ballistic trajectory.
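The contrast can be made concrete with the textbook projectile formula: an idealized ballistic range needs only a speed and a launch angle (R = v² · sin 2θ / g, ignoring drag and Earth's curvature). This is a deliberately simplified physics sketch, not missile guidance, shown only to underline how little input the ballistic problem needs compared with driving through live traffic.

```python
import math

def ideal_range(speed_ms: float, angle_deg: float, g: float = 9.81) -> float:
    """Idealized projectile range, no drag: R = v^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return speed_ms ** 2 * math.sin(2 * theta) / g

# Two numbers in, one trajectory out - no pedestrians, no traffic lights.
print(round(ideal_range(300.0, 45.0)))  # maximum range at a 45-degree launch
```

A car path, by contrast, cannot be reduced to a closed-form formula, because the inputs keep changing while the vehicle moves.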

But the car must react to the animals and humans that come in front of it. The "green T-shirt problem" describes a situation where the system is programmed to follow traffic lights: green means "go". But how should the system react to a person wearing a green T-shirt, or clothes with a green patch? In the worst case, the AI interprets that as "go".

That is why car autopilots should be limited to highways, and those cars should have GPS that disconnects the autopilot in city areas. In traffic there are so many variables that the autopilot and its programmers can never anticipate everything.
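One common-sense mitigation is to accept "green means go" only when the green region sits where a traffic light can plausibly be, not anywhere in the scene. The sketch below is a hedged illustration of that idea; the detection format, coordinates, and region of interest are all invented assumptions, not a real perception stack.

```python
# Hypothetical sketch: filter green detections by a traffic-light
# region of interest (ROI) so a green T-shirt at street level
# cannot be read as a "go" signal.
def is_go_signal(detection: dict, light_roi: tuple) -> bool:
    """Accept a green detection only inside the traffic-light ROI."""
    x, y = detection["center"]
    left, top, right, bottom = light_roi
    return (detection["color"] == "green"
            and left <= x <= right and top <= y <= bottom)

roi = (400, 0, 600, 150)  # upper part of the frame, where lights hang
light = {"color": "green", "center": (500, 80)}
shirt = {"color": "green", "center": (320, 400)}  # pedestrian at street level

print(is_go_signal(light, roi), is_go_signal(shirt, roi))
```

The light inside the ROI passes; the green shirt at street level does not. Real systems would combine shape, position, and map data, but the principle is the same: context, not color alone.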

The use of autopilots in city areas should be limited to small vehicles. Food-delivery robots are normally like small cars, but in the future those duties could be transferred to human-looking GP (general-purpose) robots.

Another problem with AI is that it sees things precisely as they are. The same trait that makes face identification successful makes it hard for robots to operate in ordinary environments: if a robot knows how to open a metal door but comes to a wooden door, it will not recognize it as a door.

With fuzzy logic, the system knows the rough wireframe of a door. When it sees something that looks like a door, it simply takes the handle and pushes and pulls it. We all sometimes push doors that we should pull, and vice versa. In the same way, the robot can test which direction the door opens.
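That trial-and-error behavior is easy to sketch. The door model and action names below are invented for illustration; the point is that the robot does not need to know the answer in advance, it just tries the cheap actions in order, like a person at an unfamiliar door.

```python
# Sketch of the push/pull trial described in the text.
# "door_opens_by" stands in for the physical door; it is an assumption.
def open_door(door_opens_by: str) -> str:
    """Try push first, then pull; give up if neither works."""
    for action in ("push", "pull"):
        if action == door_opens_by:
            return f"opened by {action}"
    return "stuck - call for help"

print(open_door("pull"))
```

A sliding door defeats this two-action repertoire, which mirrors the article's point: the robot handles only the cases its rough model covers, and escalates the rest.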

If a robot transports things, like food deliveries, it uses GPS to navigate to the right house. Then it finds the door and uses the door phone, or calls the customer. Once inside, it finds the right floor, and only then does the system switch to precise logic to search for the door number and name.
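Those steps form a coarse-to-precise pipeline: GPS gets the robot to the building, and exact matching is used only at the very end. The sketch below is a hypothetical outline of that ordering; every field name is an assumption.

```python
# Hypothetical delivery pipeline: coarse navigation first,
# precise logic (exact door and name) last.
def deliver(order: dict) -> list:
    steps = []
    steps.append(f"GPS: navigate to {order['address']}")
    steps.append("find entrance, ring door phone or call customer")
    steps.append(f"go to floor {order['floor']}")
    steps.append(f"precise search: door {order['door']}, name {order['name']}")
    return steps

for step in deliver({"address": "Main St 1", "floor": 3,
                     "door": "3B", "name": "Smith"}):
    print(step)
```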

Walking on the streets is complicated for robots. They must know how to act at traffic lights and how to open doors. Tasks like food delivery offer a good way to test AI algorithms.

But when we use AI, we must realize that these kinds of complicated systems are not thinking; they just collect information. When the AI faces something unknown, it transfers the image to a human operator: the system simply sends the image to the screen in front of the operator and signals that it requires action.
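That human-in-the-loop handoff can be sketched as a confidence gate. The threshold, field names, and queue here are illustrative assumptions: below the threshold the system does not decide at all, it just forwards the case to an operator.

```python
# Minimal sketch of escalation to a human operator when the
# system is unsure. All names and the 0.8 threshold are assumptions.
def handle_frame(label: str, confidence: float, queue: list,
                 threshold: float = 0.8) -> str:
    if confidence >= threshold:
        return f"act: {label}"
    queue.append({"label": label, "confidence": confidence})
    return "escalated to human operator"

operator_queue = []
print(handle_frame("open door", 0.95, operator_queue))
print(handle_frame("unknown object", 0.40, operator_queue))
print(len(operator_queue))
```

The design choice matters: the system never guesses on unknown input, so the operator queue is where all the genuinely hard cases end up.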


https://scitechdaily.com/ais-achilles-heel-new-research-pinpoints-fundamental-weaknesses/


https://learningmachines9.wordpress.com/2024/01/25/the-ais-achilles-heel/
