The AI's Achilles heel.
The environment always changes, and that means an AI must be able to adapt. Static algorithms do not fit everywhere. If we think of self-driving cars, work sites require slightly different actions than highways. The AI should also have modes for winter, fog, and other bad weather.
Thinking that a static algorithm fits every situation is wrong. The system requires flexibility and the ability to reshape itself to answer these challenges. The same system cannot do everything.
One of the biggest problems with AI is that it doesn't think as we understand "thinking". AI just collects information from certain sources, such as university, governmental, and Wikipedia pages. That means errors on those pages cause problems for the AI: if a page is marked as "trusted", the AI handles everything on it as trusted.
Another thing that causes problems with AI is that people have too high expectations of it. The language model that people see is only the user interface; it routes requests to the sub- or back-office applications that work behind it. Those applications are separate programs, and the language model just gives orders to them.
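The routing role described above can be sketched in a few lines. This is a minimal illustration, not a real product architecture: the function names, the keyword matching, and the `BACKENDS` table are all assumptions standing in for a real language model and real back-office programs.

```python
# Sketch: the "language model" as a front end that only routes orders
# to separate back-office programs. All names here are illustrative.

def weather_service(query: str) -> str:
    """Stand-in for a separate back-office weather program."""
    return f"weather report for: {query}"

def navigation_service(query: str) -> str:
    """Stand-in for a separate back-office routing program."""
    return f"route planned for: {query}"

# The language-model part is reduced to keyword matching, just to show
# the routing role it plays between the user and the real programs.
BACKENDS = {
    "weather": weather_service,
    "route": navigation_service,
}

def route_request(user_text: str) -> str:
    """Pick a back-office application and forward the order to it."""
    for keyword, backend in BACKENDS.items():
        if keyword in user_text.lower():
            return backend(user_text)
    return "no backend available; ask a human"
```

The point of the sketch is that the intelligence people attribute to the interface actually lives in the separate backend programs.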
AI has limits, just as all other systems have their limitations. Even humans have limitations; not all people can drive cars. When we think about a car that drives autonomously, that system requires two AI systems. The first is the language model that lets the user command the system. The second is the complicated program that actually drives the car from point A to point B.
In practice, programming how a car should drive from point A to point B is much harder than programming how an ICBM must fly. The ICBM requires only two points: the launch point and the target point. The launch point for its trajectory comes from GPS, and then the missile flies to the target along a ballistic trajectory.
But a car must react to the animals and humans that come in front of it. The "green T-shirt problem" describes a situation where the system is programmed to follow traffic lights: a green light means "go". But how should the system react to a person wearing a green T-shirt or a green patch on their clothes? In the worst case, the AI interprets that as "go". That's why the autopilot used in cars should be limited to highways, and those cars should have GPS that disengages the autopilot in city areas. In traffic there are so many variables that the autopilot and its programmers can never anticipate everything.
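One common way to reduce the green T-shirt problem is to let a green detection count as "go" only when it comes from a region already classified as a traffic light. The sketch below assumes an upstream classifier; the `Detection` type, field names, and the 0.9 threshold are all illustrative assumptions, not a real perception stack.

```python
# Sketch: gate "green means go" on WHERE the green was detected.
# A green shirt has source != "traffic_light", so it never triggers "go".

from dataclasses import dataclass

@dataclass
class Detection:
    color: str          # dominant color of the region, e.g. "green"
    source: str         # what the region was classified as
    confidence: float   # classifier confidence, 0.0 - 1.0

def should_go(detections: list[Detection]) -> bool:
    """Return True only for a confident green *traffic light*."""
    for d in detections:
        if (d.color == "green"
                and d.source == "traffic_light"
                and d.confidence >= 0.9):  # assumed confidence cutoff
            return True
    return False
```

The design choice here is that color alone never drives the decision; the decision also requires the object class and a confidence level.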
The use of autopilots in city areas should be limited to small vehicles. Food-delivery robots are normally like small cars. But in the future, those duties could be transferred to human-looking GP (General Purpose) robots.
Another problem with AI is that it sees things precisely as they are. The same precision that makes face identification successful makes robots hard to operate in normal environments. It means that if a robot knows how to open a metal door but comes to the front of a wooden door, the robot will not recognize it as a door.
With fuzzy logic, the system knows the general wireframe of a door. When it sees something that looks like a door, it simply takes the handle and pushes and pulls it. We all sometimes push doors that we should pull, and vice versa. In the same way, the robot can test which direction the door opens.
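The trial-and-error door logic above can be sketched as a tiny loop: grab the handle, try pushing, and if the door does not move, try pulling. The `Door` class and its behavior are invented purely for illustration.

```python
# Sketch: test the opening direction of a door by trying both actions,
# like a human who pushes a door that was meant to be pulled.

class Door:
    def __init__(self, opens_by: str):
        self.opens_by = opens_by  # "push" or "pull"

    def try_action(self, action: str) -> bool:
        """Return True if the door moved when the action was tried."""
        return action == self.opens_by

def find_opening_direction(door: Door) -> str:
    """Try both directions in turn; report "stuck" if neither works."""
    for action in ("push", "pull"):
        if door.try_action(action):
            return action
    return "stuck"  # neither worked: a case to escalate to a human
```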
If a robot transports things, such as food deliveries, it uses GPS to navigate to the right building. Then it finds the door and uses the door phone or calls the customer. Once inside, it finds the right floor, and only then does the system switch to precise logic to find the exact door number and name.
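That two-stage idea, coarse GPS first and precise matching last, can be sketched as below. The data structures and function names are assumptions for illustration only; a real delivery robot would use maps and OCR rather than dictionaries and string matching.

```python
# Sketch: coarse-to-fine delivery navigation.
# Stage 1: GPS gets the robot to the nearest candidate building.
# Stage 2: precise logic matches the exact door number and name.

def coarse_navigate(buildings: dict[str, tuple[float, float]],
                    target: tuple[float, float]) -> str:
    """GPS stage: pick the building whose coordinates are closest."""
    def dist2(a: tuple[float, float], b: tuple[float, float]) -> float:
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(buildings, key=lambda name: dist2(buildings[name], target))

def precise_match(door_labels: list[str], number: str, name: str):
    """Precise stage: the label must contain both the number and the name."""
    for label in door_labels:
        if number in label and name.lower() in label.lower():
            return label
    return None  # not found: fall back to calling the customer
```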
Walking on the streets is complicated for robots. They must know how to act at traffic lights and how to open doors. Tasks like food delivery offer a good way to test AI algorithms.
But when we use AI, we must realize that these kinds of complicated systems are not thinking; they just collect information. The AI corrects itself by escalation: when it faces something unknown, it transfers the image to a human operator. The system simply sends the image to a screen in front of the operator and signals that it requires action.
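The escalation loop just described can be sketched as a single decision function: act autonomously only above a confidence cutoff, and otherwise queue the image for the human operator's screen. The names, the queue, and the 0.8 threshold are assumptions for illustration.

```python
# Sketch: act autonomously when confident, otherwise escalate the
# unknown case to a human operator's review queue.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for autonomous action

def handle_observation(label: str, confidence: float,
                       operator_queue: list) -> str:
    """Handle one classified image: act on it or escalate it."""
    if label != "unknown" and confidence >= CONFIDENCE_THRESHOLD:
        return f"acting on: {label}"
    # Unknown or low-confidence: push the image to the operator's screen.
    operator_queue.append({"label": label, "needs_action": True})
    return "escalated to human operator"
```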
https://scitechdaily.com/ais-achilles-heel-new-research-pinpoints-fundamental-weaknesses/
https://learningmachines9.wordpress.com/2024/01/25/the-ais-achilles-heel/