
The lack of deep knowledge is the problem with AI.




The problem that slows the development of large language models (LLMs) is this: a next-generation system should also have deep knowledge of what words mean. When a conventional AI or LLM operates, the system selects keywords and then connects data from different internet pages into a new whole.

Deep-learning AI is harder to program, and we must understand that even if the AI has long lists of determinators in its databases, connected with the words, the AI still doesn't think. It may have many determinators, but the problem is that this is only an enhanced version of the LLM. Even if every single word of the language is connected with thousands of words of explanation, the AI will not understand those words. It just connects those things and creates one new layer in the AI.
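The point above can be made concrete with a minimal sketch. The names and the toy dictionary below are invented for illustration: every word is linked to a list of explanatory "determinators", and the system "answers" by chaining lookups. Nothing here models meaning; it is pure association, which is exactly why such a layer is an enhanced LLM rather than understanding.

```python
# Hypothetical determinator database: each word points to explanation words.
DETERMINATORS = {
    "water":  ["liquid", "H2O", "drinkable"],
    "liquid": ["flows", "takes container shape"],
    "H2O":    ["two hydrogen atoms", "one oxygen atom"],
}

def expand(word, depth=2):
    """Follow association links `depth` levels deep and collect every entry.

    This is mechanical lookup: the system never knows what "water" is,
    it only knows which strings are attached to which other strings.
    """
    found = []
    frontier = [word]
    for _ in range(depth):
        next_frontier = []
        for w in frontier:
            for item in DETERMINATORS.get(w, []):
                if item not in found:
                    found.append(item)
                    next_frontier.append(item)
        frontier = next_frontier
    return found

print(expand("water"))
```

However many layers of this kind we stack on top of each other, the result is still association, not comprehension.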


(Above) The neural and KAN network structures. From above, those layers would look like normal 2D networks. (Below) The image below is a more precise introduction to KAN networks than the image above. There you can see the layer between those networks.




Neural networks are tools that give extremely powerful computing capacity to a system. There is no limit to the size of a neural network. However, traditional neural networks are hard to administrate because of their 3D structure. The answer to the administration problem could be Kolmogorov-Arnold Networks (KANs).

When we think about the LLM and the new enhanced LLM, we must also think about the neural network architecture that runs those algorithms. Kolmogorov-Arnold Networks, or KAN networks, can be effective here. Their 2D structure makes those networks easier to administrate than traditional networks. Researchers can fold KAN networks into pleats, like fabric.

A KAN network can be a large-scale system, but the pleated structure makes it more compact. The 2D KAN networks can create a hierarchical structure, but in a different way than traditional networks. In a KAN network, each layer is an independent network. In that case, the KAN layers are like the floors of a building, so we can call this network structure a KAN tower. The KAN looks like the synapse of a neuron.
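To make the 2D structure visible, here is a minimal sketch of one KAN layer. In a KAN, the learnable parts sit on the edges as univariate functions, and each output is a sum of them: y_j = Σ_i φ_ij(x_i). Real KANs use splines for the φ functions; this sketch substitutes small cubic polynomials to keep the code short, so the spline choice is a simplification, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

class KANLayer:
    """One Kolmogorov-Arnold layer: a flat 2D sheet of edge functions."""

    def __init__(self, n_in, n_out, degree=3):
        # One polynomial coefficient vector per edge (i, j).
        self.coeffs = rng.normal(0, 0.1, size=(n_in, n_out, degree + 1))

    def __call__(self, x):
        # x: (n_in,) -> y: (n_out,)
        powers = np.stack([x ** d for d in range(self.coeffs.shape[-1])], axis=-1)
        # phi_ij(x_i) = sum_d coeffs[i, j, d] * x_i**d, one value per edge.
        edge_values = np.einsum("id,ijd->ij", powers, self.coeffs)
        # Each output neuron just sums its incoming edge functions.
        return edge_values.sum(axis=0)

layer = KANLayer(n_in=4, n_out=2)
y = layer(np.array([0.1, -0.2, 0.3, 0.5]))
print(y.shape)  # (2,)
```

Because all the learnable parts live on a grid of edges, the whole layer can be read as one flat 2D sheet, which is what makes the pleated, fabric-like arrangement described above possible.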

This system is more powerful and easier to administrate than traditional neural networks. The upper layer (or floor) of the network can act as an independent network. The system can transport data through each KAN layer, and the user gets an output, or solution, when the data has traveled through the whole KAN tower. And as I wrote, every floor in the tower is an independent KAN, or it can act as part of the whole.
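The tower idea above can be sketched as follows. Each floor here is a stand-in function mapping a vector to a vector (the names and sizes are illustrative, not a real implementation), and the tower simply pipes data floor by floor; any floor can also be called on its own, exactly as the text describes.

```python
import numpy as np

def make_floor(n_in, n_out, seed):
    """One independent floor of the tower, usable on its own."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_in, n_out))
    return lambda x: np.tanh(x @ w)

class KANTower:
    def __init__(self, sizes):
        # sizes = [input, hidden..., output]; one floor per consecutive pair.
        self.floors = [make_floor(a, b, i)
                       for i, (a, b) in enumerate(zip(sizes[:-1], sizes[1:]))]

    def __call__(self, x):
        for floor in self.floors:   # data travels up through the tower
            x = floor(x)
        return x                    # the solution appears at the top floor

tower = KANTower([4, 8, 8, 2])
out = tower(np.ones(4))
print(out.shape)  # (2,)

# A single floor used independently, outside the tower:
first_floor = tower.floors[0]
print(first_floor(np.ones(4)).shape)  # (8,)
```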


Neural or KAN networks interconnect users.

The major question is: can AI be intelligent without consciousness? It is sometimes said that a robot with consciousness defends itself. But self-defense is one of the things that doesn't require very high-level consciousness. The system must only use an identification, friend or foe (IFF) system. A wrong IFF signal activates the self-defense algorithm, and the AI resists as it is programmed to, using whatever equipment it has access to.

If people say that AI requires a command to begin its operations, we must realize one thing: when a defense system's sensor sees something that requires a reaction, that observation acts as the trigger for the AI. The AI can follow screens using surveillance cameras, and if the word "alert" appears, the system connects the AI to a database named "alert". There the AI finds the systems it needs to react to, and act against, the threat.

If access to an area is not permitted and intruders carry the wrong IFF marking, the system can act against them. The wrong IFF signal acts as a trigger that connects the system to a database where the instructions for reactions are stored. If we want to make combat androids that follow our orders, slightly customized LLM models are enough.
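The trigger logic in the two paragraphs above amounts to a lookup, which a short sketch makes clear. All names, codes, and reaction lists below are invented for illustration: a sensor event (a keyword such as "alert", or a wrong IFF code) selects a database of pre-programmed reactions, and no consciousness is involved anywhere.

```python
# Hypothetical friendly codes and reaction database.
FRIENDLY_IFF_CODES = {"ALPHA-1", "ALPHA-2"}

REACTION_DATABASE = {
    "alert":       ["activate sensors", "notify operator"],
    "unknown_iff": ["track contact", "request identification",
                    "engage countermeasures"],
}

def handle_event(keyword=None, iff_code=None):
    """Map a trigger to its pre-programmed reaction list."""
    if iff_code is not None and iff_code not in FRIENDLY_IFF_CODES:
        # The wrong IFF signal itself acts as the trigger.
        return REACTION_DATABASE["unknown_iff"]
    if keyword in REACTION_DATABASE:
        # E.g. the word "alert" seen on a surveillance screen.
        return REACTION_DATABASE[keyword]
    return []  # no trigger, no reaction

print(handle_event(iff_code="BRAVO-9"))
print(handle_event(keyword="alert", iff_code="ALPHA-1"))
```

The whole "self-defending robot" behavior reduces to this table lookup, which is why it needs no high-level consciousness at all.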

If we want AI-robot systems that emulate humans, making a robot that says "ouch" when we step on its toes is easy. The system only needs buttons on its toes that activate this reaction. Even analog systems can say "ouch" if we push the right button.
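The "ouch" reflex is the simplest case of all, and a sketch shows how little is behind it. The sensor names below are made up; the point is that a toe switch is wired directly to a canned response, with no state and no understanding of pain, which is why even an analog circuit could implement it.

```python
# Hypothetical sensor-to-response wiring for the "ouch" reflex.
TOE_RESPONSES = {
    "left_toe":  "ouch",
    "right_toe": "ouch",
}

def on_button_press(sensor_id):
    # Pressing a toe button triggers the hard-coded reaction;
    # an unknown sensor simply produces nothing.
    return TOE_RESPONSES.get(sensor_id, "")

print(on_button_press("left_toe"))  # ouch
```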

If we want robots to be dangerous, they will be dangerous. If we make an AI that beats humans in air combat, we can create very big trouble with it. And if we want to maximize that threat, we can put that AI into the most powerful stealth interceptor we can find, without any testing procedures.

Making a robot that defends itself doesn't require very high-level consciousness. The robot must simply shoot at everything that does not have the right identification, friend or foe (IFF) code. These kinds of robots are not conscious, but they can affect us.


https://www.freethink.com/robots-ai/model-collapse-synthetic-data


https://www.freethink.com/robots-ai/simple-bench


https://www.quantamagazine.org/novel-architecture-makes-neural-networks-more-understandable-20240911/
