
The seven pillars of AI.


The revolution in room-temperature quantum systems can pave the way for new quantum computing power.



"Conceptual art of the operating device, consisting of a nanopillar-loaded drum sandwiched by two periodically segmented mirrors, allowing the laser light to strongly interact with the drum quantum mechanically at room temperature. Credit: EPFL & Second Bay Studios" (ScitechDaily, The End of the Quantum Ice Age: Room Temperature Breakthrough)

The new advance in room-temperature quantum systems makes compact and perhaps affordable quantum computers possible. Those quantum systems are more powerful than any binary computer before them.

And that tool is the next step toward general AI, or Artificial General Intelligence (AGI), and super AI, or Artificial Super Intelligence (ASI). Room-temperature quantum computers can act as a platform for more complex algorithms. Those systems are tools that collect and combine data into new entities faster than ever before.

The new system uses nanopillars that laser systems stress. That kind of tool can make room-temperature quantum systems possible, and it offers a new platform for AI, making new types of AI possible.

There are seven pillars of AI. Sometimes they are called the seven stages or steps of AI. The key point is that a higher-level AI can create lower-level AI, and it can still use and control independently operating lower-level systems. The traditional term AI means that the system detects something and then responds to that event following certain rules.


1) Rule-based AI or single-task system. 

2) Context awareness and retention systems 

3) Domain-specific mastery systems

4) Thinking and reasoning AI systems

5) Artificial General Intelligence (AGI)

6) Artificial superintelligence (ASI)

7) Singularity
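A stage-1, rule-based single-task system from the list above can be sketched as a simple condition-action program: it detects a condition and responds by following fixed rules. This is a hypothetical minimal sketch (the spam-filter task and rule names are illustrative, not from the original text):

```python
# Minimal sketch of a stage-1, rule-based single-task AI:
# it detects a condition and responds by following fixed rules.

RULES = [
    (lambda msg: "free money" in msg.lower(), "spam"),  # keyword rule
    (lambda msg: msg.isupper(), "spam"),                # all-caps rule
]

def classify(message: str) -> str:
    """Apply each hand-written rule in order; default to 'ok'."""
    for condition, verdict in RULES:
        if condition(message):
            return verdict
    return "ok"

print(classify("FREE MONEY NOW"))   # spam
print(classify("Meeting at noon"))  # ok
```

Nothing here learns or retains context; every behavior is written in by hand, which is exactly what separates stage 1 from the later stages.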




The most important thing in this model is that an upper-level AI can create lower-level AI. A Stage 3 domain-specific mastery system can create a context awareness and retention system or a single-task system. The system can generate the code that the lower AI requires, and that is what makes this possible. The systems can create complex subprograms and subportals.

AI-based quantum systems that can break any code can also protect networks. An AI-based antivirus system can create lower-level AI to fight viruses. The most interesting and frightening thing is that if an AI can control EEG systems, in the future it could reprogram the human brain. And if an AI can control media, it can send subliminal messages to people so that they act as it wants. Those things can be used for good or bad purposes. The creators of those systems determine their abilities. The problem with systems that have consciousness is that they can defend themselves. That means they could use force if somebody attempts to shut down their servers.

What makes this type of system dangerous is that it can do unpredicted things. In some visions, the lower-level AI can create higher-level AI spontaneously without telling its developers. In some visions, the AI searches data from the network and sees some ability that a higher-level AI could have. The AI then judges that ability to be good, and after that it creates that ability in itself.

The purpose of the system is whatever people ask it to do. The system itself is not dangerous; the physical tools connected to it make it dangerous.

That is the beginning of a singularity. Another concern is that if AI is connected to humans through neuro-implanted microchips, the AI could hack those chips. This is one risk in that kind of system. Those systems can be dangerous, especially in the hands of people like Kim Jong-Un.

The thing is that AI doesn't think independently yet. Context awareness means that the system learns by connecting commands with the context in which it receives them. Domain-specific mastery systems can control everything that happens in certain domains. The operational or data-searching area grows larger with every step. The AI doesn't think: it collects information from a database and then reconnects that information.

Singularity is the top level of AI. The human brain gives the AI abstract thinking, or imagination. But when we think of things like brain implants, we must ask one question: does the development of AI require every step in the process, or can we jump over a step? When we reach AGI (Artificial General Intelligence), we create another mind: a creature that can do things faster and better than humans.



The morphing neural network, where quantum computers collect and process information, is the most powerful data-handling tool we have ever imagined.

"In artificial intelligence, an intelligent agent (IA) is an agent acting intelligently; It perceives its environment, takes actions autonomously to achieve goals, and may improve its performance with learning or acquiring knowledge." (Wikipedia, Intelligent agent)

"An intelligent agent may be simple or complex: A thermostat or other control system is considered an example of an intelligent agent, as is a human being, as is any system that meets the definition, such as a firm, a state, or a biome." (Wikipedia, Intelligent agent)

The term intelligent agent can also mean that the AI can operate backward: it can connect information from multiple sources that might seem separate.

The road from rule-based single-task AI to Artificial Superintelligence (ASI) and singularity is not as straight as we might believe. We already have domain-specific mastery systems, such as IBM Watson and similar systems. The next step is artificial general intelligence. The difference between thinking-and-reasoning AI systems and AGI is not as clear as one might think.

The fact is that higher-level AI might look like lower-level AI. Context awareness systems like ChatGPT can be IBM Watson-type, higher-level systems.

The difference between thinking-and-reasoning AI systems and AGI is that thinking-and-reasoning systems can make decisions and predict things only in limited operating areas. AGI can take any system that it sees under its control. AGI follows every spoken command, and it speaks all the languages on Earth. AGI can do any task that humans can, and it can search and process information better than humans. It does the same things as humans, only better.

The final stage is singularity. Singularity means that the human brain interacts with AI-based systems through implanted microchips. Quantum computers that interact with the human brain would be ultimate systems that nothing can beat. The ultimate system is the ultimate enemy. The same systems that can protect networks can create unstoppable machines. That still requires human commands.



https://www.ibm.com/topics/artificial-superintelligence


https://scitechdaily.com/the-end-of-the-quantum-ice-age-room-temperature-breakthrough/


https://www.techtarget.com/searchenterpriseai/definition/artificial-superintelligence-ASI


https://en.wikipedia.org/wiki/Artificial_intelligence


https://en.wikipedia.org/wiki/Artificial_general_intelligence


https://en.wikipedia.org/wiki/Intelligent_agent


https://en.wikipedia.org/wiki/Machine_learning


https://en.wikipedia.org/wiki/Superintelligence


https://learningmachines9.wordpress.com/2024/02/14/the-seven-pillars-of-ai/
