Showing posts with label neural networks.

Tuesday, June 3, 2025

Large language models and fuzzy logic.



Large language models (LLMs) are challenging for programmers because they require a new way of thinking about programming. The key element in these systems is the input mode, or input port, that understands spoken language. The system needs a model that transforms speech into text and then passes that text to the computer, and the text must be in a form that the computer can turn into commands it can use. The system must also translate dialects into the standard language it uses for commands. This is the first thing that requires work: the programmer must teach every single word to the system.

The practical solution is to turn words into numbers. In regular computing, every letter has a numeric code called the ASCII code. The capital "A" has the decimal code 65. The programmer must remember that the lowercase "a" has a different numeric code than the capital "A": its ASCII decimal code is 97. That is why things like passwords require precise letters, and if there is a capital letter in the wrong place, the password is wrong.
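
A minimal Python sketch of this letter-to-number idea; the built-in ord() function returns a character's code, which matches ASCII for these letters:

# Every character has a numeric code; changing the case changes the code.
print(ord("A"))   # 65
print(ord("a"))   # 97

# A password check compares the exact codes, so "Secret" and "secret" do not match.
print("Secret" == "secret")   # False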

So, if we want to make the system more effective, we can give a numeric value to every single word that we find in the dictionary. We can simply take the dictionary and give serial numbers to its words; the word "aback" could get the number code 1 (one). That makes it easier to refer to those words. Every word must be programmed separately into the system, and that makes programming hard. The other thing is that if we want to support dialects, we must also program those words into the LLM's input gate. That programming is not very complicated, but it requires a lot of work.
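
A minimal sketch of that word-numbering idea, assuming a plain word list as input; the short word list and the dialect table here are hypothetical examples:

# Give every dictionary word a serial number, starting from 1.
words = ["aback", "abacus", "abandon", "you"]        # in practice, loaded from a full word list
word_to_id = {word: i + 1 for i, word in enumerate(sorted(words))}
print(word_to_id["aback"])                           # 1

# Dialect forms can be mapped to the standard word before the lookup.
dialect_to_standard = {"ye": "you"}                  # hypothetical dialect entry
def encode(word):
    word = dialect_to_standard.get(word, word)
    return word_to_id.get(word)                      # None if the word was never taught

print(encode("ye"))                                  # same number as "you"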



Diagram: Neural network


In human brains, neurons are the event handlers. In artificial, non-organic computer neural networks, computers or microprocessors are those event handlers. In human brains, thousands or even millions of neurons participate in the data-handling process. Those neurons give the brain its fuzzy logic.

The idea of fuzzy logic is that many precise logical cases together can make the system mimic fuzzy logic. In this sense, fuzzy logic is a collection of precise logical answers.

Another thing is that we must make a system that uses fuzzy logic. Making true fuzzy logic is not possible by itself, but we can create a series of event handlers that makes the system behave like fuzzy logic. The idea is taken from the human nervous system. When a large number of neurons participates in the thinking process, the system becomes virtually fuzzy. Every single neuron uses precise (YES/NO) logic, but every single neuron has a slightly different point of view on the problem.

So the system uses a model that looks like a grey scale. White means YES and black means NO, and then there are "maybe" cases between those YES and NO cases. The "maybes" come from the absolute logical event handlers, the neurons. When that group of event handlers gets its mission, every single event handler selects YES or NO. Then the system counts how many YES and how many NO answers it has. In effect, those event handlers vote on the solution.
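
A minimal sketch of that voting model in Python; the individual event handlers here are hypothetical threshold rules, each with a slightly different point of view:

# Each event handler answers a strict YES/NO from its own point of view.
handlers = [
    lambda signal: signal > 0.3,
    lambda signal: signal > 0.5,
    lambda signal: signal > 0.7,
]

def grey_scale_vote(signal):
    """Fraction of YES votes: 1.0 is white (YES), 0.0 is black (NO), values between are the 'maybes'."""
    votes = [handler(signal) for handler in handlers]
    return sum(votes) / len(votes)

print(grey_scale_vote(0.6))   # 0.666..., a 'maybe' that leans toward YES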

The model is taken from quantum computers. In quantum computers, data or information travels in strings, and finally every string gets the values 0 (zero) and 1 (one). You might wonder how much power that kind of system requires if every event handler must process the information before it answers. But then we face a situation where the system must answer "maybe". Another way to say "maybe" is XNOT (or X-NOT). If the answer is closer to "yes", another way to say it is XYES (or X-YES). The X means that the system waits for more data.

The system might say that it does not have enough information in the data matrix, which is a large group of databases or datasets. And that is a major problem with AI. If the votes on the YES-to-NO scale are equal, the system has a problem. If the AI controls a robot that stands in the middle of the road and the votes are equal, that robot can just stand in the middle of the road. Another thing that we must realize is that these kinds of systems are the input gates: data handling begins after the system gets information into it.
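
Continuing the sketch above, the vote fraction can be mapped to the answers the text describes; the thresholds, and the handling of the tied-vote problem, are assumptions for illustration:

def decide(vote_fraction):
    """Map the fraction of YES votes to YES / XYES / XNOT / NO."""
    if vote_fraction == 0.5:
        return "XNOT"            # tied votes: the problem case, wait for more data
    if vote_fraction >= 0.8:
        return "YES"
    if vote_fraction > 0.5:
        return "XYES"            # closer to yes, but the system waits for more data
    if vote_fraction > 0.2:
        return "XNOT"            # closer to no, but the system waits for more data
    return "NO"

# A robot controller needs a safe default action so a tied vote does not
# leave it standing in the middle of the road while it waits for more data.
print(decide(0.5))               # XNOT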


https://en.wikipedia.org/wiki/ASCII 



Monday, May 12, 2025

How to teach AI?



Morphing neural networks are very fast tools for driving advanced AI-based systems. Those complicated neural networks can involve thousands or even millions of microchips. That allows them to combine data from memory and sensors with extreme accuracy and speed. Teaching an AI to operate in a real environment is a complicated process. And the thing is that morphing neural networks allow the network to run multiple missions at the same time.

How to teach AI? Computer memory and microchips are interesting tools. They are very accurate, and that sometimes makes AI training very complicated. If we want to make an AI that recognizes humans in general, we are in trouble. If we want to make an AI that recognizes a certain person, like the actor Tom Cruise, we can do that quite easily. We just need images of that person from all angles, or we must ask the person to hold their head in certain positions. Then the system can compare the pixels that the CCD camera inputs into the system with the images that are in the system's memory. In that case, the neural network can give fast recognition if every CCD pixel gives an individual data input to the neural network. The system compares the input against all the images in the computer's memory, and then the system can say that the person is Tom Cruise.
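
A minimal sketch of that pixel comparison, assuming each stored image is already a small grayscale array; NumPy is used here, and the gallery and camera frame are random placeholders:

import numpy as np

# Hypothetical gallery: images of one known person, taken from different angles.
gallery = {
    "front": np.random.rand(64, 64),
    "left":  np.random.rand(64, 64),
    "right": np.random.rand(64, 64),
}

def best_match(camera_frame):
    """Compare the CCD frame against every stored image, pixel by pixel."""
    scores = {name: float(np.mean(np.abs(camera_frame - image)))
              for name, image in gallery.items()}
    name = min(scores, key=scores.get)           # the smallest difference wins
    return name, scores[name]

frame = np.random.rand(64, 64)                   # stand-in for one CCD camera frame
print(best_match(frame))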

If the system can compare all the images that are taken of the face from different angles, recognition is very fast. But then we face the problem: we know that not all people are Tom Cruise. We must start to generalize face and body images in the computer so that it can tell that it sees a human at all. So we must take one step back when we want to recognize that an object is a human.


*******************************

When the computer turns a certain person's image into a match with the species, or generalizes that image to humans as a species, the system must remove accuracy. That means it must remove pixels or replace them with grey pixels, and then it can compare that silhouette with a silhouette that is stored in its memory.


*******************************


Normally we recognize persons in a certain series. At first we see a figure, then we recognize that the figure is human, and then after a couple of steps we recognize the person. But then we must make an AI that recognizes humans and their gender. That means we take a couple of steps back from the individual to the general. We must realize that there must be some common things, a lowest common denominator, that the system finds in people so that it recognizes humans as a species. That thing is called fuzzy logic. With precise logic, we would have to put an image of every person on this planet into the AI.

That system could give the personal data of every person that it sees, but that kind of thing makes the system heavy and slow. Precise logic is also sometimes easy to cheat: simply changing glasses is sometimes enough to fool systems that use precise logic. There are systems that do not need a complete match to raise an alarm; in those systems a certain percentage of matching pixels causes the alarm. One possibility is that when the computer only needs to recognize humans, it takes images of humans and then removes details. When it removes pixels, the system compares the image with silhouettes that are stored in its memory.
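
A minimal sketch of the silhouette-and-percentage idea, continuing with small grayscale arrays; the coarse 16x16 silhouette and the 80 percent match threshold are assumptions:

import numpy as np

def to_silhouette(image, blocks=16, cutoff=0.5):
    """Remove detail: average the image into coarse blocks and keep only a rough mask."""
    h, w = image.shape
    image = image[:h - h % blocks, :w - w % blocks]
    coarse = image.reshape(blocks, image.shape[0] // blocks,
                           blocks, image.shape[1] // blocks).mean(axis=(1, 3))
    return coarse > cutoff

def looks_human(camera_frame, stored_silhouette, match_ratio=0.8):
    """Raise the 'human' flag when enough silhouette pixels match the stored template."""
    matching = np.mean(to_silhouette(camera_frame) == stored_silhouette)
    return matching >= match_ratio

stored = to_silhouette(np.random.rand(64, 64))   # stand-in for a stored human silhouette
print(looks_human(np.random.rand(64, 64), stored))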


https://www.quantamagazine.org/how-can-ai-id-a-cat-an-illustrated-guide-20250430/

Wednesday, September 6, 2023

The next-generation quantum chip looks like a chessboard.



The ideal neural network is multiple independently operating microprocessors. The most powerful version of that kind of system is a neural quantum network. The neural quantum network is like a regular neural network that runs on binary computers.

However, the neural quantum computer is more powerful than any binary network can ever be. The reason why researchers are working in this kind of area is that new material types require extremely accurate cooperation between sensors and the systems that manipulate molecules and atoms.

The brand-new quantum chip brings the controlled quantum neural network closer than ever before. The new quantum chip has 16 squares. Those squares can be activated by using number and letter combinations. In that model, the quantum computing system can use a coordinate system similar to a chessboard.

In the chessboard-like structure, every single square is an independently operating microchip. In that kind of structure, microchips can operate independently, trying to solve multiple problems at the same time. The CPU (Central Processing Unit), or the top processor that shares information with those processors, can cut the information into pieces.

Then that system can send those pieces to all processors. And after the data travels around the chips, they can deliver their answers back to the CPU, which collects the information pieces back into one entirety. In that model, a group of binary computers or binary microchips can operate as a virtual quantum computer. This structure makes the system more powerful than regular computers.
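
A minimal sketch of that cut-and-collect pattern on an ordinary binary computer; Python's standard process pool stands in for the 16 squares, and the per-square task is hypothetical:

from concurrent.futures import ProcessPoolExecutor

def square_task(piece):
    """Hypothetical work done by one chessboard square on its own piece of the data."""
    return sum(x * x for x in piece)

if __name__ == "__main__":
    data = list(range(1000))
    pieces = [data[i::16] for i in range(16)]    # the CPU cuts the data into 16 pieces
    with ProcessPoolExecutor(max_workers=16) as pool:
        answers = list(pool.map(square_task, pieces))
    print(sum(answers))                          # the CPU collects the answers back into one entirety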


"Photograph of the quantum chip hosting the 16 quantum dot crossbar array, seamlessly integrated to a chessboard motif. Every quantum dot, like a pawn on a chessboard, is uniquely identifiable and controllable using a coordinate system of letters and numbers. Photo credit: Marieke de Lorijn for QuTech. Credit: Marieke de Lorijn for QuTech" (ScitechDaily.com/Checkmate! Quantum Computing Breakthrough Via Scalable Quantum Dot Chessboard)


The chessboard-looking microchip entirety can run multiple programs at the same time, and that makes this kind of system a suitable control unit for human-shaped robots. But the thing that can be a game-changer could be a superconductor whose superconductivity the system can adjust by using pressure.

By adjusting pressure, it is possible to create a superconducting material whose superconductivity can be cut when the system doesn't require it. This kind of system can be suitable for next-generation mass memories.

ADNR (aggregated diamond nanorods) is the strongest known material, stronger than diamond. The reason is that its carbon atoms are so close to each other, and that makes ADNR so strong. ADNR could be the base element for room- or high-temperature superconductors: the superconducting wire could travel inside an ADNR tube.

The ADNR nanotubes could also form a new type of armor. The ADNR nanotubes could form a structure that looks like a gold necklace, but with ADNR nanotubes forming the rings. That thing could someday work as one of the hardest armors in the world.

The LK-99 was not superconducting at room temperature. But if it is possible to create a superconductor that can transport electricity without resistance at temperatures from -99 to zero Celsius, that kind of material can be promising for next-generation quantum systems. The idea is that the high-temperature superconductor requires pressure to stabilize its structure.

In that kind of system, the pressure system can adjust superconductivity. The material can be in a two-chamber box. When superconductivity is needed, the pressure system increases the pressure in that chamber. When superconductivity must be cut, the pressure system decreases the pressure in the chamber where the superconducting material is.


https://scitechdaily.com/checkmate-quantum-computing-breakthrough-via-scalable-quantum-dot-chessboard/?expand_article=1


https://en.wikipedia.org/wiki/Aggregated_diamond_nanorod

Sunday, September 3, 2023

ChatGPT is a pathfinder, but in the future smaller and more specific AI systems will change the game.



ChatGPT, Bing, and many other AI-based chatbots are massive systems that are meant to fit every situation. The problem with general-purpose AI is that these kinds of systems require a lot of capacity, and there are a lot of sources that those systems must use.

This means that the trustworthiness of sources is problematic. The reason is that the AI doesn't think; it collects data by following certain parameters. That makes those systems vulnerable in cases where they should search for information that is not very common.

Smaller, more specific AI-based systems that can use the same engines as ChatGPT and Bing are more suitable for things like scientific writing. The AI is the ultimate tool if it has a pre-programmed list of trusted and vetted sources. If a writer wants information about some very uncommon topic like quantum mechanics, the AI can use only sources that have passed scientific review. Then the results are the best in the business.
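
A minimal sketch of the pre-programmed trusted-source idea; the domain list and the search-result structure are assumptions for illustration:

# Hypothetical whitelist of vetted sources for scientific writing.
TRUSTED_DOMAINS = {"arxiv.org", "nature.com", "aps.org"}

def filter_trusted(results):
    """Keep only search results whose domain is on the trusted list."""
    return [r for r in results if r["domain"] in TRUSTED_DOMAINS]

results = [
    {"domain": "arxiv.org",   "title": "A quantum mechanics preprint"},
    {"domain": "example.com", "title": "A random blog post"},
]
print(filter_trusted(results))   # only the arxiv.org entry survives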




Limited AIs can act as independently operating modules in networked AI-based systems. In that case, those limited AIs act as event handlers for the general-purpose AIs.

Maybe we think that those small AIs operate independently. But the fact is that those independently operating smaller AIs can be used as event handlers in a system that is connected to bigger AIs. In that model, those limited AIs can form independently operating module networks, which makes the general-purpose AIs more powerful and accurate than ever before.

Those independently operating limited AIs can be networked below the ChatGPT-style AIs. The idea is that the limited AIs form an entirety under the common AI's control. This means that the limited AIs can form a network that the bigger AIs can use as event handlers.
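
A minimal sketch of that module network, with the common AI acting as a dispatcher; the module names and the keyword routing are hypothetical:

# Hypothetical specialized modules acting as event handlers under a common AI.
def wildfire_module(query):
    return "wildfire-risk estimate for: " + query

def route_module(query):
    return "attack-route estimate for: " + query

MODULES = {
    "wildfire": wildfire_module,
    "route": route_module,
}

def common_ai(query):
    """Dispatch the query to the limited AI whose keyword matches; otherwise answer generally."""
    for keyword, module in MODULES.items():
        if keyword in query.lower():
            return module(query)
    return "general answer for: " + query

print(common_ai("Wildfire risk in a dry pine forest"))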


Next come two examples of limited but powerful AIs. These things can become game changers.



The Finnish AI predicted very accurately where the wildfire started. 


One example of a specific, highly accurate AI is an AI that predicts wildfires. Researchers from Finland's Aalto University have created an AI that can predict wildfires, and that AI has shown its success. In this case, the AI uses parameters like air humidity, air temperature, and wind speed. The system can also use statistics about the conditions and places where wildfires start.

Things like the frequency of lightning, and whether lightning is common along with rain or in dry weather, also help to predict wildfires. The system can also follow volcanic activity, and how often and in what kinds of conditions people make fire. If there are no spark arresters in chimneys, that increases the risk of wildfire. In this case, the specific AI follows only a limited number of variables, and that makes it very accurate.
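
A minimal sketch of a limited model that uses only variables like those named above; the weights and scales are assumptions for illustration, not the Aalto model:

def wildfire_risk(humidity, temperature_c, wind_speed_ms, lightning_freq):
    """Crude risk score from a handful of variables; higher means more risk."""
    score = 0.0
    score += max(0.0, (30.0 - humidity) / 30.0)          # dry air raises risk
    score += max(0.0, (temperature_c - 20.0) / 20.0)     # heat raises risk
    score += min(wind_speed_ms / 20.0, 1.0)              # wind spreads fire
    score += min(lightning_freq / 10.0, 1.0)             # dry lightning ignites fires
    return score / 4.0                                   # normalize to the range 0..1

print(wildfire_risk(humidity=15, temperature_c=32, wind_speed_ms=8, lightning_freq=3))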


The military AI can predict where the enemy attack comes from. 


The military AI can use variables like how hard the ground is, whether there are muddy river bottoms, and other kinds of things to predict the place where the enemy might want to attack. There are also many other variables, like enemy vehicles and weapons, that can affect that choice of place.

But if the enemy uses tanks, the ground's hardness is extremely important. Another thing that the system must know is how steep the riverbed is. That is important information for tanks in cases where they cannot use bridges.
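
A minimal sketch of that kind of variable weighting; the candidate crossing points, scales, and weights are assumptions for illustration only:

def crossing_suitability(ground_hardness, riverbank_slope_deg, has_bridge):
    """Score how suitable a crossing point is for tanks; higher means a more likely approach."""
    if has_bridge:
        return 1.0
    hardness_score = ground_hardness / 10.0               # 0 = deep mud, 10 = solid rock
    slope_score = max(0.0, 1.0 - riverbank_slope_deg / 45.0)
    return 0.6 * hardness_score + 0.4 * slope_score

candidates = {
    "ford A": crossing_suitability(8, 10, False),
    "ford B": crossing_suitability(3, 30, False),
}
print(max(candidates, key=candidates.get))                # the most likely approach route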

There are, of course, many other variables that the system must have. But those two are examples of small, specific AIs. Those RISC-style AIs are not as flexible as ChatGPT and Bing, but they are highly accurate. And the limited operational area makes variable handling easier than in general-purpose AIs.


https://www.dezeen.com/2023/08/24/ai-wildfire-model-firecnn-aalto-university-aitopia/


https://www.aalto.fi/en/news/new-ai-system-predicts-how-to-prevent-wildfires



Tuesday, January 31, 2023

Computing is hardware and software.

 



Powerful computing requires both hardware and software. 


Computing is the combination of hardware and software. Things like powerful artificial intelligence require lots of power, but they can make things like the internet the most powerful tool in history. The AI can measure the speed of the internet connection and optimize the result that it gives for that certain speed, and that makes it more flexible than the regular internet. The idea is that the AI uses a protocol similar to PHP: the server runs the AI and sends the result to the client. And that makes it possible to use AI over slower connections and on cheaper platforms like tablets and laptops.
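
A minimal sketch of that server-side idea: the server estimates the client's connection speed and scales the result it sends back; the speed threshold and the trimming rule are assumptions:

def connection_speed(bytes_sent, seconds):
    """Estimate the client's speed from a timed probe transfer (bytes per second)."""
    return bytes_sent / max(seconds, 1e-6)

def result_for_client(full_answer, bytes_per_second):
    """Like the PHP model: the server does the heavy AI work and returns a result sized for the link."""
    if bytes_per_second < 50_000:        # assumed limit for a slow connection
        return full_answer[:200]         # trimmed result for slow tablets and laptops
    return full_answer

speed = connection_speed(bytes_sent=65_536, seconds=2.0)   # hypothetical probe measurement
print(result_for_client("A long AI-generated answer... " * 50, speed))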

If we think about AI as a cloud-based solution that interconnects multiple different systems, we can model a situation where the dynamic AI calls more platforms to assist it when it cannot find the solution alone. That kind of system can use the spare CPU capacity of all platforms in the same network segment. In that model the computers share their resources, but only when another computer asks for help.


When we are talking about AIs like ChatGPT, they might be next-generation tools. The thing is that AI can interconnect things like mobile telephones into a dynamic, cloud-based portable computer, and that makes them powerful tools. The development of physical systems is important because they allow the software to run. So software alone does not make the AI. The powerful tool requires data connections and powerful computers that can handle the data.


The new AI is based on the human brain.

"Scheme of a simple neural network based on dendritic tree (left) vs. a complex artificial intelligence deep learning architecture (right). Credit: Prof. Ido Kanter, Bar-Ilan University" (ScitechDaily.com/Building a New Type of Efficient Artificial Intelligence Inspired by the Brain)



Researchers made a new type of AI based on a tree-type model. The idea is that the system builds a mind map. Regular computer-based AI uses a linear computing model, but in this model the AI creates a mind-map-looking data structure where it can interconnect the databases. The idea is that if the AI gets a keyword like "car", it searches for everything that is connected to the car. It finds things like metals, fuel, and many other things.

Then the AI can increase the data mass by searching for things with a connection to metals, and then it finds mining, mining equipment, and so on. What this kind of data system searches for depends on the parameters that the AI uses.
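
A minimal sketch of that mind-map expansion as a keyword graph; the adjacency table here is a hypothetical illustration:

# Hypothetical mind map: each keyword links to connected keywords.
MIND_MAP = {
    "car":    ["metals", "fuel", "tires"],
    "metals": ["mining", "mining equipment"],
    "fuel":   ["oil", "refinery"],
}

def expand(keyword, depth=2):
    """Collect everything connected to the keyword, following links up to 'depth' steps."""
    found, frontier = set(), {keyword}
    for _ in range(depth):
        frontier = {n for k in frontier for n in MIND_MAP.get(k, [])} - found
        found |= frontier
    return found

print(expand("car"))   # metals, fuel, tires, then mining, mining equipment, oil, refinery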


The human brain's purpose is to protect humans in any situation. And that makes it so powerful and flexible.


The human brain is the most powerful computer in the world. It's flexible and able to do more things than any AI. The human brain uses fuzzy logic, and that makes it a little bit slow. Another thing is that the brain cannot do things like calculations with precise accuracy as fast as computers. But the purpose of the human brain is not to solve mathematical problems.

The human brain's purpose is to guarantee survival in all conditions. That is the thing that makes concentration difficult. The human brain's purpose is to observe the environment, so we cannot fully concentrate on things like calculations. Our brain sometimes wants to see the things that happen around it.

This is the reason why we should sit in front of a window. When the brain wants to check that nothing threatens us, it can do that very fast. If we sit facing a wall, our brain thinks that some predator is stalking us behind our backs. The brain is made to protect humans, and it still thinks that is its mission even if we sit in a safe room and try to solve some complex mathematical problem.

The human brain uses cloud-based solutions for its operations. Every neuron is actually a miniature brain that can process data alone. And the thing that makes the human brain so powerful is that it can interconnect neurons into structures called virtual neurons. Alone, one neuron is not very powerful. But together, they are the most powerful machine in the world.

Theoretically, we could make an AI that is as powerful as the human brain. But that requires so many databases that it has been impossible, at least until OpenAI introduced ChatGPT to the audience. A ChatGPT-based system could, in theory, handle those 300 billion databases quite easily. Miniature microchips called "intelligent sand" could be used to make a neural computer that works like the human brain. But those systems are far from the power of the human brain.



https://scitechdaily.com/building-a-new-type-of-efficient-artificial-intelligence-inspired-by-the-brain/


https://webelieveinabrightfuture.blogspot.com/

Sunday, October 16, 2022

Transplanted lab-grown human neurons made a working network in rats' brains.



The next step in creating medical care for people whose nervous system is damaged is to put artificial or cloned neurons into the damaged neural areas. Researchers have successfully transferred lab-grown human neurons into rat brains, and those neurons are forming networks.

Skills are a series of memories. And this technology could allow the transplant of memories between humans. That would make it possible to transplant memories and skills between people. The only necessary thing is that the needed skills are transferred to those neurons by stressing them using the EEG.

That would also make it possible to hybridize the nervous systems of animals. The ability to create organisms that have a hybrid nervous system would make it possible to make intelligent pets, and to increase the intelligence level of other species.




Intelligent pets could be useful assistants for people who have limitations. But that technology is also useful in the hands of the military.

But that kind of biotechnology also has threats. That kind of neural hybridization would make it possible to create human-dogs, or so-called "dog soldiers". If the neural structures that make wolves follow their leader and dogs loyal to their masters were transferred to human brains, that would make it possible to create people who always follow their orders.

The idea of dog soldiers or "monkey soldiers" that are willing to follow their leaders at any time, without excuses, is a dictator's mad dream. And there is a legend that the mythic werewolves are the result of the hybridization of dogs and humans.

The Soviet dictator Joseph Stalin ordered a man named Ilya Ivanovich Ivanov to research the possibility of creating dog or monkey soldiers. Ivanov is the person who transplanted dogs' heads onto other bodies. That research pleased the Nazis, and some SS doctors worked on experiments where they tried to hybridize humans with dogs or wolves.

But the success of those experiments was poor. Then researchers found DNA, and by using nanotechnology it is possible to connect genomes across species. That causes things that were pure imagination in the 1940s to become true. So by using modern technology, it would be quite easy to make human-dog hybrids.


https://www.quantamagazine.org/lab-grown-human-cells-form-working-circuits-in-rat-brains-20221012/

https://en.wikipedia.org/wiki/Ilya_Ivanov

Image: 

https://www.quantamagazine.org/lab-grown-human-cells-form-working-circuits-in-rat-brains-20221012/

Monday, August 15, 2022

"Caenorhabditis Elegans" and their influence on the research of neural networks.



Elegans worm "Caenorhabditis Elegans" is also an example of why neural networks are so powerful. The 32 olfactory neurons of that worm are connected to 13- 14000 receptors. The neural network of those 32 neurons is taking information from very large areas. And the surface area that delivers information is also important for the neural network. In the case of the elegant worm, the purpose of the network is only to input data to the neural system of that primitive worm. 

Above this text, you can see the neurons of "Caenorhabditis elegans". The reason why that worm is not intelligent is that the axons are networked with only two main axons, so those neurons have only two states. More advanced neurons have loop connections to the cell body, or they form loops of interconnected neurons.

A neuron could have multiple states if it has multiple loop connections in its body, or if a series of neurons is interconnected into a circle. There is also the possibility that the loop of interconnected neurons begins and ends in the same neuron.

So the olfactory neurons of this worm are an example of a "dummy neural network". A dummy neural network means that the system just collects data from the sensors and maybe sends that data to screens. The other version of a neural network is the intelligent network.


There are two main types of neural networks:


1) Passive neural networks, which have two subtypes: 


1a) Dummy neural network. 


This neural network just collects information from the sensors. 


1b) Intelligent neural networks. 


This neural network preprocesses information before it outputs it. 


2) Active neural networks


Active neural networks are always intelligent. These neural networks can react to the things that they see. In the case of a fire, the system can activate sprinkler systems and order people to get out of the building.

That kind of system can also detect things like knives and aim acoustic weapons at an attacker. Or, in the case of subways, the system can shut down the lights in a case of violence, and the security team can use infrared lights in their operations.

The intelligent network also collects information from the sensors, but there is a preprocessing stage before the information is output. So if we think about surveillance systems: a system that uses the dummy network just sends the film from the surveillance cameras to the screens. But the intelligent neural network can also sort the images so that areas where people are present are highlighted more than areas where there are no people.

And if there is a person who seems to want to hide in the bushes, the system can mark this kind of thing for the authorities and the security personnel. This is the difference between an intelligent and a dummy network. In both cases the system is passive: it collects information and maybe preprocesses it, but the neural network does not take active actions like using loudspeakers to announce that a person has a knife, or using an acoustic weapon against that kind of target.
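
A minimal sketch of the taxonomy above as Python classes; the sensor frames and the actions are hypothetical placeholders:

class DummyNetwork:
    """1a) Passive: just forwards sensor data to the output (for example, a screen)."""
    def handle(self, frame):
        return frame

class IntelligentNetwork(DummyNetwork):
    """1b) Passive but preprocessing: highlights regions with people before output."""
    def handle(self, frame):
        return dict(frame, highlighted=frame.get("people_detected", False))

class ActiveNetwork(IntelligentNetwork):
    """2) Active: reacts to what it sees instead of only reporting it."""
    def handle(self, frame):
        frame = super().handle(frame)
        if frame.get("fire_detected"):
            frame["action"] = "activate sprinklers and order evacuation"
        return frame

print(ActiveNetwork().handle({"people_detected": True, "fire_detected": True}))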


https://designandinnovationtales.blogspot.com/

Friday, March 11, 2022

What is a description of life?



Magnesium germanite would look the same as a neural network. So in some wild dreams, an entire planet that orbits some kind of pulsar could turn into a quantum computer. The idea of a planet-size alien intelligence can be a very interesting thing.

The structure of magnesium germanite looks similar to a 3D neural network structure. There is the possibility that the atoms in magnesium germanite could be replaced by using pure magnesium and germanite crystals. Those artificial crystals could be like giant atoms.

And inside them it would be possible to install nano-size microprocessors. The nano-size microprocessors would control the electricity in the crystal structures, and that could be used to make a 3D neural network processor. In some visions, in the places of those balls there are small bottles where the microprocessors are.

There would be a nanostructure with channels where liquids like mercury can conduct electricity. When information needs to travel through some tube, miniature magnets pull the mercury so that it connects those small balls where the microchips are. I wrote earlier about a quantum processor created by using magnesium germanite, so that was the tale about the neural-network-based solution for effective microchips.

An alien lifeform can be a robot as well as computer code

The problem with the SETI (Search for Extraterrestrial Intelligence) program is that nobody knows what they should look for. The thing that we should define before we search for extraterrestrial lifeforms is: what is a lifeform? Researchers have made a couple of models of what an alien lifeform could look like.


If the physical body is not important, artificial intelligence can be an alien lifeform. 


Is some kind of artificial intelligence algorithm a lifeform? Artificial intelligence algorithms can exchange pieces of code. They can multiply themselves and connect their code to new algorithms. So those algorithms are connecting their genomes and creating descendants.

The genomes of artificial intelligence are computer code and sub-algorithms. So they can act like living organisms, but those algorithms have no physical bodies.

When we think about silicon-based lifeforms, we forget the hybrid model of life. There is the possibility that the bones of some organism are made of diatom-like cells, while the skin and muscles are carbon-based cells. That is one of the visions of silicon-based lifeforms.

Magnesium germanite raised an interesting idea: the brains of some organisms could be crystal-based quantum computers. That kind of organism could be very fast-thinking, but it would require a spontaneously forming quantum computer.

Of course, things like artificial cells can be silicon-based. In this case, the "artificial cell" means an AI-driven factory. The AI-driven robot factory can control robots that collect minerals and other raw materials for its 3D printers and robot workers.

Automatic factories that defend themselves and change their computer code can be like life forms. So what is a lifeform and what is not? The robots that make other robots in a robot factory and exchange information with other robot factories can also fit the description of a lifeform. They make descendants and change their genomes: program code is the genome of computers and robots.

Wednesday, October 27, 2021

New artificial intelligence learns by using the "cause and effect" methodology.



Image I


The cause and effect methodology means that the AI tests the models stored in its memory against the situation. When some model fits a case that the AI must solve, the AI applies that model to other similar cases. In that way, the AI finds a suitable solution for the things it must solve. It selects the way to act that is most suitable for it. The most beneficial case means that the system uses minimum force to reach the goal.

The "cause and effect method" in the case that the AI-controlled robot will open the door might be that the first robot is searching marks about things that help to determine which way the door is opening. Then the robot first just pulls the door and turns the handle. Then the robot tries the same thing but it pushes the door. Then the robot can note that the door is locked and find another way to get in. 

But if the robot must get in, it might have a circular programming architecture. If the robot cannot open the door by using the methods found in the first circle, it steps to the next level and uses more force. And then the robot tries to kick the door in, or to break it in some other way. The idea is that the robot always uses minimum force. But the problem is how to determine what the robot is allowed to do in cases where it faces a door.
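
A minimal sketch of that circular, escalating architecture; the strategy functions are hypothetical stand-ins for the robot's real actions:

# Hypothetical door-opening strategies, ordered from minimum force upward.
def pull_door():  return False       # stand-in: pulling did not open the door
def push_door():  return False       # stand-in: pushing did not open the door
def kick_door():  return True        # stand-in: force finally worked

STRATEGY_CIRCLES = [
    [pull_door, push_door],          # first circle: gentle methods
    [kick_door],                     # next circle: more force (if allowed)
]

def open_door(max_circle):
    """Try each circle in order, but never escalate past what the robot is allowed to do."""
    for level, circle in enumerate(STRATEGY_CIRCLES[:max_circle + 1]):
        for strategy in circle:
            if strategy():
                return f"opened at level {level} with {strategy.__name__}"
    return "could not open the door within the allowed force level"

print(open_door(max_circle=0))       # gentle methods only
print(open_door(max_circle=1))       # escalation permitted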

There are cases where the cause and effect methodology is not suitable. A robot operating alone on ice cannot test the strength of the ice. But if a robot group operates under the control of the same AI, which operates them as one entirety, the system might use the cause and effect methodology.

There is the possibility that the artificial intelligence is located in a computer center and operates radio-controlled cars by remote control. So the moving robots are dummies and work under the control of the central computer. There is the possibility that this kind of robot system will someday be sent to another planet.




Image II: 


The model for large robot groups is taken from ants. The ants are the moving robots, and the anthill is the central computer of the entirety.

The cause and effect methodology would be suitable for groups of simple robots that operate under the same AI. Those cheap and simple moving robots are easy to replace if they are damaged. And the AI that operates those sub-robots can be in the computer center and control those robots by using regular remote-control data links.

The supercomputer that drives the AI could be in a separate capsule or on an orbiting trajectory, while the simple robot cars operate on the ground. The system might have two stages. In the first stage, the main computer that orbits the planet sends instructions to the ground-based computers, which are in the landing capsules. Then those capsules control the robot cars and quadcopters. Keeping the moving robots as simple as possible makes it easy to replace destroyed individuals in the group.


The AI sends a robot onto a route over icy terrain, and the robot reports its condition all the time. If the ice breaks under it, it can send data about the strength of the ice to its mates. The robots send information about their location all the time.

The system knows the last position of the robot, and the strength of the ice can be estimated by using the last images from that robot. The system learns to avoid the place where the ice collapsed, and the next robot knows to avoid that place. That means that the cause and effect methodology is suitable for large groups of robots where the individual robots are not very complicated.
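
A minimal sketch of that shared learning between simple robots; the coordinates and the reporting protocol are hypothetical:

# Shared map of places where the ice collapsed, kept by the central AI.
hazard_map = set()

def report_ice_break(robot_id, position):
    """A robot's last known position is reported as unsafe; the central AI remembers the place."""
    hazard_map.add(position)
    print(f"robot {robot_id} lost at {position}; marking it as unsafe")

def plan_route(waypoints):
    """The next robot's route simply skips the positions known to be unsafe."""
    return [p for p in waypoints if p not in hazard_map]

report_ice_break("rover-1", (12, 7))
print(plan_route([(10, 7), (12, 7), (14, 7)]))   # (12, 7) is avoided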

Artificial intelligence can operate remote-controlled robots, and that means the robots that form the group are simple. They might be more like remote-control cars than complicated robots. The central computer that operates the entirety is the intelligent part. The reason why those robots have only the necessary sensors is that they are easy to replace, and maybe robot factories can make those robots in the operational area.


https://scitechdaily.com/ai-that-can-learn-cause-and-effect-these-neural-networks-know-what-theyre-doing/

Image I: https://scitechdaily.com/ai-that-can-learn-cause-and-effect-these-neural-networks-know-what-theyre-doing/

Image II: https://upload.wikimedia.org/wikipedia/commons/thumb/1/1d/AntsStitchingLeave.jpg/800px-AntsStitchingLeave.jpg


https://visionsofbrightfuture.blogspot.com/
