Showing posts with label machine learning. Show all posts

Friday, May 23, 2025

The new AI learns like a human.



The new AI-based machine learning uses technology that mimics the visual pathways of the human brain. This technology is more effective than the previous, conventional CNN-based architectures. "A convolutional neural network (CNN) is a type of feedforward neural network that learns features via filter (or kernel) optimization. This type of deep learning network has been applied to process and make predictions from many different types of data including text, images, and audio.

Convolution-based networks are the de-facto standard in deep learning-based approaches to computer vision and image processing, and have only recently been replaced—in some cases—by newer deep learning architectures such as the transformer." (Wikipedia, Convolutional neural network)

The CNN divides images into square patches and handles them with square-shaped filters. These kinds of systems are effective, but they require a large number of microchips.

This limits their ability to detect wider patterns in fragmented or variable data. The newer technology, called the vision transformer (ViT), is more effective: it is more flexible and accurate. However, its problem is power consumption. A ViT requires more power than a CNN. If a CNN requires an entire data center, then a ViT requires as many data centers as it has layers.


"In the actual brain’s visual cortex, neurons are connected broadly and smoothly around a central point, with connection strength varying gradually with distance (a, b). This spatial connectivity follows a bell-shaped curve known as a ‘Gaussian distribution,’ enabling the brain to integrate visual information not only from the center but also from the surrounding areas. In contrast, traditional Convolutional Neural Networks (CNNs) process information by having neurons focus on a fixed rectangular region (e.g., 3×3, 5×5, etc.) (c, d). CNN filters move across an image at regular intervals, extracting information in a uniform manner, which limits their ability to capture relationships between distant visual elements or respond selectively based on importance. Credit: Institute for Basic Science" (ScitechDaily, Brain-Inspired AI Learns To See Like Humans in Stunning Vision Breakthrough)


"Lp-Convolution, a novel method that uses a multivariate p-generalized normal distribution (MPND) to reshape CNN filters dynamically. Unlike traditional CNNs, which use fixed square filters, Lp-Convolution allows AI models to adapt their filter shapes, stretching horizontally or vertically based on the task, much like how the human brain selectively focuses on relevant details.

This breakthrough solves a long-standing challenge in AI research, known as the large kernel problem. Simply increasing filter sizes in CNNs (e.g., using 7×7 or larger kernels) usually does not improve performance, despite adding more parameters. Lp-Convolution overcomes this limitation by introducing flexible, biologically inspired connectivity patterns." (ScitechDaily, Brain-Inspired AI Learns To See Like Humans in Stunning Vision Breakthrough)





"Brain Inspired Design of LP Convolution

The brain processes visual information using a Gaussian-shaped connectivity structure that gradually spreads from the center outward, flexibly integrating a wide range of information. In contrast, traditional CNNs face issues where expanding the filter size dilutes information or reduces accuracy (d, e). To overcome these structural limitations, the research team developed Lp-Convolution, inspired by the brain’s connectivity (a–c). This design spatially distributes weights to preserve key information even over large receptive fields, effectively addressing the shortcomings of conventional CNNs. Credit: Institute for Basic Science" (ScitechDaily, Brain-Inspired AI Learns To See Like Humans in Stunning Vision Breakthrough)

And what makes the ViT technology so effective? In this description, the ViT means that the signal travels through multiple CNN networks, so the developers stack multiple CNN layers. The system can use an expanding ViT model: the optical signal travels first through a small CNN layer, then the layer size expands, and then it contracts again. That makes the layer size, or the number of processors participating in the operation, follow a shape like the Gauss curve.
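The expand-then-contract sizing described above can be sketched numerically. The function below is only an illustration of the idea; the names and numbers are assumptions, not a real ViT architecture:

```python
import math

def layer_widths(n_layers, peak=512, sigma=1.5):
    """Toy illustration of expand-then-contract layer sizing: each
    layer's width follows a bell-shaped (Gaussian) curve over the layer
    index, small at the input, widest in the middle, small again at the
    output."""
    center = (n_layers - 1) / 2.0
    return [max(1, round(peak * math.exp(-((i - center) ** 2)
                                         / (2 * sigma ** 2))))
            for i in range(n_layers)]

widths = layer_widths(7)
# widths rises to a maximum at the middle layer and falls off
# symmetrically on both sides, like a Gauss curve.
```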

The system can have two CNN layers that play ping-pong with the data. Each time one of those two layers sends information to the other, the receiving layer applies more power to the problem, and the system focuses the data toward one point. Alternatively, in the linear model, the system can use multiple layers of CNNs. That model boosts machine learning, but it requires more electricity and an enormous data mass.

The ability to use multiple neural layers to analyze information is what makes ViTs so effective. The problem is that ViT systems require a lot of space. One option is that they control robots over the internet. The other is that millions of compact robots together form the ViT network.


https://scitechdaily.com/brain-inspired-ai-learns-to-see-like-humans-in-stunning-vision-breakthrough/


https://en.wikipedia.org/wiki/Convolutional_neural_network


Tuesday, January 31, 2023

Computing is hardware and software.

 



Powerful computing requires both hardware and software. 


Computing is the combination of hardware and software. Things like powerful artificial intelligence require lots of power, but they can turn things like the internet into the most powerful tools in history. The AI can measure the speed of the internet connection and optimize the result it returns for that speed, which makes it more flexible than the regular internet. The idea is that the AI uses a protocol similar to PHP: the server runs the AI and sends the result to the client. That makes it possible to use AI over slower connections and on cheaper platforms like tablets and laptops.
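The PHP-like server-side pattern can be sketched as follows. This is only a sketch: `run_model` and the bandwidth threshold are hypothetical placeholders, not a real API; the point is that heavy inference stays on the server and the reply is sized to the client's measured connection speed:

```python
def serve_request(query, bandwidth_kbps, run_model=None):
    """Server-side sketch: the heavy model runs on the server, and the
    client receives only a result trimmed to fit its connection speed.
    `run_model` stands in for whatever inference the server performs."""
    run_model = run_model or (lambda q: f"answer to {q!r}")
    answer = run_model(query)
    # Adapt the payload to the connection: slow links get a short reply.
    max_chars = 40 if bandwidth_kbps < 256 else 4000
    return answer[:max_chars]
```

A thin client on a tablet or laptop would only send the query and display the returned text, exactly as a browser displays a PHP page.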

If we think of AI as a cloud-based solution that interconnects multiple systems, we can model a situation where a dynamic AI calls in more platforms to assist it when it cannot find a solution alone. That kind of system can use the spare CPU capacity of all platforms in the same network segment. In that model the computers share their resources, but only when another computer asks for help.


AIs like ChatGPT might be next-generation tools. AI can interconnect things like mobile phones into a dynamic, cloud-based portable computer, and that makes them powerful tools. The development of physical systems is important because hardware is what runs the software. Software alone does not make the AI; a powerful tool requires data connections and powerful computers that can handle the data.


The new AI is based on the human brain.

"Scheme of a simple neural network based on dendritic tree (left) vs. a complex artificial intelligence deep learning architecture (right). Credit: Prof. Ido Kanter, Bar-Ilan University" (ScitechDaily.com/Building a New Type of Efficient Artificial Intelligence Inspired by the Brain)



Researchers made a new type of AI based on a tree-type model. The idea is that the system builds a mind map. Regular computer-based AI uses a linear computing model, but this AI creates a mind-map-like data structure where it can interconnect the databases. The idea is that if the AI gets a keyword like "car", it searches for everything connected to the car. It finds things like metals, fuel, and many other things.

Then the AI can increase the data mass by searching for things with a connection to metals, and it finds mining, mining equipment, etc. What this kind of data system searches for depends on the parameters that the AI uses.
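The keyword expansion described above can be sketched as a breadth-first walk over a small, made-up concept map (the links below are illustrative, not real data):

```python
from collections import deque

# A hypothetical concept graph: each keyword links to connected concepts.
CONCEPTS = {
    "car": ["metals", "fuel", "tires"],
    "metals": ["mining", "mining equipment"],
    "fuel": ["oil"],
}

def expand(keyword, depth=2):
    """Breadth-first walk of the mind map: starting from a keyword such
    as "car", collect everything connected to it, then everything
    connected to those hits, out to `depth` hops."""
    found, frontier = set(), deque([(keyword, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d >= depth:
            continue
        for nxt in CONCEPTS.get(node, []):
            if nxt not in found:
                found.add(nxt)
                frontier.append((nxt, d + 1))
    return found
```

The `depth` parameter is one example of the parameters that determine how far the system searches.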


The human brain's purpose is to protect humans in any situation, and that is what makes it so powerful and flexible.


The human brain is the most powerful computer in the world. It's flexible and able to do more things than any AI. The human brain uses fuzzy logic, and that makes it a little bit slow. Another thing is that the brain cannot do things like calculations with the precise accuracy and speed of some computers. But the purpose of the human brain is not to solve mathematical problems.

The human brain's purpose is to guarantee survival in all conditions. That is what makes concentration difficult. The human brain's purpose is to observe the environment, so we cannot fully concentrate on things like calculations. Our brain sometimes wants to see the things that happen around it.

This is the reason why we should sit in front of a window. When the brain wants to check that nothing threatens us, it can do that very fast. If we sit facing the wall, our brain thinks that some predator is stalking us from behind. The brain is made to protect humans, and it still thinks that is its mission even when we sit in a safe room and try to solve some complex mathematical problem.

The human brain uses cloud-based solutions for its operations. Every neuron is an actual miniature brain that can process data alone. What makes the human brain so powerful is that it can interconnect neurons into structures called virtual neurons. Alone, one neuron is not very powerful. But together, they are the most powerful machine in the world.

Theoretically, we could make an AI as powerful as the human brain. But that requires so many databases that it has been impossible, at least until OpenAI introduced ChatGPT to the audience. A ChatGPT-based system could, in theory, create those 300 billion databases quite easily. Miniature microchips called "intelligent sand" can be used to make a neural computer that works like the human brain. But those systems are far from the power of the human brain.



https://scitechdaily.com/building-a-new-type-of-efficient-artificial-intelligence-inspired-by-the-brain/


https://webelieveinabrightfuture.blogspot.com/

Friday, October 21, 2022

Machine learning can peer into the future.




We live in an environment that is a singularity of natural and artificial actors. Things like the internet are full of information that can be virtual or real. But the internet has not removed our need to go out to work and keep our wealth at a good level. While we use things like activity sensors or GPS, the internet collects information on how we act.

Making a data matrix is an easy thing if the data collector is the AI. The system can simply watch a crossroads and observe which way people turn more often.

By using AI, it is possible to turn things like cell phones into speed-measuring tools. The system must know the distance between two landmarks like traffic lights, and it must know which object to follow. The AI can calculate the time the device takes to travel that range, and that tells the speed of the vehicle.

If there are two cameras at a certain distance from each other, the system can connect the data that those cameras send. When the car passes the first camera, the system stores its registration number in a database. When the second camera takes its picture, the time between those two pictures tells the speed of the car between those measurement points.
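The two-camera calculation is simple arithmetic: the distance between the measurement points divided by the time between the two sightings of the same registration plate. A minimal sketch:

```python
def speed_kmh(distance_m, t_first, t_second):
    """Average speed between two cameras: distance between the
    measurement points divided by the time between the two sightings,
    converted from m/s to km/h."""
    dt = t_second - t_first
    if dt <= 0:
        raise ValueError("second sighting must come after the first")
    return (distance_m / dt) * 3.6  # m/s -> km/h

# A car photographed at two cameras 500 m apart, 20 s between the
# pictures: 500 / 20 = 25 m/s = 90 km/h.
```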

The internet can connect that data into a matrix, and the system can use that matrix to predict how we will react in some cases. When somebody says that AI is the most powerful thing in the business environment, I would say that person is wrong. The most powerful tool is networked AI, which can interconnect many systems like a spider's web.

The AI can collect things like what kind of books a person borrows from the library, and then compare that data with other customers. If there are very distinct types of actors, that makes it possible for the AI to profile what kind of books, for example, typical economists read.


If the AI has access to those persons' working files, it can build a profile of what kinds of abilities people who read certain types of books have.


When we use the net, everything we do is stored in databases. The information matrix that the networked AI can use is enormous. This makes it possible for AI to create a behavioral matrix of certain people, like economists and military personnel, and then the system can create a model of how the average representative of that group acts in certain situations.

The fact is that if some person was in the military at a young age and goes on to study things like economics when older, that allows making an information matrix of how that person will react in some situations.

By using this method, where AI collects data over an entire human life, the system can make a model of what kind of solutions, for example, an economist with military service makes. The same thing can be used to hunt serial killers on the net.

And if those people run successful businesses or perform other successful actions, that makes it possible to select the most capable people as leaders of companies and military operations. The system can calculate the average success of a person with a certain background and education in every type of environment.


https://scitechdaily.com/scientists-use-machine-learning-to-peer-into-the-future/


https://fromplatoscavetoreality.blogspot.com/

Sunday, August 7, 2022

In the future, robots can call the emergency center when they need computer assistance, like new programs.




Even if a spontaneously learning computer or algorithm is hard to make, it's possible that the central computer holds a library of computer code written for different situations. Man-sized robots have limited computing capacity, so to keep databases and computer code light, the robot can "forget" unnecessary code.

That means the computer deletes the databases after they are used, and then the robot can load new algorithms into its memory. The ability to change the code in the computer makes algorithms less complicated, and robots are more flexible when they can swap out components of their program.
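The "forgetting" scheme can be sketched as a small skill cache. The `library` dictionary below stands in for the central computer's code archive; it is an assumed interface, not a real system:

```python
class SkillCache:
    """Sketch of a robot's limited memory: it keeps only a few skill
    routines loaded, fetches missing ones from the central library, and
    "forgets" (drops) the oldest-loaded skill when it runs out of room."""
    def __init__(self, library, capacity=2):
        self.library = library
        self.capacity = capacity
        self.loaded = {}  # skill name -> routine, in load order

    def run(self, name, *args):
        if name not in self.loaded:
            if len(self.loaded) >= self.capacity:
                # Forget the oldest-loaded skill to make room.
                self.loaded.pop(next(iter(self.loaded)))
            self.loaded[name] = self.library[name]
        return self.loaded[name](*args)
```

A real robot would fetch the routine over a network link instead of a dictionary lookup, but the memory discipline is the same.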

There is an image where the robot is using a computer. The technology where a programmer can program robots by sending program code from a computer screen already exists. The system uses a camera, the robot's eyes, to read programming code into the computer's memory. When the artificial intelligence detects programming code, it resends it to a programming tool.

The new fast-learning algorithms are fascinating tools. An algorithm similar to those used to translate text from digital images can be used for programming robots and computers. There is the possibility that the system reads computer program code into a programming tool the same way texts are read into a translation program. So maybe in the future a robot can sit next to a human worker and read the code lines from the computer screen.

Human-sized, human-shaped robots would be more flexible tools than anything before. Human-looking robots can operate using the same tools as humans, and a human-looking robot can turn any aircraft or tank in the world into a drone. But the problem is that those systems still have limited computing capacity. Complicated algorithms make it possible for the system to do many things. But what if the robot could ask for help by using regular mobile telephones or by interacting with security cameras?

In the latter case, the robot can send a message to a surveillance camera, and the system can then send new instructions to it from an information screen. In the former case, the robot can simply use a cell phone to ask for new computer code in the form of text messages. In fact, new programming code could be sent to the robot even in the form of a paper letter if the system uses camera-based reading. A similar system already makes it possible for a translation application to read texts from digital images.

And computers can send programming code to programming tools as well. It is also possible to create a translation program that translates normally written orders into programming languages. The system can also use speech through a speech-to-text application. When the operator orders the robot to walk, that message is translated into a regular programming language.
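The order-to-code translation step can be sketched with a small phrase table. Both the phrases and the generated code lines below are purely hypothetical; a real system would need a far richer grammar:

```python
# A hypothetical phrase-to-code table for the translation step: spoken
# orders (already converted to text by speech-to-text) map to lines of
# robot program code.
PHRASEBOOK = {
    "walk": "robot.drive(speed=1.0)",
    "stop": "robot.drive(speed=0.0)",
    "turn left": "robot.turn(degrees=-90)",
}

def order_to_code(spoken_text):
    """Translate an operator's plain-language order into a line of
    (hypothetical) robot program code."""
    key = spoken_text.strip().lower()
    if key not in PHRASEBOOK:
        raise KeyError(f"no translation for order: {spoken_text!r}")
    return PHRASEBOOK[key]
```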

These kinds of abilities make robots more powerful and flexible. The ability to send code lines to a robot in the form of text messages makes it possible for robots to operate almost without trouble in areas where internet connections are bad.

There is the possibility that the robots of tomorrow can use mobile applications similar to a normal person's. If a robot gets into trouble, it can send a text message to the central computer, which sends the program code back to it in a text message. This kind of system makes it possible for robots to interact with controlling systems in many ways. When robots are in trouble, they can simply send a message about the situation as laser beams to surveillance cameras.

And then the local information screen can flash the needed program code to the robot. Those flashes can last less than half a second. The robot then resends the code to the system to confirm that it received the right thing. Maybe that emergency center for robots will be real in the world of the future.

Making this kind of system effective requires, of course, that the central computer knows the mission. So when the robot's master gives orders about what the robot should do, that person just makes a phone call to the computer center and says what the robot must do.


Image: https://www.dotmagazine.online/issues/ai-intelligence-in-the-digital-age/ai-changing-the-game-for-good/ai-for-network-infrastructure


Tuesday, January 11, 2022

Do you trust AI?

   



At the beginning of this text, I must say that AI is a computer program. Computer programs are like machines: they cannot handle every problem on Earth. They are meant to be used for a certain purpose, and if we use some AI algorithm outside its operational sector, it causes a catastrophe. The world is full of algorithms.

Some algorithms are meant for things like collecting marketing information from limited systems. Other AI systems are meant to control physical robots. So if we want to use AI for something, we must make sure that the program is meant for that purpose.

We must realize that using marketing-analysis programs to control robots would cause disaster. If we want to improve the skills of an AI, that requires more complicated code than an AI with only one skill. Every single skill that the AI has must be programmed into it. Machine learning makes independently learning machines possible.


There are three types of learning machines. 


1) Semi-automatic learning systems. 


Whenever the system faces a new problem, it calls the operator. The operator makes the solution and stores it in the computer's memory.


2) Independently learning machines.


Those machines can create the databases automatically, and then they can automatically connect a database to a certain action series.


3) Hybrid systems.


Those systems can make solutions or connections between databases automatically. But if the system cannot find a database that fits the problem, it can ask the operators for assistance. That kind of system can respond to multiple problems.

Hybrid systems are close to the human way of learning things. If the system cannot find a match for a case, it does not know how to respond to the case it faces. In that situation, the system asks the human operators for help in solving the problem.

Whenever the system gets a new answer to a problem, that increases the data mass, so the system can create more connections, and it becomes more independent.

When the system creates the solution, or the controller solves a problem, that solution is stored in the artificial intelligence's memory for similar cases. That increases the number of skills the AI has.
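The hybrid scheme above can be sketched in a few lines; `ask_operator` is a placeholder for the human in the loop:

```python
class HybridLearner:
    """Sketch of a hybrid learning system: answer from the stored
    database when a match exists, otherwise ask a human operator and
    store the answer, so the system grows more independent over time."""
    def __init__(self, ask_operator):
        self.known = {}          # problem -> stored solution
        self.ask_operator = ask_operator
        self.operator_calls = 0  # how often the human was needed

    def solve(self, problem):
        if problem not in self.known:
            self.operator_calls += 1
            self.known[problem] = self.ask_operator(problem)
        return self.known[problem]
```

After the first encounter with a problem, the stored solution is reused and the operator is never asked again, which is exactly the growing independence described above.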


There are two types of AI


1) Passive AI. 


That system just collects data and analyzes it. 


2) Active AI.


That system interacts with the real world. It collects data from sensors, analyzes that data, and then sends signals to a communication tool. That tool might be the traffic lights, if the AI controls traffic.

The thing is that AI is not a stand-alone tool. If the AI interacts with the real world, the system requires tools like an internet connection or a physical robot to do things.

It requires sensors, and it must be connected to those sensors so it can get the data mass that it processes. But the AI also needs a tool to act on the real world. If it controls things like traffic, it needs a connection to the traffic lights. Without that connection, AI does not affect the real world.
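The sense-analyze-act loop of an active AI can be sketched with the traffic-light example; the road names and car counts are made up:

```python
def traffic_controller(car_counts):
    """One pass of the sense -> analyze -> act loop of an "active" AI.
    Sense: car_counts maps a road name to the number of cars its sensor
    detects. Analyze: the busiest approach should be served first.
    Act: the returned command would be sent to the traffic lights."""
    busiest = max(car_counts, key=car_counts.get)
    return {road: ("green" if road == busiest else "red")
            for road in car_counts}
```

Without the final connection to real lights, the returned dictionary changes nothing in the world, which is the point made above.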


All artificial intelligence programs or algorithms are meant to operate in certain sectors. 


The thing about artificial intelligence is that it doesn't make mistakes. If we say that an AI makes mistakes, we can just as well say that regular programs, like text-handling tools, make mistakes. Every mistake that the AI makes is encoded in its code.

Another cause of mistakes for the AI is that the data the AI handles is somehow disturbed. If a sensor that sends data to the AI is corrupted, accidentally or on purpose, the data flowing into the system is not relevant.

The corruption of a sensor means, for example, that a camera might be dirty, so the system would not get real information. When we think about trust in AI, we must realize that we must check every single part of the system. The code itself must be complete and tested, but the cable connections and the function of the sensors are just as important.


https://scitechdaily.com/measuring-trust-in-artificial-intelligence-ai/


Image: https://scitechdaily.com/measuring-trust-in-artificial-intelligence-ai/


https://thoughtsaboutsuperpositions.blogspot.com/

Wednesday, January 5, 2022

And then the dawn of machine learning.


Image: Pinterest


Machine learning, or autonomously learning machines, is the newest and most effective version of artificial intelligence. Machine learning means that the machine can autonomously increase its data mass, sort the data, and make connections between databases. That ability makes machine learning somewhat unpredictable, and it makes robots multi-use systems that can do the same things as humans.

A reflex robot is a very fast-reacting machine. Its limited operational field guarantees that a very large number of databases is not needed, which means the system rarely has to search for the right database. That makes it very fast. But if it goes outside its field, it is helpless.

When we think of robots that can do only one thing, like playing tennis, they can react very fast in every situation connected with tennis. There is a limited number of databases, and that means the robot acts very fast.

When a robot or AI makes a decision, it systematically searches every single database, and if there are details matching the observed action, that activates the database or command series stored there. But what makes this type of computer program complicated is that as the number of stored actions increases, the system slows down.

If we want to make a robot that can perform multiple actions, that requires multiple databases, and searching every database for a match to the situation takes a certain time. So complicated actions require complicated database structures. Compiling complex databases takes time because every computer has limits. In the case of a street-operating robot, the system compiles the data that its sensors transmit to its computers.

So the conditions this kind of system must handle might involve unexpected variables like fog or rain. For those cases, the system needs fuzzy logic to solve problems. In that case, only the frames of the cases are stored in the databases by the system creators, and the system combines those frames with the data sent from the sensors.


The waiter robot can be used as an example of machine learning.


A good example of a learning machine is a waiter robot that learns the customers' wishes. The robot stores the customer's face in its memory when it asks whether the customer wants coffee or tea. Then the robot asks "anything else?", and at that point it can introduce the menu.

And then the customer can make an order. Certain parameters of the algorithm are stored in the waiter robot's memory, and the robot of course stores the order data in the database. The reason is simple: the crew needs that information so they can make the right things for the customer. But that data can also be used to calculate how many items the average customer orders after the question "anything else?".

The robot can also store the face in the database so it can calculate how often that person visits the cafeteria. The robot can simply store the orders under the customer's face, and it learns how often a person orders something. If some customer always orders certain products, the robot can send a pre-order to the kitchen so they can prepare that type of order. When a customer visits often and orders the same thing every time, the robot can start to ask, "Do you want the same as usual?" For that, the system requires a parameter for how many times in a certain period counts as "often". That was an example of a learning system.
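The "same as usual" logic can be sketched like this; the `often` threshold is the parameter mentioned above, and the customer IDs and menu items are made up:

```python
from collections import Counter

class WaiterRobot:
    """Sketch of the waiter-robot example: orders are stored under the
    customer's (face) ID, and once the same item has been ordered
    `often` times, the robot starts suggesting "the same as usual?"."""
    def __init__(self, often=3):
        self.often = often
        self.orders = {}  # customer id -> Counter of ordered items

    def take_order(self, customer, item):
        self.orders.setdefault(customer, Counter())[item] += 1

    def greeting(self, customer):
        history = self.orders.get(customer, Counter())
        if history and history.most_common(1)[0][1] >= self.often:
            usual = history.most_common(1)[0][0]
            return f"The same as usual? ({usual})"
        return "Coffee or tea?"
```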



Wednesday, December 15, 2021

Machine learning needs stimulus for making solutions.

 





"A barren plateau is a trainability problem that occurs in machine learning optimization algorithms when the problem-solving space turns flat as the algorithm is run. Researchers at Los Alamos National Laboratory have developed theorems to prove that any given algorithm will avoid a barren plateau as it scales up to run on a quantum computer." (https://www.lanl.gov/discover/news-release-archive/2021/March/0319-barren-plateaus.php)

Have you heard of the "barren plateaus" problem? The name of that problem comes from the J.R.R. Tolkien books, in which the fictive Middle-earth contains a very dry and hot place. The image above this text introduces the "barren plateaus" problem very well.

In the following example, food and water stand for information. If a creature lives on the "fresh plateaus", it can take nutrients from nature. There is a bigger chance to make mistakes, but the nutrients are versatile, and finding and testing new things keeps the creature active. When a creature searches for things in nature, there are many possibilities to test which types of vegetables or other nutrient sources it can use. Of course, that sometimes requires climbing the mountains.

In that image, the problem is the mountain. If the creature lives in the landscape of the upper image, it has stimulus. The green landscape offers motivation, the grass is the food, and the creature wants to go to the mountain. Grass and water are everywhere, and the creature has the motivation to climb the mountain, perhaps wanting to see farther places or to get fresh air.

The lower image introduces the situation where the creature lives on a "barren plateau". The water and food are in a pocket or bag, and of course the creature never makes mistakes if it wants a certain sandwich. The creature knows in which pocket it can find the sausage sandwich and where the drinking bottle is. But sooner or later the nutrition turns monotonous. In the barren plateaus problem, the creature gets pre-made food.

If we transfer that model to information technology, the creature gets pre-made solutions that fit certain situations, and that makes this kind of model very limited. The situation is as if the creature lives in the desert, on the "barren plateaus". A supporter brings water and a sandwich to a certain point at a certain time. The food is guaranteed, but it's always the same.

And what the creature gets depends on the supporter. If the supporter wants to give a sausage sandwich, that is the food. If someday the supporter wants to give a cheese sandwich, the creature gets a cheese sandwich.

When everything is pre-made, the creature doesn't want to try to find food itself. It is difficult to make mistakes if some other person makes the food. The same applies in data science: if all problems are pre-solved, it's very hard to make wrong solutions.

The term "flattened landscape" means that when the creature lives on "barren plateaus", the limited information sources make the problems look harder to solve. Because the creature is always at the certain point where the supporter brings the sandwich and water, it does not even try to climb mountains or solve the problem.


"A barren plateau is a trainability problem that occurs in machine learning optimization algorithms when the problem-solving space turns flat as the algorithm is running".

"In that situation, the algorithm can’t find the downward slope in what appears to be a featureless landscape and there’s no clear path to the energy minimum. Lacking landscape features, machine learning can’t train itself to find the solution". (Los Alamos National Laboratory, Solving ‘barren plateaus’ is the key to quantum machine learning)
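The quoted definition can be illustrated numerically: on a nearly flat landscape, the slope the optimizer follows is vanishingly small, so a gradient-based learner gets no usable signal about where to move. A minimal sketch with ordinary one-dimensional gradients (not a quantum circuit):

```python
def numeric_gradient(f, x, h=1e-6):
    """Central-difference estimate of df/dx: the slope the optimizer
    would follow downhill."""
    return (f(x + h) - f(x - h)) / (2 * h)

# A landscape with a clear downward slope toward its minimum ...
sloped = lambda x: x ** 2
# ... and one that is almost featureless (a "barren plateau"):
flat = lambda x: 1e-9 * x ** 2

# On the flat landscape the gradient is vanishingly small: there is no
# visible path toward the minimum, so training stalls.
```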


https://www.lanl.gov/discover/news-release-archive/2021/March/0319-barren-plateaus.php


Image:https://www.lanl.gov/discover/news-release-archive/2021/March/0319-barren-plateaus.php


https://interestandinnovation.blogspot.com/

Wednesday, November 3, 2021

Mixed reality is a powerful tool for many things.



The idea of mixed reality is that this kind of system can combine the real world with VR (Virtual Reality). The simplest idea is to make a robot that sends what it senses to the controller. The robot's eyes would be camera systems connected remotely to VR glasses.

The robot's ears are microphones that send data to the operator's headset. The touch sense is a pressure detector that sends the feel of touch to data gloves. Then the operator can interact with the physical robot.

And that allows safely doing many things that are very risky for people, if we think about cases like nuclear accidents and, sadly, battlefields. Those robots can use many types of communication systems. If the robot is inside a powerful electromagnetic field that blocks radio communication, it can switch to laser LEDs for sharing data.

There are many types of interfaces that allow robots to operate very effectively. Some systems are interactive all the time and require non-stop guiding. But there are also learning systems that make it possible to teach things to robots. In that process, the operators can use virtual characters, which means the system would look like a computer game.

Those kinds of systems record the actions that the operators perform. When the operator has made an action, the system asks whether the operator is satisfied. If the person is satisfied, the system stores those movements and other actions in its database.

The things stored in the computer's database determine the skills of the machine. Speech-to-text applications that can dump text into the database or interface make it possible to give spoken orders to the robots. The database can be built using computer-game-type applications, and the actions stored in it through the virtual character can then be downloaded to the robot. 

So the operator can create the necessary action series in the database and then give that action a name. That word could be "rescue" or "attack". The operator can then give orders to the robot simply by using the name of that database entry. If something is missing from the action series, or there is no match in the database for a certain situation, the operator can switch to direct control with data gloves and a joystick. 
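The record, confirm, store, and replay workflow described above can be sketched roughly like this. It is a minimal hypothetical database; the macro names "rescue" and "attack" come from the text, everything else is assumed:

```python
# Hypothetical action database: the operator records an action series,
# confirms it, stores it under a name, and later triggers it by name.
# Names with no match fall back to direct manual control.
class ActionDatabase:
    def __init__(self):
        self.macros = {}

    def record(self, name, actions, operator_satisfied):
        # Store only if the operator confirmed the recorded actions.
        if operator_satisfied:
            self.macros[name] = list(actions)

    def command(self, name):
        if name in self.macros:
            return self.macros[name]
        return ["switch_to_manual_control"]  # data gloves + joystick

db = ActionDatabase()
db.record("rescue", ["open_door", "lift_person", "carry_out"], True)
print(db.command("rescue"))
print(db.command("attack"))  # no match -> manual control
```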

https://scitechdaily.com/upgrading-the-space-stations-cold-atom-physics-laboratory-with-mixed-reality/


https://visionsofbrightfuture.blogspot.com/

Wednesday, October 27, 2021

New artificial intelligence learns by using the "cause and effect" methodology.



Image I


The cause and effect methodology means that the AI tests the models stored in its memory simultaneously. When some model fits the case the AI must solve, the AI applies that model to other similar cases. In that way, the AI finds a suitable solution to the problems it must solve and selects the way to act that suits the situation best. The most beneficial choice means that the system uses minimum force to reach the goal. 

For an AI-controlled robot that must open a door, the "cause and effect method" might work like this: first the robot searches for marks that help determine which way the door opens. Then the robot turns the handle and pulls the door. Next it tries the same thing but pushes the door instead. If neither works, the robot can conclude that the door is locked and look for another way to get in. 

But if the robot must get in, it might have a circular programming architecture. If the robot cannot open the door using the methods found in the first circle, it steps to the next level and uses more force: it might kick the door in or break it some other way. The idea is that the robot always uses minimum force. The problem is how to determine what the robot is allowed to do in the cases where it faces the door. 
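The escalating "circles" of force described above could be sketched as ordered lists of methods, tried gentlest first. All method names here are hypothetical:

```python
# Each circle lists methods ordered by the force they require; the
# robot escalates to the next circle only when the previous one fails.
FORCE_CIRCLES = [
    ["turn_handle_and_pull", "turn_handle_and_push"],  # circle 1: gentle
    ["pry_door", "kick_door_in"],                      # circle 2: force
]

def open_door(try_method):
    """try_method(name) returns True if the door opened; the first
    successful method is returned, None if every circle fails."""
    for circle in FORCE_CIRCLES:
        for method in circle:
            if try_method(method):
                return method
    return None  # door stays shut; find another way in

# Simulated door that only yields to pushing the handle.
opened_by = open_door(lambda m: m == "turn_handle_and_push")
print(opened_by)
```

The hard policy question the post raises, which circle the robot is permitted to enter at all, would live outside this loop as a cap on how far the escalation may go.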

There are cases where the cause and effect methodology is not suitable. A robot operating alone on ice cannot test the strength of the ice. But if a robot group operates under the control of the same AI, which commands it as a single entity, the system can use the cause and effect methodology. 

There is also the possibility that the artificial intelligence is located in a computer center and operates radio-controlled cars by remote control. The moving robots are then dummies that work under the control of the central computer. A robot system like this might someday be sent to another planet. 




Image II: 


The model for large robot groups is taken from ants. The ants are the moving robots, and the anthill is the central computer of the entirety. 

The cause and effect methodology would be suitable for groups of simple robots operating under the same AI. Those cheap and simple moving robots are easy to replace if they are damaged. The AI that operates these sub-robots can sit in a computer center and control them using regular remote-control data links. 

The supercomputer that drives the AI would be in a separate capsule or on an orbital trajectory, while the simple robot cars operate on the ground. The system might have two stages: at the first stage, the main computer orbiting the planet sends instructions to ground-based computers in the landing capsules, and those capsules then control the robot cars and quadcopters. Keeping the moving robots as simple as possible makes it easy to replace destroyed individuals in the group. 


The AI sends the robots over the icy terrain at the same time, and each robot reports its condition continuously. If the ice breaks under a robot, it can send data to its mates about the strength of the ice. The robots transmit their locations all the time. 

The system knows the last position of that robot, and the strength of the ice can be measured from the robot's last images. The system marks the place where the ice collapsed, and the next robot knows to avoid it. This means the cause and effect methodology is suitable for large groups of robots in which the individual robots are not very complicated. 
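The shared hazard map that lets the next robot avoid the collapsed spot can be sketched as follows, as a hypothetical minimal version using grid cells for positions:

```python
# Hypothetical shared hazard map: when the ice collapses under one
# robot, its last reported position is marked, and every other robot
# in the group routes around it.
hazard_map = set()

def report_collapse(position):
    """Called with the lost robot's last known grid cell."""
    hazard_map.add(position)

def is_safe(position):
    return position not in hazard_map

report_collapse((12, 7))  # robot lost at grid cell (12, 7)
print(is_safe((12, 7)))   # False: the next robot avoids this cell
print(is_safe((12, 8)))   # True: the neighboring cell is still open
```

One lost robot thus improves the map for the whole group, which is the cause-and-effect payoff the post describes.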

Artificial intelligence can operate remote-controlled robots, which means the robots that form the group are simple. They might be more like remote-control cars than complicated robots. The central computer that operates the entirety is the intelligent part. The reason those robots carry only the necessary sensors is that this makes them easy to replace, and robot factories might even build them in the operational area. 


https://scitechdaily.com/ai-that-can-learn-cause-and-effect-these-neural-networks-know-what-theyre-doing/

Image I: https://scitechdaily.com/ai-that-can-learn-cause-and-effect-these-neural-networks-know-what-theyre-doing/

Image II: https://upload.wikimedia.org/wikipedia/commons/thumb/1/1d/AntsStitchingLeave.jpg/800px-AntsStitchingLeave.jpg


https://visionsofbrightfuture.blogspot.com/

What was before the Big Bang (Part II)

 What was before the Big Bang. (Part II) "Our universe could be the mirror image of an antimatter universe extending backwards in time....