Showing posts with label neural network. Show all posts

Tuesday, September 5, 2023

The neural network is the mind of the tiger.


The difference between traditional computers and neural networks is the role of the CPU. In a traditional computer, the CPU (Central Processing Unit) does all the work, and that limits computing power. In a neural network, the CPU only preprocesses the data that the sensors send to it: it recognizes the situation and then forwards the data to a sub-processing unit. The receiving subsystem is responsible for that kind of case, which means the subsystem has databases that store reactions for certain types of events.

In neural networks, all sub-networks work on sorted data, and they are meant to respond to the things that the sensors send to the neural network. Each sub-network has a certain operational or response area, and the CPU acts like a kind of router: the system finds certain details in the data and then routes the information to the right sub-network. In real life, a neural network can consist of multiple sub-networks that act as AI-based event handlers.
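The "CPU as router" idea above can be sketched in a few lines. This is a minimal illustration, not the author's implementation; the subnetwork names and event types are hypothetical examples.

```python
# Minimal sketch of a CPU that only recognizes the situation and routes
# the sensor data to the sub-network responsible for that kind of case.

def make_router(subnetworks):
    """Return a routing function that dispatches a sensor event
    to the sub-network registered for that event type."""
    def route(event):
        handler = subnetworks.get(event["type"])
        if handler is None:
            return "unknown event"
        return handler(event["data"])
    return route

# Each sub-network is just a handler with one response area.
subnetworks = {
    "vision": lambda data: f"vision subnetwork processed {data}",
    "audio": lambda data: f"audio subnetwork processed {data}",
}

route = make_router(subnetworks)
print(route({"type": "vision", "data": "camera frame"}))
```

The central unit never processes the camera frame itself; it only looks up which subsystem has responsibility for it.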

A complete neural network requires that every sub-network also has a physical system that runs it. In that case, the system has multiple workstations that run the AI-based subsystems, which reduces the load on the top-level CPU. If the system uses multiple independently operating workstations, it can handle multiple problems at the same time. Error detection becomes possible when the neural network runs two or more independently operating data-handling units on the same problem at the same time.






In an ideal situation, the CPU of the neural network works as a router that sends information to independently operating subsystems. Each independent subsystem has its own physical computer and its own CPU. The common CPU of the networked workstations sends a message to a certain subunit; the AI-based subunit then performs the mission, and the system sends the result back to the common CPU.

The shape of the neural network is free. The neural network can be a drone swarm, or it can be a large group of workstations in a warehouse. Neural networks can collect information from multiple sources, and the power of a neural system is that it shares data between multiple computers. The neural network does not necessarily learn anything, but it can collect data from very large areas.

A learning neural network follows certain parameters while doing its duty. One such parameter can be which drone has operated the longest time in a certain area. The system can store information about the routes and other things that the drones use, and then copy the longest-standing drone's operational profile to the other drones.

And of course the system must have multiple parameters, like when the drones were seen and where the first impact came. The system interconnects the data from all members of the drone swarm and then tries to create an ideal flight path to the target. Here we must remember that the same drone that delivers pizza can also deliver hand grenades. The neural network can interconnect multiple systems like surveillance cameras, satellites, and drone swarms.
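The "copy the longest-standing drone's profile" rule above can be sketched as follows. All drone IDs, operating hours, and profile fields here are made-up illustrations, not real swarm data.

```python
# Sketch: pick the drone with the longest operating time in an area and
# copy its operational profile (routes, altitude, etc.) to the others.

drones = [
    {"id": "d1", "hours": 12.5, "profile": {"route": "A", "altitude": 80}},
    {"id": "d2", "hours": 31.0, "profile": {"route": "B", "altitude": 60}},
    {"id": "d3", "hours": 7.2,  "profile": {"route": "C", "altitude": 90}},
]

# The longest-standing drone becomes the template.
best = max(drones, key=lambda d: d["hours"])

# Multiply its operational profile to every other drone.
for d in drones:
    if d is not best:
        d["profile"] = dict(best["profile"])

print(best["id"])  # the drone whose profile the swarm adopts
```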

Tuesday, August 29, 2023

Integrated AI is the tool of the next generation.



Do you know what a kernel is? The kernel is the boundary program that connects hardware to software in a computer. The kernel is the code that microchips and other components use when they interact with each other.

Integrated AI means that the AI is integrated into microchips or their control program. That means the kernel turns into AI, and it could mean that a ChatGPT-type program is programmed straight into the microchips.


That allows the computer to program other computers and robots. And that ability makes this kind of combination extremely powerful.


ChatGPT is a chatbot, which means it does not control robots itself. But the system can create program code that controls robots while they operate. This ability makes robots flexible and adaptive tools, because the AI can modify their control code at any time, so the robot can adapt to its environment.

A flexible neural network is like a brain. The thing that separates human brains from animal brains is the number of different skills. Computers act like human brains: they store the data they need in records or databases. The system can make it easier to find the necessary data records by sorting them under certain topics.

A neural-network-based computer architecture works in a similar way to a single computer that stores data on a hard disk. In a neural-network-based architecture, the system uses multiple computers and hard drives to store information. In the ideal case, every database has its own physical device. That makes the system effective: it can share the responses between multiple CPUs in multi-level coordination.


There are no limits to the neural network's size.


Multi-level coordination is important for physical systems like robots. 


When the system switches on, every database tells the CPU that handles its operations where the CPU can find it. That data is stored in routers. There might be multiple CPUs, and each of them is responsible for one skill area that the robot has. There could be a CPU with certain database connections reserved for emergencies. There is a hierarchy in the CPU network whose purpose is, for example, to save the robot if it slips on the floor.

Artificial reflexes require an integrated, microchip-based database that helps the robot react to things like slipping on the floor.


So the "skills of the left hand" are stored under one topic. As in human brains, where information is stored in cells, in computer-based neural networks the computer stores all information in databases that serve the same purpose as memory cells. When the system requires some skill, it must first recognize the situation. The computer searches the databases for information that matches what the sensors are telling it, and then it can give responses to the actions that happen around the computer.

A multi-layer system is the ultimate tool for that kind of system. In a multi-layer system, the CPU that controls the senses is different from the CPU that controls the movements. The interconnected database network might contain millions of databases in sub-networks. In an ideal situation, each database has its own physical processor that interconnects its records with each other.

In a multi-layer system, there must be some kind of artificial reflexes. When a robot slips on the floor, there must be data that the robot can use immediately when it sees that its acceleration sensors are out of balance. The data that makes the robot put out a hand to stop the fall is easy to store straight in the microchips.
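The artificial reflex above can be sketched as a hard-coded check that bypasses any database search. The tilt threshold and the bracing action are hypothetical values chosen for illustration.

```python
# Sketch of an "artificial reflex": an immediate, hard-wired reaction that
# fires when the acceleration sensors report the robot is out of balance,
# before any slower database lookup happens.

TILT_LIMIT = 30.0  # degrees; beyond this the robot counts as off balance

def reflex(tilt_angle):
    """Return an immediate action if the tilt sensor is out of balance,
    or None when normal (slower) processing can continue."""
    if abs(tilt_angle) > TILT_LIMIT:
        return "extend hand to brace fall"
    return None

print(reflex(45.0))  # reflex fires
print(reflex(5.0))   # no reflex needed
```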

When the system requires some information or code, it must find the right database. If the computer must search all databases one by one, that takes time, so there must be a better method.

In the active model, each database reports to the CPU (Central Processing Unit) that it is ready. During that process the database can tell: "I'm database 3, I control left-hand movements, and the CPU can find me behind connection number 3".

That makes it possible for routers to route data between the CPU and the left hand's database. In network-based systems, there might be thousands of databases, and each database contains thousands of records. It is possible that every database has its own central processing unit that can search for the right records, and then those databases can be interconnected.
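The start-up registration described above resembles a simple service registry. This sketch assumes nothing beyond the text: each database announces its identity, skill area, and connection number, so the CPU can later look the skill up instead of searching every database one by one.

```python
# Sketch of the "active model": databases announce themselves at start-up,
# and the routing table lets the CPU find them without a linear search.

registry = {}  # skill area -> {database id, connection number}

def register(db_id, skill, connection):
    """A database reports: 'I'm database db_id, I control this skill,
    find me behind this connection number.'"""
    registry[skill] = {"database": db_id, "connection": connection}

register(3, "left-hand movements", 3)
register(4, "right-hand movements", 4)

# Later, the CPU routes a request directly instead of scanning databases.
print(registry["left-hand movements"]["connection"])  # 3
```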


Monday, February 14, 2022

Network-based solutions can help AI research.



When we think of the AI as a tool that analyzes things, there should be something that confirms the solution. The confirmation of results can be made by using multiple data-handling tools. Those data-processing tools can form a network of multiple independently operating units that work on the same problem. When the data-processing units reach their solutions, they send them to the central processing unit.

If there are no errors or differences, the solution should have no mistakes. But there is the possibility that some data-processing units get a different solution than the main group. In that case, the system should inform the user that there is some kind of anomaly in the solutions, and that information must reach the user.


The model of modern AI is that there are two layers of AI. 


1) The AI-based operating system can independently decide to re-run the code of the linear AI-based software. The AI-based operating system can choose the algorithm that is in use in each case.

If the AI must pick up things like stones, there is an error level that makes the operation acceptable. The AI solution says that 12 stones must be picked up and put in a bag, and then the operating system makes the software run 12 times to collect enough stones.

2) The AI-based solution controls things like robot hands or data collection. The AI-based software is the solution that runs on the AI-based operating system. The solution can see if the tool that it uses is not suitable. That means the system might first choose the shovel.

But if the stones are too heavy, the software might ask to use another tool, like a forked-stick hand. In that case, the operating system just turns responsibility for the action over to another hand, or it can call another robot to the place.

The learning system measures the weight of the stone and then makes a query about which machine has tools powerful enough for picking up those stones. The controlling AI can check whether the system can do the job itself, and if there are no strong enough robots, it can ask permission to call for assistance from other companies. The system can tell that "there are about 100 kg stones that must be put in the Kevlar bag", and then the system can ask for a machine that can do that.
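The two-layer idea in the stone example can be sketched as follows: the "operating system" layer re-runs the pick-up routine until 12 stones are collected, and escalates when a stone exceeds the current tool's capacity. The 50 kg tool limit and the stone weights are invented for illustration.

```python
# Sketch of the two AI layers: the software picks one stone per run,
# and the operating-system layer repeats the run until enough stones
# are collected, handing over too-heavy stones to another machine.

def pick_stone(weight_kg, tool_limit_kg):
    """One run of the AI-based software: succeeds if the tool can lift it."""
    return weight_kg <= tool_limit_kg

def collect_stones(stones, tool_limit_kg, needed=12):
    """Operating-system layer: re-run the software until `needed` stones
    are in the bag, escalating stones the current tool cannot handle."""
    collected = 0
    for weight in stones:
        if collected >= needed:
            break
        if pick_stone(weight, tool_limit_kg):
            collected += 1
        else:
            print(f"{weight} kg stone: requesting assistance")
    return collected

stones = [5, 8, 120, 6, 7, 9, 4, 3, 10, 6, 5, 8, 7]
print(collect_stones(stones, tool_limit_kg=50))  # 12
```

The 120 kg stone is skipped and reported, matching the text's "ask permission to call assistance" step, while the routine keeps running until the quota of 12 is reached.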


The operators must determine the error level. In this case, a certain share of the processing units must reach a certain value.


An example might be that 2/3 of the processing units reach one solution, but because 1/3 has a different solution, there is a possibility of a mistake. The user can still use the solution, but the system can re-run the operation. The point is that neural networks allow us to control and observe the operations of the AI from outside it, and that can be used to create more effective and powerful code.
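The 2/3 voting rule above is a classic majority-vote check. This sketch assumes only what the text says: several independent units solve the same problem, the central unit checks whether the agreement fraction reaches the operator-defined error level, and disagreement is reported as an anomaly.

```python
# Sketch of error detection by redundancy: independent processing units
# send their solutions to the central unit, which checks whether enough
# of them agree (the operator-defined threshold, here 2/3).

from collections import Counter

def vote(solutions, threshold=2/3):
    """Return (majority answer, anomaly message or None)."""
    counts = Counter(solutions)
    answer, n = counts.most_common(1)[0]
    if n / len(solutions) >= threshold:
        return answer, None
    return answer, "anomaly: units disagree, operation should be re-run"

print(vote([42, 42, 42]))  # full agreement, no anomaly
print(vote([42, 42, 41]))  # 2/3 agree, still acceptable
print(vote([42, 41, 40]))  # no majority: anomaly reported to the user
```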

The outside system observes the functions of the main system, and that information can contain data like: are there problems with memory handling, or are there unnecessary loops? That makes it possible to create more effective code. The point is that the system that drives the program code of the AI might also use AI itself. That means the system can independently re-run part of the code if an operation is not successful, and that makes those systems extremely powerful.


Fuzzy logic makes the AI effective. 


When we think about the case where the AI should recognize things like lorries, there are two ways to do it. The AI can use precise logic, which makes it slow: precise logic means that the AI has images of every lorry on the market, the system takes an image of every car, and then the AI compares the images it takes with the images stored in the database.

That takes time. Another way is to take images using a CCD camera, or the CCD camera's electro-optical element. When the system takes an image, it can compare it with reference images pixel by pixel. In that case, there is a reference image of a typical representative of every type of vehicle that the system can face on the road.

The system compares the image with the reference image. When a certain number of pixels match, the system recognizes that the vehicle is a lorry. If the system needs deeper analysis, it can send the image forward along the data line to confirm the vehicle's type and owner. If we want to make a car that drives automatically, we must also remember that a bus pulling out from a bus stop has the right of way.

In the same way, emergency vehicles must go first when they are on an emergency drive. The AI can have images of these kinds of vehicles that need special attention and actions, so when the CCD camera makes a match with them, the system can slow the vehicle down. Fuzzy logic means that there is a parameter like "80% of the data must match" for some case, and that allows the AI to act.

If the system uses precise logic, even a different text or color on the object can make the system react the wrong way. Fuzzy logic means that the system cannot always get exactly the same data from nature as from a controlled environment. The cars that the system counts can be dirty, or they can have something like ski boxes on the roof. The system must separate and sort them by using the images that the CCD camera takes from traffic. That gives information about what types of vehicles are traveling on the road at each time.
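The 80% matching rule can be sketched directly. Here images are simplified to short lists of pixel values; real systems would compare large pixel arrays, but the threshold logic is the same.

```python
# Sketch of fuzzy matching: compare a camera image with a reference image
# pixel by pixel and accept the match when at least 80 % of pixels agree,
# so dirt or a ski box does not break the recognition.

def match_fraction(image, reference):
    """Fraction of pixels that agree between the two images."""
    hits = sum(1 for a, b in zip(image, reference) if a == b)
    return hits / len(reference)

def is_lorry(image, reference, threshold=0.80):
    return match_fraction(image, reference) >= threshold

reference   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # typical lorry silhouette
dirty_lorry = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]  # one pixel differs (dirt)

print(is_lorry(dirty_lorry, reference))  # True: 90 % of pixels match
```

With precise logic (threshold 1.0), the single differing pixel would already reject the match; fuzzy logic tolerates it.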


Image: https://scitechdaily.com/images/Artistic-Artificial-Intelligence-Concept-1536x1152.jpg


https://thoughtsaboutsuperpositions.blogspot.com/

Saturday, February 12, 2022

Large neural networks have a larger number of abilities.



Above this text is a model of the neural network system. Every point in that diagram is an ability or skill that the network has, and every line in the image is a connection between those skills. So every point in the image is a database, and every line is a database connection. Every single database can have a limited number of connections. Everything that the robot must do is stored in databases, and each series of actions is connected to a certain table.

The neural network can be physical. It can be the network of physical systems like computers and surveillance cameras. 

Or it can be virtual. The neural network can be a network of skills, and whenever the system learns a new skill, the network expands. A network of skills means that every single action requires sub-actions. Sub-actions like turning a steering wheel can be used in many places: in the same way, robots can turn cars, tractors, and forklifts, so the robot can use the same skill to turn every vehicle that has a steering wheel. The virtual system means a large number of networked databases.

Things like red traffic lights can act as triggers that launch a certain reaction. So when a robot sees a red traffic light, that launches a certain action. A great number of databases means that the AI can search for solutions more effectively. If the system already knows that the screwdriver is the right tool, it can find it faster. In that kind of operation, the robot goes to the toolbox. If there is no screwdriver there, it might ask for alternative places. The operator tells it that the tool might be on the table, so the robot searches that place. If the screwdriver is there, the robot adds the table to the list of places where a screwdriver could be.

Of course, the robot can automatically search for the screwdriver on the floor. The AI uses image recognition to separate the objects. If there is no screwdriver, the AI can fetch the image of that tool from its memory and ask: "Is this a screwdriver?" A human operator can check that the right image is in the AI's memory, and if the image is wrong, the operator can correct it. The robot is not necessarily physical. It can be an algorithm that collects data from the Internet.
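The trigger idea above, where an observation like a red traffic light launches a stored reaction, can be sketched as a simple lookup table. The entries are hypothetical examples drawn from the text, not a real control system.

```python
# Sketch of trigger-launched reactions: an observation is a key that
# launches the stored action; unknown observations fall back to a search.

reactions = {
    "red traffic light": "stop",
    "green traffic light": "proceed",
    "screwdriver missing": "ask operator for alternative places",
}

def react(observation):
    """Return the stored reaction, or fall back to a database search."""
    return reactions.get(observation, "search databases for a new response")

print(react("red traffic light"))  # stop
```

Because the same table entry serves every mission, the "stop at red light" knowledge is reused whether the robot is walking or driving, exactly as the text describes.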

A learning system means that the number of databases increases, and the number of connections between them also increases. If there are a lot of databases at the beginning of independent learning, the system can use more connections from the start of its operations. That means the system can use its databases more versatilely if a large amount of data is available at the beginning of the self-learning.

A self-learning or autonomous learning process means that the system can increase the number of databases and database connections without human assistance. There is a theory that all databases on the Internet could be interconnected into one large database entirety, and that would make it possible to create the ultimate artificial intelligence that interconnects all computers and other systems into one entirety.

The self-learning process can become more effective if the databases are pre-sorted using certain parameters. In that model, the database groups are sorted under topics like "visiting the shop", "cleaning the house", etc. Those databases contain the things that the AI must use to complete the mission successfully.

So when an AI-controlled robot takes an order to go shopping, it can interconnect the databases that describe how the robot should walk to the shop: where the traffic lights are, how it must react to them, and so on. The point is that robots can use the same databases for multiple purposes. The knowledge of how to react when a traffic light is red can be used in all missions like walking, driving cars, and other things.


Image: https://www.quantamagazine.org/computer-scientists-prove-why-bigger-neural-networks-do-better-20220210/



Wednesday, February 2, 2022

IBM unveils a 127-qubit quantum computer.


That quantum computer is a big step toward fully commercial quantum computers, and those quantum computers would open new and bright visions for military and civil purposes. Quantum computers can break any code that is made by using binary computers, and that causes a need to remake the entire security of the Internet. The history of quantum computers will repeat the history of binary computers.

At first, quantum computers are systems that are locked inside calculation centers. But then they will turn into everyman's machines, and perhaps quite soon regular personal computers will turn into quantum computers. Things like programming languages for quantum computers are bringing more users to them. A new programming language for quantum computers makes them easier to use.

And user-friendly applications like AI-based code translators are bringing quantum computers to more users. Such a translator means that well-known computer code like C++, Python, or Java can be translated to run on quantum computers. And a new quantum programming language will benefit the abilities of the quantum systems. So while we are waiting for personal quantum computers, we can use quantum systems remotely.

That makes it possible for users to rent time from quantum computer centers, and that brings more money to quantum computer projects. More projects and more solutions make quantum systems more common, but also more powerful and more multi-purpose.

Quantum computers are only platforms. The abilities of quantum computers are determined by the program code, and those systems might make the revolution in the civil and military systems needed to handle big entireties. The fact is that nobody expects a portable quantum computer to have the same capacity as data-center-based fixed systems.

In the same way, we don't think that a laptop is as powerful as a supercomputer. But when we remember that the most advanced supercomputers of the early 1980s had about 1 MB of memory, we can say that modern laptops are far ahead of those computers. And the same thing will happen with quantum computers.



The quantum network is at the door. The idea for the nanotube-based quantum network came from the nuclear test "Ivy Mike".


When the first full-scale thermonuclear weapon was detonated at Enewetak Atoll in the Marshall Islands, radiation from the bomb was conducted to the sensors through a vacuum tube. That allowed those particles to reach the sensor before the particles that traveled through the air, and that made it possible to observe the particles released by the hydrogen bomb.

Quantum wires will protect against outside radiation effects, and those nanotubes will be covered by electromagnetic fields. The electromagnetic fields shield the qubit against incoming effects. The qubit could be an electron that rides with laser rays in those nanotubes. That makes it possible to create a system that connects quantum computers by using qubit-based connections.

The quantum network can be a series of nanotubes. There might be a laser ray inside and a powerful electromagnetic field around those tubes. The purpose of those things is to minimize outside errors that affect the nanotube. There would be an absolute vacuum in that tube, and that makes it possible for qubits to travel through the tube.

Because the quantum computer sends photons through a vacuum, they reach the sensors faster than photons that travel in a medium. The other way is to make the laser ray and the photon ride in the tube in a fully controlled electromagnetic environment.


https://www.eejournal.com/article/ibm-unveils-127-qubit-quantum-computer/


https://en.wikipedia.org/wiki/Ivy_Mike


Image 1: https://www.eejournal.com/article/ibm-unveils-127-qubit-quantum-computer/


Image 2: https://en.wikipedia.org/wiki/Ivy_Mike



