
Do you trust AI?




At the beginning of this text, I must say that AI is a computer program. Computer programs are like machines. They cannot handle every problem on Earth. They are meant for certain purposes. And if we use an AI algorithm outside its operational sector, it can cause a catastrophe. The world is full of algorithms. 

Some algorithms are meant for tasks like collecting marketing information from limited systems. Other AI systems are meant to control physical robots. So if we want to use AI for something, we must make sure the program is meant for that purpose. 

We must realize that using a marketing-analysis program to control robots would cause a disaster. If we want to expand the skills of an AI, that requires more complicated code than an AI with only one skill. Every single skill the AI has must be programmed into it. Machine learning makes independently learning machines possible. 


There are three types of learning machines. 


1) Semi-automatic learning systems. 


Whenever the system faces a new problem, it calls the operator. The operator creates the solution, and that solution is stored in the computer's memory. 
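That escalate-and-remember loop can be sketched in a few lines. This is a minimal illustration, not any real product's API; the class and function names are invented for the example.

```python
# Minimal sketch of a semi-automatic learning system (illustrative
# names only). Problems it has seen before are answered from memory;
# a new problem "calls the operator", and the operator's solution is
# stored so the system never has to ask about it again.

class SemiAutomaticLearner:
    def __init__(self, ask_operator):
        # ask_operator: a function standing in for the human operator.
        self.ask_operator = ask_operator
        self.memory = {}  # problem -> stored solution

    def solve(self, problem):
        if problem not in self.memory:
            # New problem: escalate to the operator and remember the answer.
            self.memory[problem] = self.ask_operator(problem)
        return self.memory[problem]
```

The key property is that the operator is consulted only once per distinct problem; repeats are served from memory.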


2) Independently learning machines. 


Those machines can create their databases automatically. Then those systems can automatically connect a database to a certain series of actions. 


3) Hybrid systems. 


Those systems can make solutions, or connections between databases, automatically. But if the system cannot find a database that fits the problem, it can ask for assistance from the operators. That kind of system can respond to multiple problems. 

Hybrid systems are close to the human way of learning things. If the system does not find a match for the case, it does not know how to respond to the case it faces. In that situation, the system asks the human operators for help in solving the problem. 

Whenever the system gets a new answer to a problem, that increases the data mass, so the system can create more connections. And it becomes more independent. 

When the system creates a solution, or the controller solves a problem, that solution is stored in the memory of the artificial intelligence for similar cases. That increases the number of skills the AI has. 
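The hybrid behavior described above can be sketched as a small program. This is a hedged illustration under my own assumptions (the similarity measure, the threshold, and all names are invented for the example); a real system would use a far more sophisticated matcher.

```python
# Sketch of a hybrid learning system: it first tries to match a case
# against its own stored cases; when no stored case is close enough,
# it falls back to a human operator and stores the new answer, so the
# system grows more independent over time.

from difflib import SequenceMatcher

class HybridLearner:
    def __init__(self, ask_operator, threshold=0.8):
        self.ask_operator = ask_operator  # stand-in for the human operator
        self.threshold = threshold        # how close a match must be
        self.cases = {}                   # known case -> stored action

    def respond(self, case):
        # Look for the most similar stored case.
        best, score = None, 0.0
        for known in self.cases:
            s = SequenceMatcher(None, case, known).ratio()
            if s > score:
                best, score = known, s
        if score >= self.threshold:
            return self.cases[best]  # solved automatically, no operator
        # No good match: ask the operator, then remember the answer.
        action = self.ask_operator(case)
        self.cases[case] = action
        return action
```

After the operator has answered once, sufficiently similar cases are handled automatically, which is the sense in which the growing data mass makes the system more independent.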


There are two types of AI. 


1) Passive AI. 


That system just collects data and analyzes it. 


2) Active AI.


That system interacts with the real world. The system collects data from sensors, analyzes that data, and then sends signals to a communication tool. That tool might be the traffic lights, if the AI controls traffic. 

The thing is that AI is not a stand-alone tool. If the AI interacts with the real world, the system requires tools like an internet connection or a physical robot to get things done. 

It requires sensors, and it must be connected to those sensors so it can get the data mass that it processes. But the AI also needs a tool for interacting with the real world. If it controls things like traffic, it needs a connection to the traffic lights. Without that connection, the AI does not affect the real world. 
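The sense-analyze-act cycle of an active AI can be sketched as one function. Everything here is a deliberately simplified assumption (the car-count sensor, the five-car rule, and the light-setting tool are all hypothetical); the point is only that the program needs both a sensor connection and an actuator connection to matter in the real world.

```python
# One cycle of a hypothetical traffic-controlling "active AI":
# read the sensor, analyze the data, send a signal to the tool
# (the traffic lights). Without set_lights, the analysis would
# never affect the real world.

def control_traffic(read_sensor, set_lights):
    """Sense, analyze, act through the connected tool."""
    cars_waiting = read_sensor()                    # data from the sensor
    state = "green" if cars_waiting > 5 else "red"  # trivial analysis step
    set_lights(state)                               # signal to the lights
    return state
```

A passive AI, by contrast, would be this function with the `set_lights` call removed: it still collects and analyzes, but changes nothing.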


All artificial intelligence programs or algorithms are meant to operate in certain sectors. 


The thing about artificial intelligence is that it doesn't make mistakes on its own. If we say that the AI makes mistakes, we could just as well say that regular programs, like text-handling tools, make mistakes. Every mistake the AI makes is encoded in its code. 

Another thing that causes mistakes for an AI is that the data it handles is somehow disturbed. If a sensor that sends data to the AI is corrupted, accidentally or on purpose, the data flowing into the system is not relevant. 

Corruption of a sensor means, for example, that the camera might be dirty, so the system would not get real information. When we think about trusting AI, we must realize that we have to check every single part of the system. The code itself must be complete and tested. But cable connections and the function of the sensors are equally important. 
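A simple version of "checking the sensor, not just the code" can be sketched as a sanity check on the incoming data. This is only an illustration under my own assumption that a dirty or covered camera produces a nearly uniform frame; real systems use much richer diagnostics.

```python
# Illustrative sketch: before trusting the AI's analysis, check the
# data feed itself. A dirty or covered camera often shows up as a
# frame with almost no variation, so a simple spread check can flag
# a corrupted sensor. A missing frame suggests a broken connection.

def sensor_looks_healthy(frame, min_spread=10):
    """frame: list of pixel brightness values (0-255).
    Returns False when the data is too uniform to be a real scene."""
    if not frame:
        return False  # no data at all: broken cable or connection
    spread = max(frame) - min(frame)
    return spread >= min_spread
```

If the check fails, the right response is to distrust the AI's output for that cycle and alert a human, rather than blaming the algorithm for a "mistake" that actually happened at the sensor.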


https://scitechdaily.com/measuring-trust-in-artificial-intelligence-ai/


Image: https://scitechdaily.com/measuring-trust-in-artificial-intelligence-ai/


https://thoughtsaboutsuperpositions.blogspot.com/
