
Monday, June 30, 2025

The new open-source robot is a tool for everyone.



Open source opens a path to open applications. In an open application, the physical tool is a platform that can do "everything and more" that humans can. An open application means that the robot itself is a platform that can be equipped with the tools and programs that determine its purpose and work. The human-shaped robot is a tool that can do "all the things" that humans can do. The robot body can be remotely controlled, or it can be an independently operating system. 

Macro learning, where a robot learns through modules, is a tool that uses the robot's limited computing capacity more effectively. The operator uses a system that records the things that the robot must do in certain situations. When the robot does something for the first time, the operator creates a macro. Then, in similar situations, the robot can launch the macro independently or ask the controller to perform that task. 
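The recording idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real robot API: the class names (`Macro`, `MacroRecorder`) and the action strings are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Macro:
    """A recorded sequence of primitive robot actions (hypothetical)."""
    name: str
    actions: list = field(default_factory=list)

class MacroRecorder:
    """Records operator-driven actions so the robot can replay them later."""
    def __init__(self):
        self.current = None
        self.library = {}

    def start(self, name):
        # Operator begins demonstrating a new task.
        self.current = Macro(name)

    def log_action(self, action):
        # Called for every primitive command the operator issues.
        if self.current is not None:
            self.current.actions.append(action)

    def stop(self):
        # Store the finished macro in the robot's library.
        self.library[self.current.name] = self.current
        self.current = None

    def replay(self, name):
        # Later, the robot can run the whole sequence autonomously.
        return list(self.library[name].actions)

rec = MacroRecorder()
rec.start("open_door")
rec.log_action("grip_handle")
rec.log_action("rotate_wrist_90")
rec.log_action("pull")
rec.stop()
print(rec.replay("open_door"))  # ['grip_handle', 'rotate_wrist_90', 'pull']
```

The point is that the expensive part (working out the action sequence) happens once, with a human in the loop; replay costs almost nothing.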

The idea is taken from text editors and spreadsheets, which let the user record commonly used sequences of actions. Macro programming for robots follows the same principles. The thing that makes human-shaped robots very good tools is that they can act as builders, cab and bus drivers, fighter pilots, and firefighters, and carry out all kinds of dangerous missions. The same robot can change its role in less than a second. The things that separate firefighter robots from bus-driving robots and military robots are skills, or datasets, that the system can use. The operator only needs to change the robot's dataset. 

Then the system takes on a new role. The datasets, or skills, are collections of macros. Those macros are activated when a situation matches their descriptions. This means that when a fighter-pilot robot operates, things like alarm signals activate certain macros. Open-source robots that act as cleaners are a good idea. But people don't always remember that changing the program turns those robots into tools that can operate as commandos. 
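The "skill dataset" idea described above, where the same hardware changes role by swapping a collection of trigger-to-macro mappings, can be sketched as follows. All skill names and events here are invented for illustration.

```python
# Hypothetical skill datasets: each maps a situation description to a macro name.
FIREFIGHTER_SKILLS = {
    "smoke_detected": "deploy_extinguisher",
    "person_down": "carry_to_exit",
}
BUS_DRIVER_SKILLS = {
    "red_light": "brake_to_stop",
    "stop_requested": "pull_over",
}

class Robot:
    def __init__(self, skills):
        self.skills = skills  # swapping this dict changes the robot's role

    def load_skills(self, skills):
        self.skills = skills

    def on_event(self, event):
        # An event that matches a description activates the stored macro;
        # anything unmatched is escalated to the human operator.
        return self.skills.get(event, "ask_operator")

robot = Robot(FIREFIGHTER_SKILLS)
print(robot.on_event("smoke_detected"))  # deploy_extinguisher
robot.load_skills(BUS_DRIVER_SKILLS)     # same hardware, new role
print(robot.on_event("red_light"))       # brake_to_stop
print(robot.on_event("smoke_detected"))  # ask_operator (no longer in dataset)
```

This also makes the essay's warning concrete: the dangerous part is not the body, it is whichever dictionary of triggers happens to be loaded.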

When researchers create robots that they can teach, we sometimes forget one thing: those robots can operate as networks. When somebody teaches one robot, or creates a macro for it, that robot can spread the macro over the entire network. And here is the problem with the "machine rebellion". Machines will not rebel. This is a key element in robotics. 
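The network effect described above, where teaching one robot teaches them all, can be sketched like this (a toy model with invented names, not any real fleet-management API):

```python
class Fleet:
    """Toy model: once one robot learns a macro, the whole fleet gets it."""
    def __init__(self, robots):
        # robots: mapping of robot id -> that robot's macro library
        self.robots = robots

    def teach(self, robot_id, name, macro):
        # One robot learns the macro from its operator...
        self.robots[robot_id][name] = macro
        # ...and propagates it to every other robot on the network.
        for library in self.robots.values():
            library.setdefault(name, macro)

fleet = Fleet({"r1": {}, "r2": {}, "r3": {}})
fleet.teach("r1", "open_door", ["grip", "rotate", "pull"])
print(all("open_door" in lib for lib in fleet.robots.values()))  # True
```

One demonstration, three (or a million) capable robots: this is why the scale of a networked fleet matters more than any single unit.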

But should we somehow reframe that argument? We should say that machines will not rebel autonomously. So we need not worry about a machine rebellion, but we must worry about a human-controlled machine rebellion. We can imagine a situation where somebody simply buys, say, a million housekeeping robots. Then that person simply changes those robots' programs. And then that system is ready for combat. 


Robots can be dangerous to humans for two reasons: 


1) They are made to be dangerous. That means that things like combat and security robots can be dangerous. 


2) Robots can turn dangerous if there are some errors in programming. 


All errors that machines, and especially computers, make are made by programmers. A computer is not automatically dangerous. In the same way, robots might not be dangerous if they operate as they should. The problem is that robots that are not programmed with sufficient accuracy become dangerous. In cases where robots refuse to stop their actions, they can turn dangerous. 

There is a possibility that, in the case of a fire, a robot that works as a house guard blocks the firefighters' operation. The reason can be that these kinds of emergency situations are not specified in its program, so when the firefighters come in, the robot can conclude that they are intruders. Another case can be that a law-enforcement robot has no description of things like umbrellas. That robot can conclude that those things are weapons. 

In another scenario, programmers forget to specify green T-shirts or green balloons for a car's autopilot program. That can cause an error if the autopilot identifies a green balloon as a green traffic light, and that causes a destructive situation. 
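The balloon failure mode above is essentially a classifier with no rejection option: when an object is missing from the training data, the system falls back to the nearest known label instead of admitting uncertainty. A toy sketch (all feature tuples and labels are invented for illustration):

```python
# Toy classifier lookup: the "training set" never contained green balloons.
KNOWN_OBJECTS = {
    ("green", "circle", "elevated"): "green_traffic_light",
}

def classify_unsafe(features):
    # No rejection threshold: an unknown object gets the closest known label.
    return KNOWN_OBJECTS.get(features, "green_traffic_light")  # dangerous fallback

def classify_safe(features):
    # A rejection option turns an unknown object into a cautious action instead.
    return KNOWN_OBJECTS.get(features, "unknown_object_slow_down")

balloon = ("green", "circle", "floating")   # not in the known-object table
print(classify_unsafe(balloon))  # green_traffic_light -> car proceeds
print(classify_safe(balloon))    # unknown_object_slow_down
```

The fix is not to enumerate every balloon in the world; it is to make "I don't know" a valid output that maps to a safe default behavior.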

In some models, one civilization can cause the end of another civilization by accident. The system's programmers simply forget to build the braking protocol into the computer. Then the probe arrives at the star system, the AI fails to slow down, and the probe impacts the planet at a speed of about 20% of the speed of light. 

That leads to the model in which the most dangerous thing in the universe is an early Kardashev type 2, or late Kardashev type 1, civilization that sends its first probes to another solar system. 

Such a civilization cannot fully handle that technology yet. Without wormholes, it takes years or centuries to get information from the spacecraft. And if there are errors in its programming, the spacecraft can impact the planet. The theoretical minimum mass of such a probe is about 10,000 tons, and if it impacts a planet, there is not much left. 
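The scale of such an impact is easy to check with the relativistic kinetic-energy formula E = (γ − 1)mc², using the figures stated above (10,000 metric tons at 0.2 c):

```python
import math

# Relativistic kinetic energy of a 10,000-ton probe at 0.2 c.
c = 2.998e8    # speed of light, m/s
m = 1.0e7      # 10,000 metric tons in kg
v = 0.2 * c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)     # Lorentz factor, ~1.021
kinetic_energy = (gamma - 1.0) * m * c ** 2     # joules

megaton_tnt = 4.184e15                          # joules per megaton of TNT
print(f"{kinetic_energy:.2e} J")                # ~1.9e22 J
print(f"{kinetic_energy / megaton_tnt:.2e} Mt") # ~4.4e6 megatons of TNT
```

Roughly 4 million megatons of TNT equivalent, so "not much left" is, if anything, an understatement.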


https://www.rudebaguette.com/en/2025/06/humanoid-bots-for-everyone-new-open-source-robot-unveiled-in-the-u-s-makes-advanced-robotics-affordable-for-total-beginners/


Monday, June 16, 2025

Why does an antique chess console beat Chat GPT in chess?



When we think about those antique ATARI consoles from 1978, we often forget that they were not as easy to beat as we think. Those chess programs handled every kind of data as numbers, while ChatGPT-type artificial intelligence handles the game as visual data. This is one of the things we must realize when we think about this kind of case. Those old chess consoles used very straightforward, linear tactics. The main difference between modern algorithms and old-fashioned computer programs is that the old programs are linear: they handle every piece and every move separately. 

So there is actually a chess book built into those programs, which the program follows. Those old chess programs were harder than some people believe. If you were a first-timer in chess, you would lose to those consoles. They played very aggressive, straightforward games against human opponents. The system tested the suitable moves for each piece separately, piece by piece. Because the program was linear, the moves were made in a certain order. In those chess programs, every move is determined by the program, square by square. The programmer specified the moves for every piece and every square separately, and that made those programs quite long. 
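The "book plus fixed scan order" behavior described above can be sketched in a few lines. This is a caricature for illustration, not the actual ATARI code; the book moves, scan order, and function names are all invented:

```python
# Hypothetical sketch of a 1970s-style linear engine: a hard-coded opening
# book plus a fixed scan order over the pieces. Nothing here is learned;
# the same position always produces the same move.
OPENING_BOOK = ["e2e4", "g1f3", "f1c4"]   # follow the book line move by move

PIECE_SCAN_ORDER = ["queen", "rook", "bishop", "knight", "pawn"]

def pick_move(ply, legal_moves_by_piece):
    # First follow the book...
    if ply < len(OPENING_BOOK):
        return OPENING_BOOK[ply]
    # ...then test each piece in a fixed order and take the first legal move.
    for piece in PIECE_SCAN_ORDER:
        moves = legal_moves_by_piece.get(piece, [])
        if moves:
            return moves[0]
    return "resign"

print(pick_move(0, {}))  # e2e4, straight from the book
print(pick_move(5, {"knight": ["b1c3"], "pawn": ["a2a3"]}))  # b1c3
```

Aggressive, fast, fully deterministic: exactly the profile the essay describes, and exactly the weakness the next paragraph exploits.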

Those old-fashioned chess programs have a weakness: if something goes wrong, they keep following the same line. There are a certain number of lines that the program can use, and each line has an end. Those programs can use complete tactics, but their limit is that they are fixed. They don't rewrite their databases and models when they lose, and that makes those old-fashioned consoles and video games boring. When people learn the tactics a program uses, they can beat it. The same limit is visible in old action games: the enemies always jump in front of the player at the same points. 

Then we can think about things like learning neural networks. Those networks can beat all old chess programs quite fast. The catch is that the neural network must see the console's game before it can win against those systems. AI is like a human: it requires practice and training. Without knowledge of the opponent's game, the AI is helpless. There are many ways to teach an AI to create tactics against old-fashioned programs. The system can use a modern chess program and then analyze the opponent's game to create tactics. 

The other way is that the system can analyze the source code and create a virtual machine that it can use to simulate the chess console's game. But what do we learn from the case where an antique console beats modern AI? Without training, the AI is as helpless as a human. If the AI has no knowledge of how to play chess, it must search through all the data, including the movements of the pieces, and that makes it as helpless as a human. 
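Because the old engine is deterministic, "learning its game" reduces to probing it once per line and caching the replies. A toy sketch of that idea, with a stand-in `opponent` function instead of a real console:

```python
def opponent(position):
    # Deterministic stand-in for the console: same input, same reply.
    return {"start": "e2e4", "after_e7e5": "g1f3"}.get(position, "a2a3")

counter_book = {}

def probe(position):
    # Observe the engine's reply once and cache it; since the engine never
    # changes, the cached reply is valid forever and a counter can be prepared.
    reply = opponent(position)
    counter_book[position] = reply
    return reply

probe("start")
probe("after_e7e5")
print(counter_book)  # {'start': 'e2e4', 'after_e7e5': 'g1f3'}
```

Against a learning opponent this cache would go stale immediately; against a fixed 1978 program it never does, which is the whole exploit.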

Those old-fashioned consoles are like RISC applications: they are made for only one purpose, and their code serves nothing but the chess game. Modern AI is a complicated system that can do many other things besides playing chess. And that makes those old consoles surprisingly difficult to beat, at least until the AI can break down their moves and tactics. 


https://en.shiftdelete.net/chatgpt-fails-in-chess/



