Monday, June 30, 2025

The new open-source robot is a tool for everyone.



Open source opens a path to open applications. In an open application, the physical tool is a platform that can do “everything and more” that humans can do. An open application means that the robot itself is a platform that can be equipped with the tools and programs that determine its purpose and its work. The human-shaped robot is a tool that can do “all the things” that humans can do. The robot body can be remotely controlled, or it can be an independently operating system.

Macro learning, where a robot learns through modules, is a way to use the robot’s limited computing capacity more effectively. The operator uses a system that records the things that the robot must do in certain situations. When the robot does something for the first time, the operator creates a macro. Then, in similar situations, the robot can launch that macro independently or ask the controller to handle the task.
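
A rough sketch of what such a macro recorder could look like in code is below. The class and method names are illustrative assumptions, not part of any real robot SDK; the point is only that a macro is a named, recorded sequence of commands that can be replayed later.

```python
# Minimal sketch of macro recording and replay, assuming a hypothetical
# robot interface with a send_command() function. All names are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Macro:
    """A named, recorded sequence of low-level commands."""
    name: str
    trigger: str                     # description of the situation that should launch it
    commands: List[str] = field(default_factory=list)


class MacroRecorder:
    def __init__(self):
        self.recording: Macro | None = None
        self.library: dict[str, Macro] = {}

    def start(self, name: str, trigger: str) -> None:
        # The operator starts recording while tele-operating the robot.
        self.recording = Macro(name=name, trigger=trigger)

    def log_command(self, command: str) -> None:
        # Every command the operator sends is appended to the macro.
        if self.recording is not None:
            self.recording.commands.append(command)

    def stop(self) -> None:
        # The finished macro is stored so the robot can replay it later.
        if self.recording is not None:
            self.library[self.recording.name] = self.recording
            self.recording = None


def replay(macro: Macro, send_command) -> None:
    """Replay a stored macro through the robot's command interface."""
    for command in macro.commands:
        send_command(command)
```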

The idea is taken from text editors and spreadsheets, where it is possible to record commonly used actions. Macro programming for robots follows the same principle. The thing that makes human-shaped robots very good tools is that they can act as builders, cab and bus drivers, fighter pilots, and firefighters, and carry out all kinds of dangerous missions. The same robot can change its role in less than a second. The things that separate a firefighter robot from a bus-driving robot or a military robot are the skills, or datasets, that the system can use. The operator only has to change the robot’s dataset.

And then the system takes on a new role. The datasets, or skills, are collections of macros. Those macros are activated when an event matches their descriptions. This means that when a fighter-pilot robot operates, things like alarm signals activate certain macros. Open-source robots that act as cleaners are a good idea. But people don’t always remember that changing the program turns those same robots into tools that can operate as commandos.
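
A sketch of how a skill dataset and its triggers might work, building on the macro sketch above, follows. The substring-based trigger matching is a deliberate simplification; a real system would sit on top of a perception pipeline, and all names here are again illustrative assumptions.

```python
# Sketch of role switching by swapping a skill dataset, assuming the Macro
# class and replay() function from the previous sketch.
from typing import Dict


class SkillSet:
    """A role is just a named collection of macros keyed by their triggers."""
    def __init__(self, role: str, macros: Dict[str, Macro]):
        self.role = role
        self.macros = macros

    def match(self, event: str) -> Macro | None:
        # Return the first macro whose trigger description matches the event.
        for trigger, macro in self.macros.items():
            if trigger in event:
                return macro
        return None


class Robot:
    def __init__(self, send_command):
        self.send_command = send_command
        self.skillset: SkillSet | None = None

    def load_skillset(self, skillset: SkillSet) -> None:
        # Changing the robot's role is only a dataset swap.
        self.skillset = skillset

    def handle_event(self, event: str) -> None:
        macro = self.skillset.match(event) if self.skillset else None
        if macro is not None:
            replay(macro, self.send_command)
        else:
            print(f"No macro for '{event}', asking the operator for guidance")
```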

When researchers create robots that they can teach, we sometimes forget one thing: those robots can operate as a network. When somebody teaches or creates a macro for one robot, that robot can spread the macro over the entire network. And here is the problem with the “machine rebellion”. Machines will not rebel. This is the key element in robotics.
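
As a sketch, sharing a taught macro across a fleet could be as simple as the code below. The in-memory “network” stands in for whatever message bus or fleet-management service a real deployment would use; nothing here refers to an actual robot networking API.

```python
# Sketch of macro sharing across a robot fleet, assuming the Robot and Macro
# classes from the previous sketches.
class FleetNetwork:
    def __init__(self):
        self.robots: list[Robot] = []

    def register(self, robot: Robot) -> None:
        self.robots.append(robot)

    def broadcast_macro(self, macro: Macro) -> None:
        # A macro taught to one robot is immediately available to every
        # robot on the network that carries a skill dataset.
        for robot in self.robots:
            if robot.skillset is not None:
                robot.skillset.macros[macro.trigger] = macro
```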

But should we somehow reformulate that argument? We should say that machines will not rebel autonomously. So we don’t need to worry about a machine rebellion, but we do need to worry about a human-controlled machine rebellion. We can imagine a situation where somebody simply buys, let’s say, a million housekeeping robots. Then that person simply changes those robots’ programs. And then that system is ready for combat.


Robots can be dangerous to humans for two reasons: 


1) They are made to be dangerous. That means that things like combat and security robots can be dangerous. 


2) Robots can turn dangerous if there are some errors in programming. 


All the errors that machines, and especially computers, make are made by programmers. A computer is not automatically dangerous. In the same way, robots should not be dangerous if they operate as they should. The problem is that robots that are not programmed with sufficient accuracy become dangerous. In cases where robots refuse to stop their actions, they can turn dangerous.

There is the possibility that, in the case of a fire, a robot that works as a house guard blocks the firefighters’ operation. The reason can be that these kinds of emergency situations are not defined in its program. So when the firefighters come in, the robot can decide that they are intruders. Another case could be a law-enforcement robot that has no description of things like umbrellas. That robot can decide that those objects are weapons.
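
A toy example makes the guard-robot failure concrete. The roles and the suggested fix below are assumptions for illustration, not taken from any real security product; the point is that a rule set without an emergency case leaves the robot no option but its default behavior.

```python
# Sketch of how an incomplete rule set produces the guard-robot failure
# described above. Categories and the fix are illustrative assumptions.
AUTHORIZED = {"resident", "registered guest"}

def classify_visitor(role: str, emergency: bool = False) -> str:
    # Buggy version: the emergency flag is never consulted, so a firefighter
    # entering during a fire is treated like any other unknown person.
    if role in AUTHORIZED:
        return "allow"
    return "treat as intruder"

def classify_visitor_fixed(role: str, emergency: bool = False) -> str:
    # The fix is an explicit rule for emergency responders; without it the
    # robot has no concept of "firefighter during a fire" at all.
    if emergency and role == "firefighter":
        return "allow and assist"
    if role in AUTHORIZED:
        return "allow"
    return "alert operator"   # defer to a human instead of acting alone
```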

In another scenario, programmers forget to define green T-shirts or green balloons for a car’s autopilot program. That can cause an error if the autopilot interprets a green balloon as a green traffic light. And that causes a destructive situation.
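
A toy check shows the failure mode. Real autopilots use trained detectors rather than rules like this, but the effect of a missing object class is the same: anything green that the system has no better label for can be taken for a green light.

```python
# Sketch of why a purely color-based check misreads a green balloon as a
# green traffic light. The detection logic is a deliberate toy.
def naive_is_green_light(color: str, shape: str) -> bool:
    # Buggy version: only color is checked.
    return color == "green"

def safer_is_green_light(color: str, shape: str, on_signal_mast: bool) -> bool:
    # Requiring shape and mounting context rules out balloons and T-shirts.
    return color == "green" and shape == "circle" and on_signal_mast

print(naive_is_green_light("green", "balloon"))          # True  -> car drives on
print(safer_is_green_light("green", "balloon", False))   # False -> car waits
```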

In some models, one civilization can cause the end of another civilization by accident. The system encoders simply forget to write the braking protocol into the probe’s computer. And then that probe arrives in the star system. The AI simply never slows down, and the probe impacts the planet at a speed of about 20% of the speed of light.

That leads to the model in which the most dangerous thing in the universe is an early Kardashev type 2, or late Kardashev class 1, civilization that sends its first probes to another solar system.

That civilization does not yet fully control that technology. Without wormholes, it takes years or centuries to get information from the spacecraft. And if there are errors in its programming, that spacecraft can impact the planet. The theoretical minimum mass of such a probe is about 10,000 tons, and if it impacts a planet, there is not much left.
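
A back-of-the-envelope calculation, using the figures from the text and assuming metric tons, shows why there would not be much left:

```python
# Impact energy of the probe described above, assuming 10,000 metric tons
# moving at 20% of the speed of light.
c = 299_792_458.0          # speed of light, m/s
m = 10_000 * 1_000.0       # 10,000 metric tons in kg
v = 0.2 * c

gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
kinetic_energy = (gamma - 1.0) * m * c ** 2        # relativistic kinetic energy, joules
megatons_tnt = kinetic_energy / 4.184e15           # 1 megaton of TNT = 4.184e15 J

print(f"{kinetic_energy:.2e} J  (~{megatons_tnt:.1e} megatons of TNT)")
# Roughly 1.9e22 J, on the order of a few million megatons of TNT.
```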


https://www.rudebaguette.com/en/2025/06/humanoid-bots-for-everyone-new-open-source-robot-unveiled-in-the-u-s-makes-advanced-robotics-affordable-for-total-beginners/


