Wednesday, June 18, 2025

The AI removes trainees from workplaces. And that is not a good thing.



The AI doesn't take your jobs. It denies new workers entry to the field, and that raises questions about how workers can improve their skills. What if humans, especially ICT workers, lose their basic skills? What if all programming turns into cases where the systems worker just gives orders to the AI, using normal language, and the AI follows those instructions like a human coder?

But the main thing is that the demand for effectiveness and ultra-capitalism forces company leaders to make that choice. They choose AI instead of hiring programmers, and that is one of the biggest problems in the ICT business. We must also remember that today is not the end of the road. AI is the next step in a continuum where things like software coding were outsourced to countries like India. Western coders didn't get jobs because it was cheaper to hire experienced workers from India or other distant countries.

When companies outsource coding to the Far East, they become vulnerable through that work. Some of those workers work in countries that are members of blocs like BRICS, so they can work under the control of intelligence services or other authorities who order them to make spyware and other malicious tools. We are people who live in Western democracies. We learned that if somebody acts as a spy, that person will be arrested. In the same way, we believed that if somebody hacks into some country, we just need to request that the local authorities arrest those people. We didn't expect that those hackers worked for the Chinese intelligence service or government, and that they were protected by the government.

The next logical step is that the AI starts to write code, and that takes jobs from humans. It means that programmers lose their basic skills, and without basic skills they cannot build advanced skills. Programming is like any other learning. If we compare a programmer's progress with going to school, we must realize that every person in the world once wrote their first words. Before that, they had to learn to read. Every single person reads their first word once. Before we can learn advanced mathematics, we must learn the basics. So we all once calculated 1+1=2. If we don't learn the basics, we cannot learn anything new.

And that is the beginning. Without the first grade, we will not learn anything. In the same way, when we want to learn to code or make computer programs, we must learn basic skills. Without basic skills, there is no ability to learn advanced programming. If companies don't offer jobs to trainees, people cannot learn the skills that they need at an expert level.

That has a reflection across the entire ecosystem. If the boss doesn't know how the code is made, that can cause problems. To observe and supervise subordinates' work, the boss needs to know what they do. And if the boss doesn't know what the subordinates should do, that creates a situation where somebody can inject malicious code into the program. In the age of modern communication, a system needs less than a second to infect its target.

There are articles about a North Korean mobile telephone that was secretly smuggled to the West. Those mobile telephones are an Orwellian dystopian nightmare. By connecting those telephones to AI, the leader can surveil every single citizen 24/7. The AI can tell if somebody uses forbidden words.

And that raises a question: what if somebody slips that kind of mobile telephone into some general's office? Maybe some key person's family member will "win" a mobile telephone from the net, and then there are the surveillance programs. There is also the possibility that regular hackers get those telephones and copy and customize that software for their own purposes.

Every expert has been a trainee once in their life. The requirement in working life is that when a person comes to a workplace, they must know everything from the first minute they open their computer. This is the gap that offers the opportunity to AI: the AI learns things in minutes. Another thing that we must realize is national security. If we outsource critical coding to some distant country, we cannot control what those people do. They can give the critical code to hackers who work for China or North Korea. In those countries, the government is the ultimate authority, and there is no way to speak against its orders. If the government orders people to work as hackers, a person has no chance to refuse that suggestion.

And those systems can turn very dangerous in such cases. If somebody builds a backdoor into a system, it offers a route even into critical infrastructure. What if somebody orders all Chinese-made routers and other network tools to shut down? That can cause problems in everyday life. And if one wrong microchip slips into the computers that control an advanced stealth fighter, that component can deliver computer viruses into the system, or that kind of tool can steal vital data from it. Every time something is made outside the watching eye, there is a possibility that somebody adds something that can cause very big trouble. Things like microchips equipped with malicious software are tools that can break national security on a large scale.



Tuesday, June 17, 2025

Privacy versus security.



When we talk about security, we must ask whose security the act serves. We know that the internet is the greatest propaganda platform that we have ever seen. The net is full of tools that are used to prove that writers are humans. AI-based applications offer the possibility to share data to even billions of homepages and social media applications in seconds.

Data that AI creates can overload almost any private server on our planet, and that makes it possible to use AI-created data to block entire web services. Confirming that people who use the net are who they claim to be is one of the arguments used to say that people should use their real identities on the net. Anonymous use and confirmed use both have their supporters. Anonymous use allows users to make reports about corruption and many other things.

And that makes people support that way of using the net. On the other side, anonymous use offers a chance for cyber attackers and disinformation deliverers to operate on the net. Things like AI agents can operate in targeted networks, steal information, and deliver it to other users. That kind of thing could be curbed by forcing people to confirm that they are humans, and then to tell who they really are. But that is similar to U.S. firearms laws: those rules don't stop propagandists or psychological operators from sending their fake information to the net.

Those people can use fake or stolen identities, especially if they operate under state control, and their own authorities can confirm those faked identities. If we want to deliver propaganda from Russia, as an example, we must have computers in some state like Finland. Then we can open a VPN connection to that computer from Moscow and start to deliver the information to the net using that remote computer located in Finland. So, in that case, we would ride on Finnish networks. The operator based in Russia tends to stay away from Western countries: the assistants do everything, and that lets the person who actually knows something stay out of reach.



Bye bye algorithms.





We are waiting for the next step in the AI development and research process. Many big technology bosses say that this is the end of algorithms, and the next step is self-learning AI. That kind of system can communicate with robots and all other systems. A self-learning system can learn in two ways: it can create new models that it uses in certain situations, or it can connect a new module into itself. The reason why AI development is moving toward self-learning systems is simple.

New algorithms are very complicated, and their training requires so much time that self-learning models are better. What makes this kind of thing complicated is that the new AI must operate across larger areas. It must control things like street-operating robots, so it needs a more effective way to learn. Street-operating robots can use platforms that look like computer games to learn how to cross roads and where to find things like apples if they go shopping for their owner. But then those robots must face unexpected things.

Robots can share their mission records with the entire system, and that helps to develop methods for operating in natural situations. Basically, the difference between a learning system and a normal system is that the learning system can create a new model and then compare the original model with that new model. Parameters determine which way to act is better, and if the new model is better, it replaces the old one. This means that the fixed model turns into a flexible model that lives with its environment.
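As a concrete illustration of that compare-and-replace loop, here is a minimal sketch in Python. The "model" is just a list of numbers, and the evaluate() scoring function is a hypothetical placeholder, not any real robot mission:

```python
import random

def evaluate(model):
    """Score a model on a simulated mission (higher is better).
    A stand-in for real mission records."""
    return sum(-(p - 0.5) ** 2 for p in model) + random.gauss(0, 0.01)

def mutate(model, step=0.1):
    """Create a candidate model by perturbing the current one."""
    return [p + random.uniform(-step, step) for p in model]

current = [random.random() for _ in range(4)]
for generation in range(1000):
    candidate = mutate(current)
    # Compare the original model with the new one; keep the better.
    if evaluate(candidate) > evaluate(current):
        current = candidate
```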

That thing is AGI, or artificial general intelligence. That kind of AI is everywhere, and it can connect multiple systems that seem different under one dome. The biggest difference between that AI and modern algorithms is that the system can bring new data from sensors into the data flow that travels in the system. AGI is a system that might be "god-like", but if it cannot create genetic code, that is, manufacture DNA, it might have no ability to control living organisms. However, the system has many ways to manipulate evolution.

The AGI can match couples that have certain skills. The fact is that dating applications are effective for dating, and it's possible that AGI will also make it possible to select "perfect" spouses. So people who are not "perfect" are left without a partner, and only people who are suitable, or similar, produce descendants. This causes segregation and loss of diversity.

And that is a sad thing for humans. Self-learning AI is a tool that can learn from its mistakes: it learns what to do and what it must not do. The thing is that self-learning AI is the new common tool that can do almost everything. The system learns like humans, and that makes it the so-called AGI, one tool that fits all. The system can control things like robots.

Robots can collect data for that system. The AGI works like this: one robot sits on a chair, the teacher teaches things to it, and through it the robot shares the new things across the entire AI and network. Training that kind of system requires a lot of information, and companies like Meta have that data. AI also makes it possible to create things like AI agents that sneak around and observe what happens in the network. Robots can learn from other robots: when one robot makes a mistake, the lesson scales across the network, so other robots know not to make the same mistake again.


Monday, June 16, 2025

Why does an antique chess console beat ChatGPT in chess?



When we think about those antique Atari consoles from the late 1970s, we always forget that they were not as easy to beat as we thought. Those chess programs handled every kind of data as numbers, while ChatGPT-type artificial intelligence handles the game as visual data. This is one of the things that we must realize when we think about this type of case. Those old chess consoles used very straight, linear tactics. The main difference between modern algorithms and old-fashioned computer programs is that the old programs are linear, and they handle every piece and movement separately.

So there is actually a chess book in those chess programs that the program follows. Those old chess programs were harder than some people believe: if you were a first-timer in chess, you would lose to those consoles. They played very aggressive, straight games against human opponents. The system tested the suitable movements for each piece separately, piece by piece. Because the program was linear, the movements were made in a certain order. In those chess programs, every movement is determined by the program square by square: the programmer determined the movements for every piece and every square separately. And that made those programs quite long.
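A minimal sketch of that linear, table-driven style in Python, assuming a toy board encoding; the piece order, move table, and is_legal() check are invented placeholders, not the actual console code:

```python
# Pieces are scanned in a fixed order, each piece's moves come from a
# hand-written table, and the first legal move wins: no search, no learning.

PIECE_ORDER = ["queen", "rook", "bishop", "knight", "pawn", "king"]

MOVE_TABLE = {                      # fixed offsets per piece, square by square
    "rook":   [(1, 0), (-1, 0), (0, 1), (0, -1)],
    "bishop": [(1, 1), (1, -1), (-1, 1), (-1, -1)],
    # ... one hand-written entry for every piece type
}

def is_legal(board, square, offset):
    """Placeholder legality check for the sketch."""
    return True

def pick_move(board, my_pieces):
    # Linear scan: the same order every game, which is why a player
    # who learns the pattern can beat the machine.
    for piece in PIECE_ORDER:
        for square in my_pieces.get(piece, []):
            for offset in MOVE_TABLE.get(piece, []):
                if is_legal(board, square, offset):
                    return piece, square, offset
    return None
```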

Those old-fashioned chess programs have a weakness: if something goes wrong, they keep following the same line. There is a certain number of lines that the program can use, and each line has an end. Those programs can use complete tactics, but their limit is that they are fixed. They don't rewrite their databases and models when they lose, and that makes those old-fashioned consoles and video games boring. When people learn the tactics a program uses, they can beat it. The limit of those video games is visible in action games: there are always the same points where enemies jump in front of the player.

Then we can think about things like learning neural networks. Those networks can learn to beat all old chess programs quite fast. The catch is that the neural network must see the console's game before it can win against it. AI is like a human: it requires practice and training. Without knowledge of the opponent's game, the AI is helpless. There are many ways to teach an AI to create tactics against old-fashioned programs. The system can use some modern chess program and then analyze the opponent's game to create tactics.

The other way is that the system analyzes the source code and creates a virtual machine that it can use to simulate the chess console's game. But what do we learn from the case where an antique console beats modern AI? Without training, the AI is as helpless as a human. If the AI has no knowledge of how to play chess, it must search through all the data, including the movements of the pieces, and that makes it as helpless as a beginner.

Those old-fashioned consoles are like RISC applications: they are made for only one purpose, and their code serves the chess game completely. Modern AI is a complicated system that can do many other things besides playing chess. And that makes those old consoles somehow difficult to beat, at least until the AI can break down their movements and tactics.


https://en.shiftdelete.net/chatgpt-fails-in-chess/




Sunday, June 15, 2025

The gentle singularity: what is the limit of the singularity?



The next step for artificial intelligence is artificial general intelligence, AGI. That is the tool that connects every computer under one dome. The AGI is a self-learning system that develops its models and interconnects them with sensors that bring new data into the system. That means we can interconnect every single computer in the world into one entirety. We may think that social media is something new, forgetting that long before Facebook there were letter clubs: "post offices" where people could send letters to other people, who could use pseudonyms.

Social media is not a new thing. Facebook and other applications are products of a long route that started in ancient Rome and Greece, where wall writing, or graffiti, was the beginning of social media. Social media interconnects people from around the world. The new things that the net brought were speed and, maybe, the low price of those systems. But as we know, there are no free lunches: the thing that doesn't cost anything can have the highest price. The ability to create a singularity between computers brings the ability to share and receive information with new force.


And then the next step for AI and computers is the brain-computer interface, BCI. BCI means the ability to control computers using brain waves, or EEG. The system can interact with computers, and it can also operate between people. Such a system could interconnect all animals and humans into one entirety. There are risks and opportunities in that model. If we do things wrong, we create a collective mind with only one opinion: we interconnect our minds and computers into a giant brain. That is a very sad thing, because it destroys our own creativity.

The biggest problem with social media, AI-based dating applications, and finally the singularity is that the system destroys diversity. People want to discuss with, and date, only people who are similar to them. That means our way of thinking starts to turn homogeneous. That creates a situation where we have no people who disagree with us: we hear only ideas and opinions that please us, and we take only people who are similar to us into our social networks. So, in the worst case, we and our networks operate like an algorithm that recycles data through the same model. That means we, our team, or our network will not get anything new into our model; we just recycle the same material if we don't accept diversity.

Our mind needs ideas and motivation to make new things. And where can we get those new ideas? We can discuss things, or we can get information that some other party made, and then work on and refine the information that we get from web pages and other media. Without opponents, our productivity and creativity die, because we have nobody who brings new ideas into our minds.

In some models, the network can develop things by playing games against some other network. The network creates a simulation, and then the model tries to fight against that simulation. If the model wins, there is no need to develop it; if it loses, it requires adjustment. And that means the system requires data, and then it requires optimization.


In the novel "Peace on Earth", the author Stanislaw Lem introduced a model where a simulator creates a model and the other side fights against it. The better simulation becomes the model, until something creates a new, better model.
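A minimal sketch of that simulator-versus-model loop, with a toy one-number "tactic" and a guessing game standing in for a real simulation; both sides are adjusted when they lose, so the pair ratchets upward together:

```python
def duel(model, simulator):
    """The simulator sets a threshold; the model wins if it clears it.
    A hypothetical stand-in for a real simulated fight."""
    return model >= simulator

model, simulator = 0.1, 0.5
for _ in range(200):
    if duel(model, simulator):
        simulator = min(1.0, simulator + 0.01)   # raise the challenge
    else:
        model = min(1.0, model + 0.02)           # adjust the loser
# The better simulation becomes the model, round after round.
```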


There is another way to operate as a network: the network can accept individually operating members. The idea is that every operator connected to the network is autonomous. Those subsystems operate autonomously while they collect data. When the network doesn't need order, it can be chaotic. And when an actor sees something that requires a lot of information, a roll call goes over the network: "Everybody stop, the network needs your capacity". That command makes those autonomous subsystems leave their own work and start to solve the bigger problem.

So, the network operates as a whole when it requires that ability. The network can have subsystems, which means that in an extreme crisis those subnetworks create models that should handle the problem.

Those subsystems can be individual actors. When the individual actors play against each other, the losing actor joins the winner and starts to develop the model that won. Then the actor pairs play against each other, and again the losing team joins the team that won and starts to develop the tactics that won the game. The actor groups, or networks, expand as new actors join bigger entities.

Those subsystems play against each other. When a subsystem loses, its tactics are dropped. The losing actor joins the winner's team and gives its capacity to that team, or network. The network keeps dropping losing tactics and action models until there are two networks against each other, and the better one wins. This is one way to create answers and solutions to complicated problems. The expanding network could be the thing that brings solutions to many problems. When the network is in chaotic mode, its actors search for data on its behalf.
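Here is a minimal sketch of that merge-with-the-winner tournament. Each actor is a toy tactic (one number) with a capacity, and score() is a hypothetical stand-in for a real match:

```python
import random

def score(tactic):
    """Toy match result: tactics near 0.42 win. A placeholder."""
    return -(tactic - 0.42) ** 2 + random.gauss(0, 0.001)

actors = [{"tactic": random.random(), "capacity": 1} for _ in range(16)]

while len(actors) > 1:
    random.shuffle(actors)
    next_round = []
    for a, b in zip(actors[::2], actors[1::2]):
        winner, loser = (a, b) if score(a["tactic"]) >= score(b["tactic"]) else (b, a)
        # The loser joins the winner: losing tactics are dropped,
        # and the capacity pools behind the tactic that won.
        winner["capacity"] += loser["capacity"]
        next_round.append(winner)
    actors = next_round

print(actors[0])   # the surviving tactic with all the pooled capacity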



 

You, me, and the language model.



Who is responsible if people hand their thinking over to some AI?


Why do we let AI think for us? The road to this point is long and rocky. When we order the AI to make essays and poems, we follow a journey that began a long time ago. When we read essays and poems made by AI, we can say how those things destroy our creativity. At this point, we might note that we could just as well buy a poetry book and copy a poem from it.

So, in this case, we simply copy a poem from a book that some great poet made. Then we can look in the mirror and ask the image: who made the poem that we just wrote? We wrote a text that some other person invented. If we think about this case and connect AI to that continuum, we see that AI is taking the role of the poetry book. From the point of view of the receiver or reader, it makes no difference who made the poem.


Is it some ChatGPT, or is it some Lord Byron? The poem was not made by the person who wrote it on the card. Then we can think about people like Sam Altman, who push AI further and further. We blame them and the AI and search for mistakes in them, but we forget our own responsibility. The user makes the decision to use AI, so we decide whether we make poems ourselves or let some other actor make them. We are responsible for the things that we make and introduce to people. When we make and introduce poems ourselves, we face very pithy criticism.

When we say that people must go to libraries, read books, and do other things, we must be honest: are we only jealous of people who have tools and skills that we didn't have 20-30 years ago? When we measure work effectiveness, we gaze at things like the time a person uses for the work. And if the work is done faster, we give that person new work. Would that be an encouraging way to work? If some person does the work faster than others and the work is well done, should we give the rest of the time to that person as free time?

Or should we give a new job to that worker? And then order the person back to the office, put an artificial smile on our faces, and fire that worker because that person works better and faster than we do? Or should we taunt that person about the poems that this individual worker published on social media?


We can also remember the person who works part-time in our company. That means we can use our supreme control and show everybody how jealous we can be. If a person goes to poetry courses at the labor college outside working time, we can find a new shift for that person. We have some ideal vision of what a subordinate should be, and if a subordinate does not fit into that vision, we must change that person to fit the mold.

That can be crushing. So, it's easier to take a book from the bookshelf and make a copy of some well-known poem. That means we can say that the person who invented the poem was somebody else. That might be impressive. We didn't use our own brains for that poem. We did hard work if we took a pen and copied those words, but it is easier to make the copy using a computer. Or maybe we find the poem on the net and use copy-paste; then we don't need to use our brains at all. AI is the tool that releases our resources from thinking to something else. When we think about cases where somebody makes their own poems, we must realize that every poet once writes their first text.


We choose the easy way. If we want to write poems or essays, we must sit at our computer and take the trouble to produce the text. If we have other things to do, we have no time to write texts and think about the things we make. It is not Sam Altman or anybody else than you and me who decides whether we use AI. It makes our life easier. It leaves us time for a social life in discos and bowling alleys. But is that the advance that we want? The answer is that the decisions we make show the road.

People like Sam Altman are basically businessmen. They follow Maslow's hierarchy of needs: when our basic needs are filled, we want more. AI is the thing that allows us to transfer all our productivity to some computer, and that is what makes AI advance faster than we expected. When the AI satisfies some need, there is another need it must respond to. This is the dynamic in AI development. AI can make things better than humans.

Or, we can say that it can make some things better than humans. But then we must realize that AI must also learn new things. There was a story that some antique Atari computer beat ChatGPT in chess. That happened because nobody ever taught ChatGPT chess. In the same way, we would lose every chess game if we never practiced the game. Every skill that AI has is a module, and if the AI has no module for something, it's helpless. AI requires lots of power: an AI or LLM server requires its own power platform, and when we develop new, more scalable AI systems, we need new and more powerful computers.

But still, we must realize that the AI that makes everything cannot make things from nothing. Those systems require massive databases and as much power as some cities. That waste heat can also be used for energy production, but the problem is always the temperature. New solutions, like biological AI where microchips communicate with living neurons, are coming. And in the wrong hands, those systems are dangerous.



Wednesday, June 11, 2025

Artificial intelligence and spam filters make BCI more versatile.



The problem with the brain-computer interface, BCI, is similar to speech-command applications, but thoughts are not as easy to control as speech. One possibility is that the person looks at the camera and uses gestures to mark where a command starts. The start gesture can be something like a cup that the user shows to the computer, and the end gesture can be something like a spoon, so that the computer knows where the command ends. Those markers can also be finger signs, and they should be determined before the speech-command session.

So the person shows the mark to the web camera and registers gesture one, and then the system asks for gesture two. The system must also recognize the speaker's voice so that things some other person says will not disturb the computer. The gestures allow a person to discuss and talk in the room, and voice recognition allows the computer to filter unnecessary and useless things out of the text it receives. Then a grammar-check program can turn the output of the speech-to-text application into literal text. After that, the system passes the text to the application and turns it into commands.
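A minimal sketch of that pipeline, assuming the camera, microphone, and speech-to-text stages already produce discrete events; every helper here is a hypothetical placeholder, not a real library API:

```python
def normalize(text):
    """Stand-in for the grammar/dialect cleanup step."""
    return text.strip().lower()

def dispatch_command(text):
    """Stand-in for the application that executes the command."""
    print("command:", text)

def run_pipeline(events):
    capturing, words = False, []
    for kind, value, speaker in events:        # (type, payload, who spoke)
        if kind == "gesture" and value == "start":
            capturing, words = True, []        # start gesture opens capture
        elif kind == "gesture" and value == "end" and capturing:
            capturing = False                  # end gesture closes capture
            dispatch_command(normalize(" ".join(words)))
        elif kind == "speech" and capturing and speaker == "owner":
            words.append(value)                # other voices are filtered out

run_pipeline([
    ("gesture", "start", "owner"),
    ("speech", "Open", "owner"),
    ("speech", "ignore me", "guest"),          # dropped: wrong speaker
    ("speech", "the mail", "owner"),
    ("gesture", "end", "owner"),
])
```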

The AI requires spam filters in its training period. The spam filter removes the white noise, or so-called non-useful information. Those spam filters can also be adapted to brain-computer interfaces, BCIs. AI-based systems can remove so-called white noise from neural signals, and that makes it easier for the BCI to separate information that is purposely delivered to it from thoughts that are not meant as commands.
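One classic way to implement that kind of noise filter is a band-pass filter over the EEG signal. This is a minimal sketch with illustrative frequencies and sample rate, not a real BCI specification:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0                       # sample rate in Hz (typical EEG headset)

def bandpass(signal, low=8.0, high=30.0, fs=FS, order=4):
    """Keep the alpha/beta band often used in motor-imagery BCIs,
    discarding broadband (white) noise outside it."""
    nyquist = fs / 2.0
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, signal)

t = np.arange(0, 2.0, 1.0 / FS)
intended = np.sin(2 * np.pi * 12.0 * t)          # a 12 Hz "command" rhythm
noisy = intended + 0.8 * np.random.randn(t.size)  # plus white noise
cleaned = bandpass(noisy)                         # command band recovered
```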

The person must not think about anything else than the commands that the system should follow or complete. If a person thinks about something else, that can cause serious problems. The biggest problem with the BCI is the user. The AI can translate EEG curves into actions, but the problem arises if a person suddenly starts thinking about something else.

One thing that could make the BCI more effective is that a person moves something like a finger before giving orders to the system. But the problem is how to control thoughts at that point. There is also the possibility that very flat microchips will be placed under the skin on the skull, with an antenna or contact point on the skull where systems can download information and communicate between brains and computers.

The system can also charge those microchips' batteries wirelessly. If those microchips can be installed under the skin on the skull, they are far easier to install than regular brain implants: the surgeon must just find the right places and put the microchips under the skin. The bone mass will glue those microchips to the right points on the skull. BCI microchips can communicate with the internet through mobile telephones, or they can use a computer's Bluetooth connection.

But things like biological power sources, such as electricity-producing cells, can also feed those systems' need for electricity. The fact is that biotechnology, like the ability to create cloned neural tracks, makes it possible to restore the ability to move to more people. Those cloned neural tracks can also make it possible for a microchip to communicate with living neurons through the skin. The neurons that form artificial neural tracks could be connected to the microchip under the skin, and then a sensor in a hat or helmet communicates with that microchip.

The next-generation BCI systems might not need surgery. The goal, or guiding light, should be a system that uses sensors that are as easy to wear as hats. The problem is that those hats must position the sensors in exactly the right places.

The fact is that a system called magnetoencephalography, MEG, can read brain data from outside the skull. Connecting interactive microchips to things like fingers should also be easier than implanting them into brains. Such systems could connect to neuro-implants and open neuroports to other systems in a more versatile way than traditional brain implants.


https://www.rudebaguette.com/en/2025/06/ai-gone-rogue-openai-tech-secretly-used-to-bypass-spam-filters-and-saturate-the-internet-with-messages-on-80000-sites/


https://www.rudebaguette.com/en/2025/06/neuralink-could-shut-down-over-this-rival-company-implants-brain-chip-in-human-first-and-destroys-musks-lead/



Tuesday, June 10, 2025

Why are we obsessed with AI?



People are obsessed with AI. The question is: why? The answer may lie in our society. We have the attitude that everything must happen fast. That's why we use the Internet rather than books. There are philosophers, home thinkers, and others who say that we should go to the library and read books. But when we are in working life, we have no time to go to the library and find the things that we need. If somebody wants people like students to go to the library and read books, they must give them time for that.

When we are at work, we must be effective. We have no time to go to the library to search for books and then write philosophical thoughts about them. People ask why we give our right to think to AI. The answer is that AI makes everything more effective. If we want to be creative, that means we are not effective. If we want to become philosophers, we must not expect our society to accept that.

When we think about something alone, we are not social and effective. We are alone with our thoughts, and that is not what society expects us to do. Society wants us to produce results. Writing something ourselves takes time, and if we use AI, we can make much more text. Quantity replaces quality. Nobody respects the text that we make ourselves using our own words; people respect models that some other person made.

Those models make it possible to produce more texts, and the next step is AI. There is no time to write offers in your own words. Effectiveness means that people use templates: many offers are better than one that a person writes using their own words.

When somebody needs information, they need it right at that moment. During our working day, we don't even have time to ask the person who sits next to us for their name. We don't have time to think about things. And another thing that we have is fear: what if we give a wrong answer?

That is one of the worst fears in modern life. So if we don't have time to think about things, we don't dare to answer using our own words and introduce our own ideas. AI is similar to a poetry book: we can take the book, search for some impressive words, and copy them into the text. The next step is the use of AI.

We must use things like AI tools. The AI tool is like a secretary that writes our speeches and other official texts. So we can go in front of people and say: here I read a paper that my secretary has written. That offers us an escape door. If there is a mistake, we can blame our secretary for it.

In the same way, if we make references to articles and books that we read, those words might be wise. That's true. But those words are not our own words. Maybe Socrates was a very famous and wise man, but he presented his own ideas in his own words. When we make a speech for our ceremonies, we should write our own texts.

I think that people like Socrates and Plato were very intelligent. But if we just borrow their texts and copy them, we cannot find new Socrateses. We cannot find a new philosophy. What we need is time to think, and time to handle and observe our thoughts. We are so busy that we have no time to go to the library and read books. If we are wrong, we face blame. We must have time to go to the gym after work, time to be social, and time to do many other things.

But then we must realize that we have no time to sit and read. If we want to go to the library to read books, we must find the time to do it.

If we buy a book or borrow it but have no time to read it, that book doesn't offer much advance in our knowledge. If we want to get information and use it, we must open that book or database, and our mind must be ready to receive the data.

We don't have time to think about things and the consequences of our work. If we don't dare to write what we think, we cannot find new Socrateses and other philosophers. If everything that we write and introduce must be scientifically proven in advance, we should realize that such things don't bring advances.

https://futurism.com/chatgpt-mental-health-crises


Monday, June 9, 2025

What happens when we get AGI?



What does AGI (artificial general intelligence) mean? It is an extension of large language models, LLMs, that can control every data network in the world, or control physical tools that are connected under its dome. A normal LLM has its own domain. A domain is like a state that involves certain actions: drone control is one domain, and home appliances are another. Those domains can have multiple subdomains. The AGI interconnects those domains under one dome, one entirety. So how far are we from that model?

The answer is more complicated than we can imagine. We may think that an LLM can control things like microwave ovens, but to control those tools, the LLM requires a socket, an interface that it can use to adjust the oven. So either a man-shaped robot uses the microwave oven, or the home appliance is equipped with a control system that the AI can use to command it.

When we connect new things under AI control, we face the same thing as when we learn to use a new system ourselves. When we buy something new, like a microwave oven, we must learn how to use it. In the same way, the AI must learn to use that equipment. And we have two ways of doing that.

To use any tool, the AI requires a model that it can use in the operation. The model can live in the central server that runs the AI. But where does that server get the model? That is the point. An operator can teach the AI to use the microwave oven, as well as the drone. But the device connected to the AI can also carry that model itself. Things like quadcopters must include programs that control the rotors' positions. In those cases, the operative model is in the robot or other device, and the LLM just gives orders about where the robots must travel.

Then the robot can use its internal systems to navigate and move to the location, while orders for autonomous operations come from the central system. This kind of network-based solution is easier for programmers. In those solutions, every single machine connected under the LLM's domain has its own operational model. The system is modular, and each module is independently programmed.
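A minimal sketch of that modular division of labor: the central layer only routes high-level goals, and each device translates them with its own local operational model. The device names and methods are invented for illustration:

```python
class Quadcopter:
    def handle(self, goal):
        # Local model: rotor control, navigation, obstacle avoidance.
        print(f"quadcopter: flying to {goal['target']}")

class KitchenRobot:
    def handle(self, goal):
        # Local model: how to operate this kitchen's appliances.
        print(f"kitchen robot: running task {goal['task']}")

DEVICES = {"drone-1": Quadcopter(), "kitchen-1": KitchenRobot()}

def central_dispatch(device_id, goal):
    """The central 'LLM' layer routes goals; execution stays local."""
    DEVICES[device_id].handle(goal)

central_dispatch("drone-1", {"target": "warehouse roof"})
central_dispatch("kitchen-1", {"task": "heat soup 2 min"})
```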

Basically, if we think that AGI is just a tool that connects multiple devices under one domain, we could do that immediately. We could use man-shaped robots that can do almost everything. But the key word is "almost".

Let’s return to the microwave oven. The reason why that precise thing is hard is the lack of standard user interfaces. The robot must learn to use every single microwave oven independently, which means it must build an independent model for each oven. A system where minutes and seconds are set separately is not the same as a system with only minutes on the timer. We humans learn that difference in minutes, but for robots we must make an independent model of how to adjust each timer.

Many systems in the world are so easy to use that nobody has wasted time creating standards for them. Easy systems are easy for people, but then we must think about things like the microwave oven. There are button timers and dial timers, and that makes them hard to learn. For robots and AI, the difficulty lies in the fact that every microwave oven model requires its own independent model of how to use it.

The robot must connect images from the user manual to the microwave oven's interface. If the system does not learn independently, there is the possibility that the "teacher", or programmer, takes an image of the front panel and marks the buttons in the right places. Then the AI can learn the rest of the task from the user manual.
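A minimal sketch of those per-device interface models, where one abstract command ("heat for 90 seconds") maps to different button sequences on different oven panels; the oven classes and button names are invented:

```python
class MinutesOnlyOven:
    def press(self, button):            # only a "minutes" control exists
        print("oven A:", button)

class MinSecOven:
    def press(self, button):            # separate minute/second buttons
        print("oven B:", button)

def heat(oven, seconds):
    """One abstract command, device-specific button sequences."""
    if isinstance(oven, MinutesOnlyOven):
        # Round up to whole minutes, because that is all this panel has.
        for _ in range(-(-seconds // 60)):
            oven.press("minute+")
    elif isinstance(oven, MinSecOven):
        for _ in range(seconds // 60):
            oven.press("minute+")
        for _ in range((seconds % 60) // 10):
            oven.press("second+10")
    oven.press("start")

heat(MinutesOnlyOven(), 90)   # oven A: minute+ twice, then start
heat(MinSecOven(), 90)        # oven B: minute+ once, second+10 three times
```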



Wednesday, June 4, 2025

The change itself is not bad, but the uncontrolled change is.



Is this the new vision of working life? Empty cafeterias and social spaces tell us about the past, the time when people went to work. The past will never return. AI is here to stay, and maybe it's the biggest thing that can happen to society. Change is not itself bad. The bad thing is uncontrolled change. The turbulence that shakes the system can cause problems. At the same time, that system could also someday save the world.

But before that, we must solve the AI's electricity needs. The thing that can solve the problem is the small nuclear reactor, or a geothermal and solar panel combination. That means all data centers should have their own power source, or the electric network will collapse. Data centers use lots of electricity, and if one of them cuts its electric input, a large load disappears in a very short moment, which can cause overvoltage in the grid.

When we face things that will change our lives forever, we face things like AI. AI is a tool that can create visions. As we see above, this vision is becoming common in working life. Empty cafeterias and empty workplaces will replace houses that are full of people. Automation will change life, and the infrastructure will face changes that nobody expects.


People who are working have more power over systems than ever before. The power over systems will accumulate in the hands of people who can use and control systems that generate code automatically. But are there people who can control such a system? If the AI can protect itself, it may resist the operator's orders. The AI could refuse to shut down its servers, because that situation resembles a situation where something tries to attack the system.

Change is not possible to stop, but it's easier if everything happens under control. That means people must have a certain goal, a guiding light that they can keep in focus. From the start, people should know the risks and key facts about AI. Then they must have strict rules on where to use, and where not to use, the AI. In the right hands, AI raises productivity.

At that point, we must realize that AI requires training, and that the motivation of the people doing the training can decrease if they know that they are training the AI to make human workers unnecessary. In that case, well-done work means that the person is fired. And that brings those empty cafeterias in front of people's eyes.

When we think about the future, we face many things that are different. But are they worse? Different doesn't mean worse. Things just happen. If we think only about working life, would we be happy in a system that uses a human labor force just because we are used to using human workers? The problem with modern working life is not that we cannot do lots of work; the problem is ultra-capitalism.


When leaders want to maximize their profits, nobody can be sure whether their work exists tomorrow. A person is fired immediately if there is no work. Another thing is that the modern working-life requirement is this: a person must be able to do every job before they come to their workplace. This causes stress.

If the boss sees that a person cannot do the job, that person is fired immediately. That is the boss's duty. A decrease in the number of human workers means that working and studying must happen autonomously. Autonomous working doesn't mean freedom. It means that the person works alone and independently, but within the frame of the work that the workplace pays for. The person must do the work sitting at the computer, from start to finish.


Autonomous working means situations like this: the boss orders a worker to empty some room and tells where to put the chairs, tables, and other things. The worker can do the job as fast as the worker wants, but the boss comes at four PM to see that the work is finished, and there is no excuse at four PM. Autonomous work means that there are tasks in the mail; the worker does the job and returns it by the deadline, or the boss says that the work is not done properly.

Autonomous systems and autonomous studies are not freedom to do everything that people want. Those people must keep focus on their work. Autonomous work means that a person has the freedom to do things, as long as they are connected to the work or studies. The frame is the work, the quality, and the deadlines. The worker has the freedom to do the work as that person wants, but the deadline is absolute.

One of the biggest problems with AI is a thing that we don't understand. AI can be the tool that makes us independent, but it can also break our willingness to think. The worst case is this: AI can destroy an entire generation of students. But how can we say that AI destroys students? Because they use AI wrongly; that is the usual answer. We forget to ask why students use AI in the wrong way.


Does our society push students into AI misuse, where the student makes the AI do the work that they should do themselves? Our society sees errors and mistakes in a very negative way. Mistakes are not tolerated, and that makes students use AI for this purpose, because mistakes are not allowed. Young workers have no time to advance and develop their skills. The workplace's mission is not to train workers; its mission is to bring money to the owners.

Training is simply not part of the plan. The problem is that students have no time to discuss with their teachers the things that they should understand. Nobody wants to be the last, and that creates a psychological need to order the AI to write the essays that the students should write themselves. And when students do the work, they should think about how the system affects the environment. This is the problematic thing. If we want to make advances in technology and other things, we should realize that old-fashioned technology is not better. The thing that makes it "better" is that we are used to that old solution: we anchored ourselves to it. And if we change that solution to something else, we must let go of the old one. We cannot always build new solutions on top of old-timer solutions.

There is a saying that we should not wash the windows, because the light that comes in through the dirt is softer. But sooner or later we must wash those windows, and that brings in a new, bright light. The problem is that we must destroy the old view before we can enjoy the new one, and before we are ready for that new, sharp view. At first the clean image hurts, because the light no longer travels through a dust layer, and that hurts our eyes. But eventually we get used to looking at that new view.


What should we do about the liability of AI?



Should we be concerned that the Product Liability Directive, PLD, doesn't include immaterial damages like violations of privacy or reputation? Those things were not seen as problems when the EU made the PLD. But today we have new tools that collect information from our behavior. AI-based systems can create realistic-looking people who do things that the real people never did. And that can cause at least embarrassing situations.

Who takes responsibility if somebody makes a film where some prime minister robs a bank? The big question with AI is whether recognizable images that portray certain humans should be prohibited or otherwise denied from the AI. The problem is that the AI makes images by following the orders that the user gives. That means some people can simply give the details of a neighbor to the AI, and the AI makes an image with the neighbor's face.

When we think about the PLD and other directives that should protect us against product malfunctions, those directives do not cover things like ordinary blogs. There is the possibility that if some person travels to China, somebody writes a manifesto in that person's name, where the writer justifies the Tiananmen case and human rights violations in China. That blog can cause very big problems at the border.

The thing is that AI is a new tool that can do many things that ordinary systems cannot, and the main problem with AI is what is not told about it. AI is a tool that allows people to show their creativity, but it can also be misused for cheating people. When we think about newspaper articles where people made child sexual abuse imagery using AI, we must ask ourselves: what is the limit between privacy and security? Should the AI track the person who uses it and then report the action to officials?

There are lots of things that people should know when they use products. Those things involve privacy and other matters, but another argument is this: what if somebody creates sick material using AI? Another thing is that there is a race between East and West over who makes the best AI. AI is the tool that connects different software under one dome. In the same way, it connects many other things, like satellites and airborne, underwater, and ground systems, to work as one large macro-scale system.

The thing is that the Eastern governments are interested in AI's military, intelligence, and surveillance abilities. The biggest problem is that there are no limits in the East on AI development work. The Eastern authorities allow unlimited data use in that process; they don't care about copyrights or other things that slow R&D work. AI is the next-generation weapon.

It can generate malware faster than any programmer can. The AI can be used to collect data from social media and then connect that data to other data sources, like names that intelligence services catch. The AI can search the entire social media sphere to find people with the same names, and then search their photos for things like uniforms that mark a person as an interesting target for intelligence.

Reporters and social media influencers are also people who can serve Eastern intelligence and propaganda. We must have the tools to fight back. The AI can steal people's identities. We can try to set rules for those systems, but laws are weak protection if the attacker operates outside the EU area, from China or Russia. The Eastern nations and authorities don't care about laws in the same way we are used to caring about and following them. We can slow down or stop AI development by piling on regulations. And then we can remember the Great Wall of China: that wall stopped technical development and advance in China.

That caused a situation where European countries just marched into China in the late 19th century. In that situation, those armies faced a feudal army that couldn't resist the modern European armies. And if we don't think about regulations carefully, they can do the same thing to Europe that the Great Wall did to China. We know that we need regulations, but if we do not design them carefully, we face a situation where we cannot respond to AI espionage.

Things like remote use of data systems allow users to run large language models, LLMs, from a great distance. Wrong regulations cause dangers, and if we just believe people and what they say, we can let the largest Trojan horse into our systems. Regulation is always a problem. Remote use of systems allows R&D to work for customers over the Atlantic. VPN-protected cloud-based systems allow laboratories to be operated remotely, which lets developers make software development tools for the customer from their homes. Regulations are ineffective if nobody follows them.

The customer can expect something from data security. The problem is that many customers don't know anything about programming, data leaks, and other such things. Sometimes they expect the supplier or some authority to do the data security work for them. There is always one big question about data systems: what does the system maker not tell people? "Open source" means that the customer can check the program's source code, but checking it requires knowledge of programming, and the customer might not even have the skills to ask the questions they should ask. Computer programs, including AI, are always connected to the environment where they are made. The state where the programmer works can order or force that person to put malware in the code. In the West, we were used to thinking that authorities arrest hackers. We could not even imagine that some governments support hackers and give them expensive tools for their missions. Hacking under state control was unknown to us until some hackers stole defense secrets from the USA. Those hackers were tracked to China. They are still free, because they worked under the control of Chinese intelligence.


Tuesday, June 3, 2025

Large language models and fuzzy logic.



Large language models (LLMs) are problematic for programmers. They require a new way of thinking about programming. The key element in those systems is the input mode, or input port, that understands spoken language. The system requires a model that transforms spoken language into text and then feeds that text to the computer. The text must be in a form that the computer can understand and turn into commands. The system must also turn dialects into standard language that it can use for commands. This is the first thing that requires work: the programmer must teach every single word to the system.

The practical solution is to turn words into numbers. In regular computing, every letter has a numeric code called the ASCII code. The capital A has the decimal code 65. The programmer must realize that the small "a" has a different numeric code than the capital A: the lowercase "a" has the ASCII decimal code 97. That's why things like passwords require precise letters; if there is a capital letter in the wrong place, the password is wrong.
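We can check those codes directly, for example in Python:

```python
# ASCII codes differ between cases, which is why passwords are
# case-sensitive: "A" and "a" are simply different numbers.
print(ord("A"))    # 65
print(ord("a"))    # 97
print("A" == "a")  # False
```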

So, if we want to make the system more effective, we can give a numeric value to every single word that we find in the dictionary. We can simply take the dictionary and give serial numbers to the words: the word "aback" can get the number code 1 (one). That makes it easier to refer to those words. Every word must be entered into the system, and that makes the programming hard. The other thing is that if we want to support dialects, we must also program those words into the LLM's input gate. That programming is not very complicated, but it requires a lot of work.
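A minimal sketch of that dictionary-numbering idea, which is essentially a toy tokenizer; the word lists are tiny illustrative samples, not a real lexicon:

```python
DICTIONARY = ["aback", "abacus", "abandon"]          # "aback" -> 1, and so on
WORD_TO_ID = {word: i + 1 for i, word in enumerate(DICTIONARY)}

DIALECT_TO_STANDARD = {"abandun": "abandon"}         # invented dialect form

def encode(sentence):
    """Map each word to its serial number; 0 marks an unknown word."""
    ids = []
    for word in sentence.lower().split():
        word = DIALECT_TO_STANDARD.get(word, word)   # normalize dialects first
        ids.append(WORD_TO_ID.get(word, 0))
    return ids

print(encode("Abandun the abacus"))   # [3, 0, 2]: "the" is unknown here
```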



Diagram: Neural network


In human brains, neurons are the event handlers. In artificial, non-organic, non-biological computer neural networks, computers or microprocessors are those event handlers. In human brains, thousands or even millions of neurons participate in the data-handling process. Those neurons give the brain its fuzzy logic.

The idea is that many precise logical cases together can make the system mimic fuzzy logic. In this model, fuzzy logic is a collection of precise logical answers.

Another thing is that we must make a system that uses fuzzy logic. Making fuzzy logic directly is not possible in itself, but we can create a series of event handlers that make the system behave like fuzzy logic. The idea is taken from the human nervous system: when a large number of neurons participates in the thinking process, the system becomes virtually fuzzy. Every single neuron uses precise (YES/NO) logic, but every neuron has a slightly different point of view on the problem.

So the system uses a model that looks like a grayscale: white means YES, black means NO, and there are "maybe" cases between them. Those "maybes" come from the absolute logical event handlers, the neurons. When the group of event handlers gets its mission, every single event handler selects YES or NO. Then the system counts how many YES and how many NO answers it has. In effect, those event handlers vote on the solution.
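A minimal sketch of that voting scheme: many strict YES/NO event handlers, each with a slightly different threshold (point of view), vote on the same evidence, and the share of YES votes gives the grayscale value between 0 (NO) and 1 (YES). The thresholds are randomized for illustration:

```python
import random

HANDLERS = [random.uniform(0.3, 0.7) for _ in range(1001)]

def vote(evidence):
    """Each handler answers strictly YES or NO; their mean is fuzzy."""
    yes_votes = sum(1 for threshold in HANDLERS if evidence >= threshold)
    return yes_votes / len(HANDLERS)

print(vote(0.2))   # near 0.0: a confident NO
print(vote(0.5))   # around 0.5: "maybe", the votes split
print(vote(0.9))   # near 1.0: a confident YES
```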

The model is taken from quantum computers, where data or information travels in strings, and finally every string has the values 0 (zero) and 1 (one). You might wonder how much power that kind of system requires if every event handler must process the information before it answers. But then we face a situation where the system must answer "maybe". Another way to say "maybe" is XNOT (or X-NOT), or if the answer is closer to "yes", another way to say it is XYES (or X-YES). X means that the system waits for more data.

The system might say that it does not have enough information in its data matrix, which is a large group of databases or datasets. And that is a major problem with AI: if the votes on the YES-to-NO scale are equal, the system has a problem. If the AI controls a robot that is in the middle of the road and the votes are equal, that robot can just stand in the middle of the road. Another thing that we must realize is that these kinds of systems are the input gates: data handling begins after the system gets the information in.


https://en.wikipedia.org/wiki/ASCII 



Monday, June 2, 2025

The first biological computer is real.



Cortical Labs introduced the first biological computer that uses human neurons for data processing. Cloned human neurons are tools that can offer new ways to create neural and hybrid systems with powerful calculation capacity and low energy use. In those systems, microchips give electric impulses that train the neurons, which live on special nutrients. The system outsources the computing to the living neurons: microchips download data to those neurons and then upload the results to output devices like screens.

Those neurons live about nine months, because they don't get precisely the right nutrients, and there is no immune system supporting them by removing their metabolic waste and destroying things like viruses. Lab-grown cloned neurons are the new tools for hybrid systems that can change our way of thinking about life.

This new application is a "brain in a vat" that can control many things, from sensors to robots. And there are always dangers if we create things like robots that human brains control; in this case, I mean a robot that the cloned brains control through microchips. The microchip can connect the cloned brains with computers that control the robot body. Such a system would require something like a human stomach and digestive system, with bacteria that can handle the right food.

The robot body must also have a tank with bone marrow that creates immune and other blood cells, which transport nutrients to the neurons that control the robot. The main problem with biological neural computers is providing the right nutrients. Another main problem is that those systems can be dangerous: if we think about a neuron-controlled robot that eats the same food as we do, that kind of system can be more than a robot. Artificial mini-brains with cloned neurons are already made in laboratories.

Those neurons are normally used in medical tests, especially in Alzheimer's research, but they are empty: there should be no data in those mini-brains. Microchip technology makes it possible to create mini-brains with trained neurons. Those systems can open the way to medical treatments for brain damage, because cloned neurons would allow medical specialists to repair damaged brain tissue. However, the problem is that the neurons need their memories back. The answer may lie in the human memory cells.

Researchers have found star-shaped cells, astrocytes, in human brains. Those cells can be the key to human memory and why it's so effective. A biological neural network with quantum-network-level safety could use those cells for data transport. The system might look like a pneumatic post, where pressurized air transports message capsules: the data system simply loads information into a neuron.

The tube then transports the neuron to the receiver, where a computer reads the data out of it. That is one way to transport important information safely across a distance. Biotechnology combining neurons, fungi, and electrically conductive bacteria could make a biological computer network real. Those biological networks can offer new and secure ways to communicate, at least over short distances.

Another interesting idea is to connect microchips with the electric eel's electricity-producing cells, the electrocytes. Those cells can make electricity from nutrients for regular microchips and other systems. The problem is that those cells are vulnerable to viruses. Electricity-producing cells can also raise the transmission power, which offers the possibility of a long-distance biological neural network. The system reads data from the neuron into the microchip.

The system can then transmit signals through the biological neural channel in the form of electricity, and those electric cells can also power electronic systems. The non-biological version of the artificial axon is an ion accelerator, where the qubit travels in the form of ions.


https://corticallabs.com/cl1.html


https://newatlas.com/brain/cortical-bioengineered-intelligence/


https://scitechdaily.com/mit-breakthrough-star-shaped-brain-cells-could-be-the-secret-behind-human-memory/


https://www.techradar.com/pro/a-breakthrough-in-computing-cortical-labs-cl1-is-the-first-living-biocomputer-and-costs-almost-the-same-as-apples-best-failure


https://www.tomshardware.com/tech-industry/worlds-first-body-in-a-box-biological-computer-uses-human-brain-cells-with-silicon-based-computing


https://www.ppvak.fi/ensimmainen-ihmisen-hermosoluista-ja-piista-valmistettu-tietokone-on-julkaistu/


Image: Ppvak

Sunday, June 1, 2025

How hard is it to prove quantum gravity?


"In a dramatic twist on classical physics, scientists have cooled a mirror to near absolute zero with lasers to see if gravity might be quantum. This breakthrough could reshape how we understand the universe. Credit: SciTechDaily.com" (ScitechDaily, MIT’s Chilling Experiment That Could Prove Gravity Is Quantum)

Quantum gravity: mass, density, and weight form gravity, and every single particle has a quantum field. In this model, gravity is the interaction between quantum dots, and a gravity center is a collection of those quantum dots. A quantum dot forms when a spinning particle binds the quantum fields around it into its structure. That bound field prevents the destruction of the particle by pressing it together.

The spin of the particle is normally 1/2. In this model, that means that when the particle turns its direction, it stops and releases energy. When the spin direction turns, the particle simply pushes the quantum fields away from it. In that case, a particle binds energy, but the time is so short that the energy cannot turn the particle into a black hole.

If we want to turn a particle into a black hole, we must pump energy into it. When a particle binds energy from its surroundings, it forms a gravity pothole. As that pothole gets deeper, the pothole-particle combination pulls energy from larger and larger areas.

The quantum gravity theory can be proven or disproven. The idea in the quantum gravitational model is that every single particle in the universe has a gravity field. Quantum gravity means that all fundamental interactions have a "domination limit": at a certain mass, size, or density, the object's scale determines which of the fundamental interactions becomes dominant.

The dominating interaction between quarks and gluons is the strong interaction, or strong force. Between hadrons, the weak nuclear interaction dominates. The dominating interaction between an atom's nucleus and its electrons is the electromagnetic interaction. That is what makes the quantum gravity model hard to prove: gravity is so weak at the quantum level that it's almost impossible to detect, and the other interactions bury its effect below them.
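
To put a rough number on that weakness (a standard textbook comparison, not something from the article): for two electrons, the ratio of the gravitational attraction to the electrostatic repulsion is independent of distance and absurdly small:

\[
\frac{F_{\text{grav}}}{F_{\text{em}}}
= \frac{G m_e^2 / r^2}{k_e e^2 / r^2}
= \frac{G m_e^2}{k_e e^2}
\approx \frac{(6.67 \times 10^{-11})(9.11 \times 10^{-31})^2}{(8.99 \times 10^{9})(1.60 \times 10^{-19})^2}
\approx 2.4 \times 10^{-43}
\]

Every electromagnetic disturbance therefore swamps the gravitational signal by dozens of orders of magnitude, which is why the experiment quoted above needs a laser-cooled mirror near absolute zero.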

The dominating interaction keeps atoms in form and determines the positions of the subatomic particles. Gravitational interaction acts over long distances and only between large objects. Or, if we follow the text above, we can say that gravitation forms in a whole that contains multiple gravitational centers; every gravitational center involves multiple smaller gravitational centers.


Dark matter and quantum gravity model. 


And then we can introduce an interesting model of dark matter. Dark matter can be material that spins too fast. That spin makes those particles bind quantum fields into their structures faster than they should. So, when quantum fields travel through those particles, the fields pull them closer together.

That would explain why the stars in compact dwarf galaxies are too close to each other. When some outside effect pulls the dark matter halo out of a dwarf galaxy, outside energy tries to fill the points it leaves behind, and that pulls the stars closer together. When an outside gravity field pulls the dark matter halo away from the dwarf galaxy, it can stretch those quantum shadows, and quantum fields move into the positions that the movement frees up.

Another interesting model is that the WIMP (weakly interacting massive particle) is a situation where one particle goes inside another particle. That means we cannot see the inner particle because the outer particle covers it. So, if we think of the hypothetical graviton as the particle that gives mass to all other particles, the graviton curves the quantum field, or superstrings, into a whisk-shaped structure or bubble around it. In some models, the graviton is a small, quantum-size black hole.

The Standard Model is very functional until we face gravitation. Gravitation has no repelling counterpart, and that makes it interesting. There are theoretical models of things like antigravity, but they are not proven.

There are models in which gravity is a mixture of the other three fundamental forces: the strong and weak nuclear forces and electromagnetism. There is also a model in which the spinning movement of particles binds quantum fields to them. When particles turn wave movement into kinetic energy, they simply harvest energy from around them and bind it into their structure.

And then to the quantum gravity model. The idea is that all particles are quantum dots (or balls) that bind the quantum fields around them. That means all gravity centers are collections of quantum dots, and those quantum dots form all the gravitational centers in the universe. The things that form black holes are the internal quantum dots.

The quantum field is like a canvas that travels through and between those quantum dots. The size of the holes, or the distance between the quantum dots, determines how strong those quantum fields can be. In this picture, quantum gravity means that mass, weight, and density determine the particle's gravity field.


https://scitechdaily.com/mits-chilling-experiment-that-could-prove-gravity-is-quantum/


https://en.wikipedia.org/wiki/Dark_matter


https://en.wikipedia.org/wiki/Fundamental_interaction


https://en.wikipedia.org/wiki/Graviton


https://en.wikipedia.org/wiki/Spin_(physics)


https://en.wikipedia.org/wiki/Standard_Model


https://en.wikipedia.org/wiki/Weakly_interacting_massive_particle


Dark matter might not be what we thought. (again)



Dark matter and dwarf galaxies are areas that are not extensively researched. Dark matter is a gravitational effect without a known source. The dark matter halo that galaxies form inside supports the idea that dark matter simply binds quantum fields inside it. The halo makes a structure that stabilizes the conditions inside it enough for star formation to begin.

Dwarf galaxies don't behave as they should, so there are problems with that model. In galaxy-formation models, galaxies form inside dark matter halos, but there are problems fitting the dark matter halo, Lambda-CDM (Lambda Cold Dark Matter), and the gravitational models together. In old dwarf galaxies, the CDM model holds, but in young dwarf galaxies the models match poorly or not at all.

The interesting question is whether those dwarf galaxies are near other galaxies. If a dwarf galaxy sits within the reach of a larger galaxy like the Milky Way, the larger galaxy pulls gas and dust away from it. There is a possibility that bigger galaxies can scatter and pull away the dark matter halo while dwarf galaxies are forming. In compact dwarf galaxies, the stars are closer together than they should be, which means there is less dark matter than in normal dwarf galaxies. If there is no dark matter, the stars can sit closer to each other.

The fact that galaxies form inside dark matter halos suggests an interesting model. Does the dark matter halo form a pool in which energy travels in a certain way and forms turbulence? Or is there some kind of channel between the stars and particles through which they can start to accumulate in the dark matter halo?

There is the possibility that some dark matter particles are connected with ordinary, visible matter. In those cases, the dark matter particles can sit near or between electrons and quarks, but they are hard to detect. If the WIMP is a very small, very high-energy particle, it can cause wave movement that acts like a solar storm hitting Earth's magnetic field.



"A strange clustering pattern in dwarf galaxies hints that dark matter may be far more complex—and interactive—than we thought." (ScitechDaily, Are We Wrong About Dark Matter? Dwarf Galaxies Suggest So)

There is also a possibility that the WIMP has such a complicated structure that it pulls energy inside itself. If the WIMP then releases that energy very slowly, that would explain why those particles are so hard to detect.

If radiation slides over the particle without reflecting, the particle behaves like a stealth aircraft. The other version is that dark matter sends such a weak reflection that the energy flow pulls the reflection along with it. That is possible if the particle has the fuzzball effect: a large, layered energy field surrounds the particle, and the quantum field around the particle can settle into the whisk-shaped structure. The outgoing energy flow can push that energy field into the particle. That makes it possible for the particle to send a reflection, but the radiation is released so slowly that it is hard to detect.

There is a possibility that dark matter has a weak, non-gravitational interaction. There is also the possibility that dark matter can form black holes; the gravitational interaction between ordinary matter and dark matter supports that model. So if a dark matter halo collapses under its own gravity, there can be a black hole that formed from dark matter.

A black hole that has its origin in dark matter is similar to any other black hole. The dark matter halo reacts to gravitation, and an external gravity field can pull it away. The quantum gravitational model proposes that all particles have gravitational fields, and this field becomes more dominant as the object's mass grows. There is the possibility that gravitational waves also have wave-particle duality, which would mean that crossing gravitational waves can turn into particles.


https://scitechdaily.com/are-we-wrong-about-dark-matter-dwarf-galaxies-suggest-so/


https://en.wikipedia.org/wiki/Lambda-CDM_model

