Tuesday, January 30, 2024

Neuralink reports on the first human neuro-implant assembly.



Elon Musk announced that the first human has received Neuralink's brain-implanted microchip. The implant lets the person move a prosthesis wirelessly. However, Neuralink's device is not the top level of neuro-implanted microchips. Researchers may be able to install more advanced microchips on the skull without needing a specialized neurosurgeon. 

Neuralink's neuroport-type systems could let machines communicate with people without barriers. Neuro-implanted microchips could also make things like technical telepathy possible, where people exchange thoughts through their brain-implanted microchips. Those microchips could allow people to control robots and animals, and even to fuse their senses with other people or animals. That raises visions in which hackers attack this kind of system, and that could cause a very bad situation. 


Brain-implanted microchips can be used as a BCI (Brain-Computer Interface). A neuro-implanted microchip could connect a person to the internet through mobile devices or WLAN stations. That kind of thing makes it possible to create systems where people can plug themselves into cyberspace, or the internet, whenever they want. 

Because the microchip stimulates the brain directly, a person cannot separate reality from virtual reality. Electric impulses are sent straight into the sensory lobes of the brain, so the user of this kind of system sees no difference between reality and virtual reality. And that is one of the biggest problems with this kind of system. 


In some visions, the ultimate augmented-reality system can create multiple internal workspaces, and the user of the BCI system cannot tell them apart from physical reality. When BCI technology becomes more common, some people may start to use it to get pure, immersive experiences from games and other entertainment. We know that some people have money, and they can get BCI neuro-implants if they want. 

Brain-implanted microchips are an ultimate opportunity, but they are also tools that could be used for perfect mind control. The system can project virtual experiences and virtual memories into the user's brain. BCI systems could also make it possible for computers to read all the thoughts that a person has.  

In some futuristic dystopian visions, those microchips could also give electric shocks to the pain center of the brain. That means BCI misuse could make it possible for a controller to dominate a person. 


https://endtimeheadlines.org/2021/05/the-race-to-put-a-microchip-in-your-brain/

https://www.theguardian.com/technology/2024/jan/29/elon-musk-neuralink-first-human-brain-chip-implant

https://www.reuters.com/technology/neuralink-implants-brain-chip-first-human-musk-says-2024-01-29/


https://learningmachines9.wordpress.com/2024/01/31/neuralink-reports-on-the-first-human-neuro-implant-assembly/

Monday, January 29, 2024

Gravitation and space.



The main question about gravity is how, and where, the quantum overpressure that pushes particles forms. In some models, the whisk-like structure that forms particles would connect the gravitational radiation between them.

That forms gravitational electric arcs, or very small virtual particles, inside the main particle. In that case, the energy that reflects from those virtual particles, which could be gravitons, pushes particles forward. 

Gravitation and superstrings: superstrings act like rockets that move energy to the other side of a particle, away from the direction of the gravitational center. 


Gravitation affects space and pulls quantum fields toward the gravitational center. But can gravitation also affect individual particles? We can think of gravity as a complicated interaction: it is the force that acts the "wrong" way, pulling instead of pushing. 

When gravitational radiation hits particles, it forms an electromagnetic shadow, or lower-energy area, at the front of the object, if we take "front" to mean the direction of the gravitational center, or the travel direction. There is a possibility that gravitational waves also consist of superstrings, which are extremely thin waves. 

Those superstrings would be small, thin wormholes. When a gravitational-wave superstring hits a particle, its front side would be at a higher energy level than the side behind the particle. That means energy travels to the lower-energy side, which would be the electromagnetic shadow behind the particle. 

When a superstring travels through a particle, energy flows along it, and the superstring acts like a rocket engine that pushes the particle toward the gravitational center. The electromagnetic shadow pulls particles back, and the superstring drives them forward. 

But then there is another model. In that model, reflection forms a standing wave that cancels the wave movement coming from the gravitational center when it impacts a particle. That causes a quantum freeze, or electromagnetic shadow, at the front of the particle, and the radiation or wave movement coming from behind pushes the particle forward. 


The quantum vacuum that forms between a standing gravitational wave and a particle causes an effect where quantum fields that travel into it push the particle forward. 


In that model, gravitation is reflection. When gravitational radiation, or gravitational waves, impact particles, they reflect. That causes an impact between wave movements with the same wavelength. Those waves form a standing wave until one of the wave packets must give up. Because wave movement cannot reach the front side of the particle, energy flows from behind to the front of the particle. In that model, the wave movement coming from behind pushes the particle forward. 


If an observer wants to measure an object's speed in a quantum system, the observer must be outside that system.


The gravitational interaction with space explains why the speed of light could be crossed inside black holes. But if we want to see that, we must stand outside the black hole. 

When we are inside falling quantum fields that pull objects with them like rivers, we see that our speed is zero. In the same way, when a river carries a person with it, we can say that our speed, compared to the flowing water's speed, is zero. Observers must stand on the riverbank to measure our speed. 

Gravity pulls quantum fields into the black hole, and the speed of light is relative to those quantum fields, so the speed of light is always the same. If we sat in a craft traveling with the falling quantum fields, we would not see any changes, because we are inside the system. 

When those quantum fields' speed increases, the speed of light compared to the speed of those fields stays the same. If we want to measure changes in the speed of light, we must stay outside the system. In that case, we can measure both the speed of light and the speed of the quantum field. 
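The riverbank argument above is just frame-dependent speed arithmetic. A minimal Galilean sketch (for illustration of the "inside vs. outside the system" point only; real light-speed measurements require special relativity, where speeds do not simply subtract):

```python
# River analogy: measured speed depends on the observer's frame.
# (Galilean sketch for illustration; not relativistic velocity addition.)

def speed_in_frame(object_speed_ground, frame_speed_ground):
    """Speed of the object as seen by an observer moving at frame_speed_ground."""
    return object_speed_ground - frame_speed_ground

river_flow = 3.0   # m/s, the "falling field" that carries everything with it
swimmer = 3.0      # m/s, a swimmer drifting with the river

# An observer drifting with the river measures zero speed:
print(speed_in_frame(swimmer, river_flow))   # 0.0

# An observer standing on the riverbank measures the full speed:
print(speed_in_frame(swimmer, 0.0))          # 3.0
```

The same logic is why the text says the observer must stand outside the system: inside the flow, every relative measurement comes out as zero.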


Space is the quantum fields that form "space" in the universe. 


Can we see wormholes? The question is whether the mysterious wormholes are in the relativistic jets that leave black holes. One model of the wormhole is the gravitational tornado. There is a possibility that a relativistic jet transfers so much energy that gravitational waves can form a spiral structure in it. If that is right, relativistic jets are the places where gravitational wormholes exist. 

Relativistic jets are among the highest-energy places in the universe. The energy level inside them is so high that particles do not age in that high-energy plasma beam. That means relativistic jets are electromagnetic wormholes, but is their energy level high enough to close a gravitational spiral, or gravitational tornado, inside them? 

A gravitational tornado would explain why black holes are not expanding. The idea is that a black hole expands until the gravitational tornado that forms its rotational axis breaks through the event horizon. The idea in this model is that the spiral gravitational field turns so dense that gravitational waves cannot act inside it. 


Space, or the quantum fields around the gravitational center, acts like the air around a tornado. The gravitational tornado just moves quantum fields sideways, and that is its effect on space. When quantum fields move sideways, other quantum fields must fall into that area, called the gravitational center. 

And those quantum fields feed the quantum tornado. Gravitational radiation is like reflection radiation from that tornado, forming swirling superstrings. Time dilation means denser energy, or denser quantum fields, around the quantum tornado. 

The model goes like this: there is a gravitational tornado in black holes and in all gravitational centers. That tornado, or gravitational whirl, transfers quantum fields, or space, sideways, and when it moves quantum fields sideways, other quantum fields must fall into the tornado.

The thing that makes gravity interesting and special is that it only pulls objects. I have sometimes written that gravity could be a force that acts on space. Space here means the quantum fields around the object, and those quantum fields travel to the gravitational center. 

If there is some kind of gravitational tornado that guides that radiation, or quantum fields, sideways into the quantum fields traveling toward the gravitational center, we can say that a gravitational center like a black hole just moves the quantum field in another direction. And that causes an effect where the quantum fields pull objects with them. 


https://en.wikipedia.org/wiki/Wormhole

Sunday, January 28, 2024

The AI is an excellent tool for cyber- and propaganda operations.


"A study predicts that bad actors will use AI daily by mid-2024 to spread toxic content into mainstream online communities, potentially impacting elections. Credit: SciTechDaily.com" (ScitechDaily, AI-Powered Bad Actors: A Looming Threat for 2024 and Beyond)


If we want to use a system, we must know its good and bad sides, and only complete knowledge of the system makes it safe. We must realize that every tool we create can be used in positive and negative ways, so everything is not black and white. When people are concerned about their privacy, the argument against them is that, for example, the use of faked identities can help in searching for pedophiles and drug dealers. 

In the same way, privacy protection helps other criminals hide. And the same tools are effective in the hands of people like Kim Jong-un. The user of a system determines the system's purpose. When people are concerned about AI and their privacy, we should notice that the same people are often not worried about things like firearms. 

Firearms protect their homes, but for some reason, people's privacy must be so strong that organizations like the Mafia can use it as a shield against the authorities. In the same way, the Chinese and North Korean governments create AI that can be used as a government surveillance tool. The same tools that are used to create animations can be used to create fake information. The same systems that are used to track pedophiles can be used to track the opposition. Even though AI does many good things, we must realize that AI is not only a good thing. 


AI is also a looming threat in the hands of bad actors. Bad actors can use AI to create cyberattack and propaganda tools. Generative AI can put advanced cyberattack tools in the hands of actors who do not have advanced technical and programming skills. The counterargument is that generative AI can also create tools that fight malicious software. Another problem is that AI is a "perfect tool": it can see whether a person lies just by following their body language.

And AI is an ultimate tool for searching and following things like stock markets. Cyberattacks against those AIs can turn them against their users. In refined attack modes, the attacker tries to corrupt the AI, for example by injecting code that turns a lie detector off when it sees a certain mark. If the attacker simply destroys the AI, that is visible. 

Generative AI makes it possible to create complicated tools that manipulate the data an AI produces. In some models, attackers use an AI that observes stock markets to present certain companies as better than they are, so the AI drives money to those companies. A couple of years ago this type of attack would have been impossible.

But today, generative AI makes it possible to create complicated and refined tools that allow data injection into the files the AI uses. This is the reason why the system must monitor file sizes and modification dates all the time. The easiest way to corrupt an AI is simply to change some of the files in its code, and that's why the security of those systems must be guaranteed. 
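The file-monitoring idea above is ordinary integrity checking. A minimal sketch (the file name and the addition of a hash alongside size and date are illustrative assumptions, not a description of any specific product):

```python
import hashlib
import os

def fingerprint(path):
    """Record a file's size, modification time, and SHA-256 hash."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    stat = os.stat(path)
    return {"size": stat.st_size, "mtime": stat.st_mtime, "sha256": digest}

def detect_tampering(baseline, path):
    """Return the set of fields that differ from the stored baseline."""
    current = fingerprint(path)
    return {key for key in baseline if baseline[key] != current[key]}

# Usage sketch: baseline the AI's data file once, then re-check periodically.
# baseline = fingerprint("model_weights.bin")   # hypothetical file name
# ...
# changed = detect_tampering(baseline, "model_weights.bin")
# if changed:
#     print("possible tampering, fields changed:", changed)
```

Watching only size and date, as the text suggests, catches crude edits; the hash also catches an attacker who pads the file back to its original size.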


Generative AI can create ultrarealistic animations that bad actors can use as propaganda tools. That misinformation is hard, or almost impossible, to separate from authentic images. 


Another thing is that bad actors can use AI to create false information. AI is an excellent propaganda tool, and bad actors can use AI-based tools to create dis- and misinformation, like fake film material. AI can create faked films with photorealistic, animation-like images, and that kind of tool makes it possible to create photorealistic animations built from AI-created images. 

Another problem is AI-created images. Those images make it possible to create photorealistic or ultrarealistic animations, and those animations can be used as propaganda tools. AI can create realistic-looking films, and it can manipulate a character's voice and way of talking so that the character looks like a real person. 

Ultrarealistic animations can be used to destroy people's reputations. This kind of system can also manipulate the truth as much as its users want. Those systems are excellent tools for creating disinformation, and anyone who wants to deliver disinformation needs exactly this. Ultrarealistic animations can be used for many things: with this kind of technology, news reporters can even interview people who are already dead. 


https://scitechdaily.com/ai-powered-bad-actors-a-looming-threat-for-2024-and-beyond/


https://learningmachines9.wordpress.com/2024/01/29/the-ai-is-an-excellent-tool-for-cyber-and-propaganda-operations/


Saturday, January 27, 2024

New technology revolutionizes robotics and AI.



Machine learning is automated object-oriented programming.


In object-oriented programming, the programmer handles libraries, or objects. Those libraries are pre-created program pieces, and the programmer just interconnects those objects into one entirety. In object-oriented programming tools, some commonly needed objects are available like commands.

However, the libraries that the programmer loads at the top of the editor contain most of the program. AI-based programming tools can create programs from descriptions that the programmer gives in spoken language. In machine learning, the machine itself interconnects those libraries into new entireties.

AI and AI-controlled systems require lots of computing power. The algorithms are complicated and heavy to run. When we think about an always-changing environment, the use of static algorithms is impossible: it's not possible to create algorithms that fit all situations. This is the reason why dynamic, morphing algorithms are the tools that make next-generation AI more powerful.

In morphing algorithms, the system automatically interconnects the algorithm's "objects". There are proto-algorithms in the computer's memory, and then the computer, or AI, interconnects those objects. That makes it possible to create an unlimited number of combinations that the system can connect into new entireties.
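The idea of interconnecting pre-built "proto-algorithms" into new entireties can be sketched as simple function composition (the registry names here are illustrative, not from any specific framework):

```python
# A registry of small, pre-built "proto-algorithms".
REGISTRY = {
    "normalize": lambda xs: [x / max(xs) for x in xs] if max(xs) else xs,
    "square":    lambda xs: [x * x for x in xs],
    "cumsum":    lambda xs: [sum(xs[:i + 1]) for i in range(len(xs))],
}

def build_pipeline(names):
    """Interconnect named proto-algorithms into one new 'entirety'."""
    steps = [REGISTRY[name] for name in names]
    def pipeline(data):
        for step in steps:
            data = step(data)
        return data
    return pipeline

# In a morphing system the machine would pick and reorder the steps
# itself; here we do it by hand to show the mechanism.
p = build_pipeline(["normalize", "square"])
print(p([1.0, 2.0, 4.0]))  # [0.0625, 0.25, 1.0]
```

The point of the sketch is that the set of pipelines grows combinatorially with the registry, which is the "unlimited number of combinations" the text describes.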



"Researchers have developed a soft fluidic switch using ionic polymer artificial muscles, capable of operating at ultra-low power and producing a force 34 times its weight. This breakthrough offers potential applications in soft robotics, biomedical devices, and microfluidics by precisely controlling fluid flow in narrow spaces. The image above depicts the separation of fluid droplets using a soft fluid switch at ultra-low voltage. Credit: KAIST Soft Robotics & Intelligent Materials Lab" (ScitechDaily, Scientists Develop Artificial Muscle Device That Produces Force 34 Times Its Weight)


Artificial muscles.


Researchers created artificial muscles that can lift 34 times their own weight. This kind of system allows developers to make muscles for human-looking robots that might look as realistic as humans. Those artificial humanoids could operate in situations that are stressful for humans.

In the "Alien" movies, the artificial humans keep contact between the spaceship and the crew. The purpose of those robots was to follow and observe the humans working onboard.

In the same way, the military and law enforcement have planned to use human-looking robots for surveillance and covert missions. Artificial muscles can also be used in prosthetics and in operative "silicone muscles".



"The research from Linköping University introduces a new approach to processing conjugated polymers using benign solvents such as water. The new inks are also highly conductive. Credit: Thor Balkhed" (ScitechDaily, Beyond Silicon: New Sustainable Method for Creating Organic Semiconductors)


Organic semiconductors allow the connection of living neurons to microchips.


An organic semiconductor can operate as an independent system, or it can transmit information between living neurons and silicon microchips. Organic microchips are more environmentally friendly than silicon microchips, and they can be covered with proteins and antigens that fit the receiver's body.

Those systems do not activate immune cells, which makes implanted microchips safer than before; the problem with those implants has been the immune system. Implanted microchips can control things like prostheses, but the system can also operate as a BCI (Brain-Computer Interface), allowing a person to communicate with things like mobile telephones.



"Researchers have developed a novel synaptic transistor that mimics the human brain’s integrated processing and memory capabilities. This device operates at room temperature, is energy-efficient, and can perform complex cognitive tasks such as associative learning, making it a significant advancement in the field of artificial intelligence. Credit: Xiaodong Yan/Northwestern University" (ScitechDaily, Revolution in AI: New Brain-Like Transistor Mimics Human Intelligence)


The brain can control a robot's body through a microchip in the neck. The motion nerves send signals through that microchip to the computer that controls the robot body.


The new transistors revolutionize AI.


The new type of AI mimics the brain. This kind of system allows an AI with a modular structure, and the modular structure makes it possible to control the AI's abilities. The biggest problem between AI and humans is that the AI works as an interactive system, so the use of AI requires interaction between users and the language models.

When we think of systems that translate spoken words into commands that computers understand, we face one problem: things like noisy environments and bad articulation can turn spoken commands into something the computer doesn't understand. The system that encodes spoken words for computers works like this.

Spoken words are sent to a speech-to-text application. Then the system transfers that text into a form the computer understands. The Achilles' heel of that system is articulation: if the controller's articulation is not good, the system cannot produce the code it should.

This kind of system can be a good programming tool if the operators check the text before they send it to the code generator. Physical robot control requires more accurate and faster methods. If a robot operates in a natural environment, it requires fast and precise reactions. In those cases, the system has no time to wait for the operator to check the text that the system delivers to the command system.
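The pipeline described above — transcript in, command out, with a human check when the input is unclear — can be sketched in a few lines. The command vocabulary is made up for illustration, and the transcript arrives as text (a real system would get it from a speech-to-text engine):

```python
# Toy speech-to-command pipeline with a human-in-the-loop fallback.
KNOWN_COMMANDS = {"move forward", "move back", "stop", "turn left", "turn right"}

def interpret(transcript):
    """Map a transcript to a command, or flag it for operator review."""
    text = transcript.strip().lower()
    if text in KNOWN_COMMANDS:
        return ("execute", text)
    # Noise or bad articulation: don't guess, ask the operator.
    return ("review", text)

print(interpret("Move Forward"))   # ('execute', 'move forward')
print(interpret("mauve forwd"))    # ('review', 'mauve forwd')
```

The design choice mirrors the text's point: the "review" branch is exactly the slow operator check that a robot reacting in real time cannot afford, which is why garbled commands are the Achilles' heel of such systems.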


https://scitechdaily.com/beyond-silicon-new-sustainable-method-for-creating-organic-semiconductors/


https://scitechdaily.com/revolution-in-ai-new-brain-like-transistor-mimics-human-intelligence/


https://scitechdaily.com/scientists-develop-artificial-muscle-device-that-produces-force-34-times-its-weight/


https://learningmachines9.wordpress.com/2024/01/27/new-technology-revolutionizes-robotics-and-ai/

Friday, January 26, 2024

Photons, the speed of light, and electromagnetic shadows.



There is no absolute reality.


In QBism, there is no absolute reality or absolute truth. That radical philosophical idea means that an agent, or actor, creates reality: the actor interacts with the environment and forms their own version of reality. That means there is a frame in the environment, but finally our brain, or the observer, gives reality its final form. 

Reality is a unique and abstract model, and the way individual people see reality is unique. That means we have the free will to make things dead or alive in the quantum world. If we keep silent and don't speak about something, that silence keeps the forbidden thing away from our partner's knowledge. Knowledge of a subject's existence brings that subject alive to us. 

If we don't know about the existence of something, that thing is dead to us. In imagination, we have the free will to make things that are not possible. QBism means that the observer and reporter participate in forming reality. If we talk about doors, we might forget to mention that it's a wooden door, or we might forget to mention the door's color. Even if people don't need that information, they might not pass it on to others; still, everybody has some kind of vision of what the door looks like. 

The problem with traditional universe theories is this: nothing can travel in an absolute vacuum. In bubble theory, also known as superstring theory, the universe is full of superstrings, which are extremely thin energy fields. There are always small bubbles, or quantum vacuums, in those superstrings, and when those bubbles collapse, they form dark energy. 

The reason why a photon is the only particle that can reach the speed of light traveling straight through the universe is that the electromagnetic shadow behind it is so small. That electromagnetic vacuum stretches until it pulls a particle into spaghetti. When a particle turns too small, it separates from the quantum fields, and that causes a situation where the particle starts to transport energy to its environment. If the system could fill the electromagnetic shadow behind the particle, that would allow it to cross the speed of light.

Dark energy and its role in bubble theory: a vacuum cannot form energy from nothing, but it can collect energy from its environment. 

In that model, dark energy comes from extremely small bubbles that collect energy from the environment, and those bubbles form dark energy. Or they collect free energy from their environment, increasing their power. Those bubbles do not create energy from nothingness; they just collect smaller superstrings and then impact them together. 

When we think about superstring models, we can say that superstrings act like all other particles or strings. When those extremely thin magnetic fields travel in electromagnetic fields, they harvest energy, and those superstrings act like laser elements. 

Kinetic energy is energy transferred into a particle or some other object. Kinetic energy is the energy a particle collects into itself when it travels through quantum fields. When a particle travels through a quantum field, energy transfers into it until the particle's energy level turns higher than its environment's. 

When we think of the case where a particle turns to spaghetti near a black hole, we can think of the quantum vacuum, or quantum shadow, that gravitational radiation creates behind the particle and that pulls it into spaghetti. The thing that can prevent the spaghetti effect is something filling the quantum shadow behind the particle. 

A photon reaches energy stability at the speed of light: at that speed, photons deliver as much energy as they get. If some other thing reached the speed of light, there is the possibility that it would turn into a cosmic perpetual-motion machine: because the object gets as much energy as it delivers, it could not slow down. 

Photons can turn into wave movement and back into particle form. That is one of the reasons why the photon is the only particle that can reach the speed of light. A photon, or its wave-movement form, is so thin that the electromagnetic shadow behind it is smaller than behind other particles. When some other particle starts to accelerate toward the speed of light, there is an electromagnetic shadow, or vacuum, behind it, and that shadow grows longer and longer as the particle accelerates. 

Then that particle turns into spaghetti: at the critical moment, the electromagnetic shadow behind it pulls the particle into a spaghetti-like form. In that process, the quantum fields lose contact with the particle, and at that moment the particle sends electromagnetic radiation. The electromagnetic shadow behind the particle slows the acceleration: electromagnetic wave movement escapes from the particle into the shadow that follows it, and that slows the particle. 

When a particle with mass impacts a medium, that medium transports energy to the particle. The energy travels through it, and the radiation that jumps forward pushes the particle back. But the energy transferred to the particle when it hits things like water molecules will sometimes cause an effect where the particle's speed crosses the speed of light in that medium. Neutrino detectors use this to detect the blue flashes of light (Cherenkov radiation) produced when a neutrino delivers its kinetic energy. 
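The "faster than light in a medium" condition above can be checked with a short calculation: light travels at c/n in a medium with refractive index n, so a charged particle radiates Cherenkov light when its speed exceeds that. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def light_speed_in_medium(n):
    """Phase velocity of light in a medium with refractive index n."""
    return C / n

def emits_cherenkov(particle_speed, n):
    """True if a charged particle outruns light in the medium."""
    return particle_speed > light_speed_in_medium(n)

n_water = 1.33
threshold = light_speed_in_medium(n_water)
print(f"light in water: {threshold:.3e} m/s")  # roughly 2.25e8 m/s

# A particle at 0.9 c is slower than light in vacuum but faster
# than light in water, so in water it radiates Cherenkov light:
print(emits_cherenkov(0.9 * C, n_water))  # True
print(emits_cherenkov(0.9 * C, 1.0))      # False (vacuum)
```

This is why the flashes appear only in the detector medium (water or ice) and never in vacuum: no particle with mass crosses c itself, only c/n.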


https://bigthink.com/13-8/qbism-quantum-reality/


https://theconversation.com/qbism-quantum-mechanics-is-not-a-description-of-objective-reality-it-reveals-a-world-of-genuine-free-will-200487


https://en.wikipedia.org/wiki/Quantum_Bayesianism


https://learningmachines9.wordpress.com/2024/01/26/photons-and-speed-of-light-and-electromagnetic-shadows/

What if somebody copies the mechanic computer's structure to the quantum computers?





Mechanical computers are immune to EMP pulses, and that makes them interesting, even though they are old-fashioned systems. Small nanotechnical mechanical computers could be used as backup systems for simple, single-purpose systems. 

Nanotechnology makes it possible to create very small mechanical components, and it's possible that small mechanical computers could assist digital computers in cases where an EMP (electromagnetic pulse) damages the digital computers. 

Digital computers are more effective and more versatile than mechanical computers, and that's why they replaced them. But it's possible that mechanical computers could work as background systems for special cases. 


(Wikipedia, Colossus computer)

Colossus


When we think about digital computers, the first electronic "computer" before ENIAC was Colossus, the top-secret code-breaking machine, which used electric wires and valve-based electronic processing. The Allies used the intelligence that Colossus produced to feed fake and false information to the German command system during the Normandy landings. The machine used to break the German Enigma encryption was an electromechanical system called the Bombe; Colossus itself attacked the Lorenz cipher. 

Colossus was the first programmable electronic computer in the world, and the system remained in use until the 1960s. Colossus was the machine that allowed the Allies to read the opponent's messages and, at vital moments of WWII, to deliver disinformation to enemy commanders. 

The lamp-and-photocell reader system created for Colossus could, in principle, be used to turn binary data into qubits. In Colossus, the program was stored as small holes in a paper tape that traveled between a lamp and photocells. Today, a system could split the data into bits and then send it to photoelectric cells. 

Then that data can travel along those data-handling lines. This kind of structure can repeat over and over: the system can split data into smaller and smaller bits as there are more and more adjacent data-handling lines. 
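The tape idea — data as rows of holes read by photocells — can be sketched in a few lines (a toy 8-bit model, not Colossus's actual 5-hole teleprinter tape format):

```python
def to_tape(data: bytes):
    """Encode bytes as tape rows: '.' = no hole, 'O' = punched hole."""
    rows = []
    for byte in data:
        bits = format(byte, "08b")
        rows.append("".join("O" if b == "1" else "." for b in bits))
    return rows

def read_tape(rows):
    """The 'photocell' side: light through the holes restores the bytes."""
    return bytes(int("".join("1" if c == "O" else "0" for c in row), 2)
                 for row in rows)

tape = to_tape(b"HI")
for row in tape:
    print(row)
# .O..O...
# .O..O..O
print(read_tape(tape))  # b'HI'
```

Each printed row is one tape position; the parallel hole tracks across a row are exactly the "adjacent data-handling lines" the text describes.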

In quantum computers, quantum entanglements could create structures similar to the cogwheels of mechanical computers. If researchers can create complex enough 3D quantum systems, that makes it possible to create such a 3D structure. 

In that structure, quantum entanglement transports information along tracks similar to those in mechanical computers. And that would make it possible to create new types of quantum solutions. 

The historical connection between the Bombes, Colossus, and quantum computers is that history repeats itself in those systems. Quantum computers are the Bombes of today: users cannot preprogram quantum computers. 


And still today, the user operates quantum computers through digital computers. There is no way to use a quantum computer directly through a keyboard. 


Then we can think about mechanical computers, and especially Colossus, which was a fundamental system. It's possible to make a quantum version of Colossus. That theoretical system would be a 3D quantum-entanglement structure that follows the drawings of Colossus. 

In those systems, small skyrmions could be used for the same purpose as radio tubes. The system could create, for example, a virtual triode (three-electrode) radio tube by making three data input/output points in a skyrmion. In a photonic model, the system could use a laser ray that travels in a ring-shaped structure, with laser rays aimed at three points of that ring. 

The most incredible version of digitalizing a mechanical computer could be a structure where small black holes are arranged to mimic the Bombe's wheel structure. Quantum entanglement between those black holes would then transmit information in the system, just like the cogwheels transport information in mechanical computers. 


https://en.wikipedia.org/wiki/Bombe


https://en.wikipedia.org/wiki/Colossus_computer


https://en.wikipedia.org/wiki/Cryptanalysis_of_the_Lorenz_cipher


MIT researchers created a sensor that harvests energy from its environment.



"This energy management interface is the "brain" of a self-powered, battery-free sensor that can harvest the energy it needs to operate from the magnetic field generated in the open air around a wire. Credit: Courtesy of the researchers, edited by MIT News" (/news-media/self-powered-sensor-automatically-harvests-magnetic-energy)

There is nothing new about sensors that harvest energy from sunlight. The thing that makes the new sensor fundamental is that it can also operate in complete darkness. This system makes it possible to install sensors in narrow places where it's hard to pull wires. 

Because these kinds of sensors can operate in darkness, researchers can use the same technology to create power sources for miniature robots. The new technology makes those robots able to operate in areas where there is no sunlight. 

The new sensor is fully battery-free: it harvests its energy from the environment. The difference from solar-panel systems is that this system uses vibrations and electromagnetic fields as its energy sources. And that means it's an ultimate tool for making sensors that observe things like diesel engines. 

Because this new sensor can operate in darkness, it's easy to install. The same technology that is used in this tiny sensor could be used in radio transmitter-receivers. 

That allows eavesdropping systems that are independent of batteries. Even if those energy harvesters harvest only low voltages, they can store energy in capacitors, and that energy can then be used in remote-control systems. The ability to collect energy from the environment is an interesting thing: nanorobots can use this technology as their energy source.
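The article gives no component values, but the energy budget of such a harvester is easy to sketch with E = ½CV². The capacitor size, voltage, and radio power below are illustrative assumptions, not figures from the MIT work:

```python
# Hedged sketch: how much energy a low-voltage harvester can bank in a
# capacitor, and how long that energy can run a small radio burst.
# All component values (100 uF, 3.3 V, 10 mW) are assumptions.

C = 100e-6          # storage capacitance in farads (assumed)
V = 3.3             # capacitor voltage after harvesting, in volts (assumed)
P_radio = 10e-3     # power draw of a transmit burst, in watts (assumed)

energy_joules = 0.5 * C * V ** 2        # E = 1/2 * C * V^2
burst_seconds = energy_joules / P_radio # runtime at constant power draw

print(f"stored energy: {energy_joules * 1e3:.3f} mJ")
print(f"radio burst length: {burst_seconds * 1e3:.1f} ms")
```

Even this small capacitor banks roughly half a millijoule, enough for a short transmit burst, which is why duty-cycled operation suits harvesters.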

In the same way, that kind of technology can be used for nano-sized microchips. In those systems, data travels to the computing system wirelessly, and the system uses the same radio waves as its power source. The problem with nanotechnical systems is that electricity jumps over their tiny switches, and that requires new ways to transport electricity and information to them.


https://meche.mit.edu/news-media/self-powered-sensor-automatically-harvests-magnetic-energy


https://news.mit.edu/2024/self-powered-sensor-harvests-magnetic-energy-0118


Thursday, January 25, 2024

Researchers got the first evidence of vacuum decay.

 


"Scientists from Newcastle University, as part of an international team, have made a groundbreaking discovery by providing the first experimental evidence of vacuum decay. This achievement, pivotal for understanding the early universe and fundamental physics, was observed in a supercooled gas near absolute zero and has set the stage for further research in quantum field phenomena." (ScitechDaily, Unlocking Quantum Mysteries: Scientists Produce First Experimental Evidence of Vacuum Decay)


Researchers got the first evidence of vacuum decay. 


Vacuum decay is a situation where the vacuum itself decays into two vacuums. There is no confirmation of this yet, but theoretically it is possible.

Electromagnetic impulses could split a vacuum into two pieces. There are no confirmed cases of vacuum decay, but in vacuum decay the vacuum decays into two pieces, and each of those two vacuums can grow as big as the original vacuum was.

That can cause an effect where the quantum fields around the vacuum fall into it. That forms a situation where those quantum fields reflect from the middle of those two vacuums, and that reflection of the falling quantum fields makes the thing called vacuum energy possible. It's possible that electromagnetic radiation that hits the quantum vacuum can split it into two vacuums and then make them as big as the original vacuum.


Vacuum energy. 


A vacuum cannot form energy by itself, but annihilation between particle-antiparticle pairs can. When wave movement falls into a quantum vacuum, it collides in the middle with wave movement that comes from the opposite side. That forms the Schwinger effect, where wave-particle duality forms a particle-antiparticle pair that annihilates immediately. That annihilation releases energy.

And a vacuum can harvest energy. A vacuum forms particles and virtual particles from the impacting wave movement that falls inside it. In that case, the virtual particles, or energy, whirl in the vacuum and harvest energy from their environment. That is called the Casimir effect.

The Casimir effect forms virtual particles between two layers that are close to each other. Those virtual particles are like energy bridges. And when an outside energy flow affects them, it raises their energy level and makes them collect energy from around them.
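The force between two close plates that the Casimir effect produces has a standard textbook form for ideal parallel plates, P = π²ħc / (240 d⁴). A quick check at a 10 nm gap shows the pressure is roughly one atmosphere:

```python
import math

# Casimir pressure between two ideal parallel plates:
# P = pi^2 * hbar * c / (240 * d^4)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure (Pa) between ideal plates separated by d meters."""
    return math.pi ** 2 * hbar * c / (240 * d ** 4)

p = casimir_pressure(10e-9)   # 10 nm gap
print(f"{p:.3e} Pa")          # about 1.3e5 Pa, roughly one atmosphere
```

The d⁴ in the denominator is why the effect only matters at nanometer scales: doubling the gap cuts the pressure sixteenfold.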


If we think of a pressure, or gas, vacuum, gas always travels into it. If there is a windmill at the edge of the vacuum, that gas flow is easy to turn into energy.


There are no absolute vacuums in the universe. All vacuums are so-called false vacuums. That means at least some energy or wave movement falls into those vacuums. There, the impacting wave movements collide with each other. Those impacts form virtual particles, and wave-particle duality forms particle-antiparticle pairs.

That means that particle-antiparticle pair annihilation creates energy in the vacuum. Also, interactions with virtual particles like skyrmions and other particles can form energy, or they can transfer energy to other particles.

The quantum vacuum acts like a vacuum bomb. It collects standing waves in the middle of it, and when the power of those standing waves rises higher than the environment's, it turns the direction of the wave movement. That means a vacuum can increase the power of energy.

The quantum vacuum itself can create energy. The process goes like this: the vacuum pulls wave movement into the quantum vacuum. That wave movement stops for a short moment in the bubble, and during that moment the standing wave packs energy. When the energy level rises high enough, that standing wave starts to travel out from the vacuum.

A vacuum can harvest energy from the space around it. When quantum fields fall into the vacuum, they impact each other. That makes the vacuum act like a vacuum bomb, and the way those quantum fields impact each other can make a vacuum harvest energy from its environment.

It is sometimes suggested that vacuum energy, or the energy that vacuum decay releases, could be the source of dark energy. Vacuum energy forms when electromagnetic fields fall into a vacuum and then reflect from each other. In that process, kinetic energy travels between those EM fields, and that increases their energy.

https://scitechdaily.com/unlocking-quantum-mysteries-scientists-produce-first-experimental-evidence-of-vacuum-decay/


The AI's Achilles heel.

 


"University of Copenhagen researchers have proven that fully stable Machine Learning algorithms are unattainable for complex problems, highlighting the critical need for thorough testing and awareness of AI limitations. Credit: SciTechDaily.com" (ScitechDaily, AI’s Achilles Heel: New Research Pinpoints Fundamental Weaknesses)



The AI's Achilles heel.


The environment always changes, and that means the AI must have the ability to morph itself. Static algorithms do not fit everywhere. If we think of things like self-driving cars, work sites require slightly different actions than highways. The AI should also have winter, fog, and bad-weather modes.

Thinking that static algorithms fit everything is wrong. The system requires flexibility and morphing abilities to answer these challenges. The same system cannot do everything.

One of the biggest problems with AI is that it doesn't think, as we understand "thinking". AI just collects information from certain areas, such as university, governmental, and Wikipedia sources. That means errors on those home pages cause problems for the AI. If a homepage is marked as "trusted", the AI handles it as trusted.

Another thing that causes problems with AI is that people have too high expectations of it. The language model that people see is the user interface that routes information to the sub- or back-office applications that work in the background. Those applications are separate programs, and the language model just gives orders to them.

The AI has limits, just as all other systems have their limitations. Even humans have limitations: not all people can drive cars. When we think about a car that drives autonomously, that system requires two AI systems. The first is the language model that allows commanding the system. The second stage is the complicated program that drives the car from point A to point B.

In practice, writing programs for how a car should drive from point A to point B is much harder than writing programs for how an ICBM must fly. The ICBM requires two points: the launch point and the target point. The launch point for its trajectory comes from GPS, and then it flies to the target using a ballistic trajectory.
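The contrast can be made concrete. In the idealized case, a ballistic trajectory is one closed-form formula, R = v² sin(2θ)/g, while the car's route is an open-ended search problem. A minimal sketch, ignoring air drag and Earth's curvature, so this is far simpler than any real missile solver:

```python
import math

# Idealized ballistic range over flat ground with no air resistance:
# R = v^2 * sin(2*theta) / g. A real trajectory solver handles drag,
# staging, and a rotating Earth; the point is that the physics is
# closed-form, unlike driving in traffic.
g = 9.81  # m/s^2

def ballistic_range(speed_m_s, launch_angle_deg):
    """Horizontal distance (m) for a projectile launched over flat ground."""
    return speed_m_s ** 2 * math.sin(math.radians(2 * launch_angle_deg)) / g

print(f"{ballistic_range(400.0, 45.0) / 1000:.1f} km")  # maximum-range angle
```

At 45 degrees the sin(2θ) term is 1, which is why that is the maximum-range launch angle in this idealized model.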

But a car must react to animals and humans that come in front of it. The "green T-shirt problem" means a situation where the system is programmed to follow the traffic lights. The green light means "go". But how can the system react to people who have a green T-shirt or a green spot on their clothes? In the worst case, the AI translates that as "go".

That's why the autopilot used in cars should be limited to highways, and those cars should have a GPS that locks the autopilot out in city areas. In traffic, there are so many variables that the autopilot and its programmers can never notice everything.

The use of autopilots in city areas should be limited to small vehicles. Food-delivery robots are normally like small cars, but in the future those duties could be transferred to human-looking GP (General Purpose) robots.

Another problem with AI is that it sees things precisely as they are. The same thing that makes face identification successful makes it hard for robots to operate in normal environments. It means that if a robot knows how to open a metal door but comes to the front of a wooden door, the robot will not recognize it as a door.

In fuzzy logic, the system knows the wireframe of a door. When it sees a thing that looks like a door, it simply takes the handle and pushes and pulls it. We all sometimes push doors that we should pull, and the other way around. In the same way, a robot can test the direction in which a door opens.
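A hedged sketch of that idea: instead of demanding an exact match to a stored metal-door template, the robot scores how door-like an object is and, above a threshold, simply tries the handle. The feature names, weights, and threshold here are invented for illustration:

```python
# Fuzzy door recognition sketch. Rather than exact template matching,
# score partial evidence and act when the score clears a threshold.
# Features, weights, and the threshold are illustrative assumptions.

DOOR_FEATURES = {          # weight of each clue toward "this is a door"
    "rectangular": 0.4,
    "has_handle": 0.3,
    "human_height": 0.2,
    "hinged_edge": 0.1,
}

def door_score(observed):
    """Sum the weights of the observed clues; returns a value in [0, 1]."""
    return sum(w for name, w in DOOR_FEATURES.items() if observed.get(name))

def plan(observed, threshold=0.6):
    if door_score(observed) >= threshold:
        # Test both opening directions, as people do with unfamiliar doors.
        return ["grab_handle", "push", "pull"]
    return ["ask_operator"]

wooden_door = {"rectangular": True, "has_handle": True, "human_height": True}
print(plan(wooden_door))   # door-like enough -> try it
```

The wooden door never matched any exact template, but it scores 0.9 on partial evidence, so the robot acts instead of freezing.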

If a robot transports things like food deliveries, it uses GPS to navigate to the right house. Then it sees the door, searches for the door phone, or calls the customer. Then it comes in and searches for the right floor. Then the system starts to use precise logic to search for the door number and name.

Walking on the streets is a complicated thing for robots. Robots must know how to act at traffic lights and how to open doors. Things like food delivery offer a good way to test AI algorithms.

But when we use AI, we must realize that these kinds of complicated systems are not thinking. They just collect information. When the AI faces something unknown, it transfers the image to a human operator. The system just sends an image to the screen in front of the operator and says that it requires action.


https://scitechdaily.com/ais-achilles-heel-new-research-pinpoints-fundamental-weaknesses/


https://learningmachines9.wordpress.com/2024/01/25/the-ais-achilles-heel/

Tuesday, January 23, 2024

Can AI think like humans?




"Athanassios S. Fokas argues that AI, despite its advancements, is still far from matching human thought, as it lacks the ability to fully replicate the complexity of human cognition, including emotions, creativity, and unconscious processes." (ScitechDaily, Can Artificial Intelligence Think Like a Human?)


Can AI think like humans?


AI is the biggest revolution in history, which means it is here to stay. Even if we hate it, we must learn to live with it. When AI generates answers, it uses the Internet as a database. How well the AI can connect and search data and form the network determines its accuracy.

In the past, AI meant chess programs, and researchers used those programs to test their algorithms. People thought that humans were invincible until the Deep Blue AI beat chess champion Garry Kasparov. In their 1996 match Kasparov won the series, although Deep Blue won individual games; in the 1997 rematch Deep Blue won. There have been lots of advances in AI since those days.

Today there are far more powerful computers and an advanced Internet that allow computers to run heavier and more complicated code than ever before.

Today AI is much more than a chess computer. Chess is an easy thing to model for computers: the game area is limited, and the pieces follow strict rules. But if we want to make robots that operate on everyday missions, robots must have more complicated programs than a chess program.

When AI selects data sources, it uses certain algorithms for making that selection. The AI doesn't know what those home pages say, but it can compare information in the internal sections of the home pages with other sources on similar topics. So when the AI makes mistakes, those mistakes come from the sources.

But are we afraid of AI because it's too perfect?


Do we fight against AI because it's dangerous? Or are we just afraid of it because we see it as a competitor?

The AI is not perfect, even if it is a more effective coder than humans. An AI-controlled drone can resist stronger G-forces than a manned aircraft. And the AI has no feelings: it is never angry, never sad, and things like public opinion do not affect its decisions. Or, if programmers want, the AI can let people vote on the solutions that it proposes.



"New research addresses the risks and liabilities associated with implementing AI in the food industry, proposing a temporary adoption phase to assess AI’s benefits and challenges, and emphasizes the need for more research on legal and economic structures." (ScitechDaily, The Paradox of Perfection: Can AI Be Too Good To Use?)


But does AI think?


The fact is that AI is not yet thinking. And if we want to make an AI that thinks like humans, we have two choices.

1) We can make database solutions where there are billions of databases. The human brain involves about 100 billion neurons, and the synapse connections multiply the number of combinations between them.

By interconnecting neurons, the system can create virtual neurons and make database combinations. That ability makes it possible that there could be more than 250 billion combinations of those databases.

So an AI that thinks like humans requires 100 billion microchips with at least 100 billion databases that the system can connect.

2) We can make a computer that uses living neurons. That kind of hybrid computer is more effective than anything before.


Image: Deep Blue versus Garry Kasparov - Wikipedia

In normal cases, the AI is a language application that turns human commands into models that computers understand. Researchers can put this kind of language model into the microchip kernel, and that makes it possible for next-generation computers to follow spoken commands.

As I have written many times before, in those applications the language model plays a key role. It sits at the center of all applications and is used to command the computer and the other applications around it.

The language model can use a speech-to-text application to drive spoken commands into the application. That kind of AI has a model where it can search for safe and confirmed information. There is a list of the site descriptions that deliver trusted and confirmed data. Those are governmental and university homepages.

After that, the AI collects homepages and connects their texts and data in a certain order. There is also a model of how the AI can make correct text with a good vocabulary: the system knows predicates, subjects, and the other things that are required to make fluent text.
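The trusted-source list described above can be pictured as a simple domain allowlist that ranks governmental and university pages first. The suffixes and URLs below are stand-ins for illustration, not any real product's configuration:

```python
from urllib.parse import urlparse

# Sketch of a "trusted sources first" filter. The allowlist entries are
# illustrative; a real system would ship a maintained, vetted list.
TRUSTED_SUFFIXES = (".gov", ".edu", ".ac.uk")

def is_trusted(url):
    host = urlparse(url).netloc.lower()
    return host.endswith(TRUSTED_SUFFIXES)   # endswith accepts a tuple

def rank_sources(urls):
    """Put trusted domains ahead of everything else, keeping original order."""
    return sorted(urls, key=lambda u: not is_trusted(u))

pages = [
    "https://example.com/blog-post",
    "https://www.nasa.gov/news",
    "https://www.mit.edu/research",
]
print(rank_sources(pages))   # .gov and .edu pages move to the front
```

Note that a suffix check only approximates trust: it shows the mechanism, not a complete policy.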

But some other types of AI react to situations. That kind of AI tool sees that something happens and compares it with things that are stored in its memory. If there is a match, the AI performs the act that is connected to that observation. These kinds of AIs are used to catch thieves, and an AI that controls drones can make escape and evasion movements when it sees incoming missiles or anti-aircraft flak.

The AI seems to think, but it just collects information from the Internet and then connects it. The AI can produce answers with very high accuracy, and how good those answers look depends on the data mass it can use. Things like polite answers also make AI seem human.

If an AI-controlled robot is programmed to answer a punch with a punch, that makes it seem human. The robot can also say that it hurts when its sensors detect a punch that is hard enough. The robot can also call the police if some vehicle hits it. But those robots don't think.

Thinking AI requires living neurons. Neuro-computers that use living neurons can think; a biocomputer always thinks. But those neurons require information that they can connect. Thinking is simply connecting databases and making new networked entireties out of them.

If we fear that some AI turns against humans, we could use things like EMP weapons against it. EMP weapons are usually quite harmless to people, but they destroy electronic components. AI is a tool that will make many things different than before, but it can serve humans.

The thing with AI is that we must not let it think for us. When we talk about military applications, we must realize that modern warfare is more complicated and fast-changing than ever before.

Things like GPS-guided bombs were predicted to be easy to jam. But reality is different, and at least the Russians have not managed to cut the GPS signals completely. In the Western world, we think that the military should protect the land and its people. We must realize that the military is a brutal world: its purpose is to frighten enemies. And we must realize that it is what we do with a solution that makes it good or bad.


https://scitechdaily.com/can-artificial-intelligence-think-like-a-human/


https://scitechdaily.com/the-paradox-of-perfection-can-ai-be-too-good-to-use/


https://learningmachines9.wordpress.com/2024/01/23/can-ai-think-like-humans/

The pocket-sized AI and humanoid robots are the ultimate compilation.

  The pocket-sized AI and humanoid robots are the ultimate compilation.


The human-looking robots are the next-generation GP (General Purpose) tools.




AI means a language model that can translate spoken commands into computer programs. It's possible that the AI can create a morphing program entirety for robots. That kind of morphing program-module environment means that a robot can perform its missions in multiple conditions, and it makes robots more flexible than ever before.

In AI-based systems, the center of the system is the language model. The language model is the tool that turns spoken words into commands that computers understand. It simply transforms spoken words into algorithms that computers use to control robots and other systems. The language model makes it possible for the system to make customized computer programs in real time. The AI follows the orders that the user gives and then creates new programs or modules for computers or robots. This ability means that the AI can also delete those programs when it doesn't need them anymore.
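One way to picture that create-use-delete cycle is a module registry that the language model drives: it installs a generated skill, the robot runs it, and the skill is removed when it is no longer needed. Everything here, including the names and the placeholder skill, is a hypothetical sketch, not any vendor's API:

```python
# Sketch of a morphing program-module environment: skills are installed,
# used, and deleted at runtime. The skill bodies are placeholders.

class SkillRegistry:
    def __init__(self):
        self._skills = {}

    def install(self, name, func):
        """Add a generated program module under a spoken-command name."""
        self._skills[name] = func

    def run(self, name, *args):
        if name not in self._skills:
            return f"unknown skill: {name}"
        return self._skills[name](*args)

    def delete(self, name):
        """Remove a module the robot no longer needs."""
        self._skills.pop(name, None)

robot = SkillRegistry()
robot.install("fetch", lambda item: f"fetching {item}")
print(robot.run("fetch", "toolbox"))   # module exists -> runs
robot.delete("fetch")
print(robot.run("fetch", "toolbox"))   # module deleted -> refused
```

The point of the sketch is the lifecycle: the same dispatcher that runs a module can forget it, which is what lets the program set morph per mission.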





The user can connect this kind of AI device to the computer using USB, or wirelessly using a Bluetooth connection.

BMW is starting to test human-looking robots on its assembly lines. Those human-looking robots can use the same tools as humans, and they can make morphing networks. Those systems can use central computers, or the robots can make morphing networks with each other. In that network, robots can share their data and computing capacity, and that allows the robots to operate as a unit. That means large groups of robots can operate as one entirety.

Portable AI, or systems that involve language models that can connect with those systems, allows fast changes in those robots' programming. AI or language models that can control robots through spoken speech could be game-changers in this kind of technology. Robots can do many things that are not possible for humans. Human-looking robots are also tools that interest researchers, space and deep-sea explorers, and military personnel.

Human-looking robots can perform surgical operations. In those cases, human operators oversee the operations, and those remotely controlled robots can bring doctors to very remote places. The robot is only a body, and its programs determine its use: the same robot that operates as a gardener can operate as a surgeon if its control programs are changed. In the same way, robots that clean floors can be reprogrammed into combat robots. And by using man-shaped robots, every single aircraft in the world can be turned into a robot plane. This means that even large, old aircraft can be used for kamikaze missions.


https://www.freethink.com/robots-ai/general-purpose-robots


https://www.indiatoday.in/technology/news/story/rabbit-r1-the-cute-little-pocket-size-viral-ai-device-that-can-do-everything-for-you-2487278-2024-01-11


https://learningmachines9.wordpress.com/2024/01/23/the-pocket-sized-ai-and-humanoid-robots-are-the-ultimate-compilation/

A pulsar that orbits a black hole makes it possible to test relativity. And uncover Einstein's enigma.

A pulsar that orbits a black hole makes it possible to test relativity. And uncover Einstein's enigma. 


"An artist’s impression of the system assuming that the massive companion star is a black hole. The brightest background star is its orbital companion, the radio pulsar PSR J0514-4002E. The two stars are separated by 8 million km and circle each other every 7 days. Credit: Daniëlle Futselaar (artsource.nl)" (ScitechDaily, Einstein’s Enigma: How a Mysterious Cosmic Object in Milky Way Could Test Relativity Like Never Before)

All objects in the universe are gravitational centers. Gravitation is not the only force in the universe, but it dominates and interacts over long distances. The size of the gravitational center determines how far its gravitation can interact. Things like planets are gravity centers, or actually, they are groups or entireties of gravitational waves.

Objects, or the atoms and subatomic particles that form planets and other entireties, receive electromagnetic or quantum radiation. Those objects take that radiation into their quantum fields, and sooner or later those particles' energy levels turn higher than the environment's. Then they send radiation that pushes them away from each other. Quantum gravitation means that the gravity field around single atoms and subatomic particles is very weak. The electromagnetic reflection from those atoms and particles destroys materials and entireties sooner or later.

Expansion of the universe decreases the universe's energy level all the time, and that makes sure that energy travels out from particles. Energy travel, or material vaporization, also happens inside objects, and that causes a situation where radiation that comes into the entirety pushes the outer layer outward. The universe's expansion guarantees that energy flows out from material all the time, and that energy rips material into pieces and turns it into wave movement.


PSR J0514-4002E: a pulsar that orbits a black hole. 


A new object allows researchers to test relativity better than ever before. That new and interesting object is the radio pulsar PSR J0514-4002E, and the thing that makes this pulsar interesting is that it orbits a black hole. The black hole pulls radiation from the pulsar and allows researchers to measure the curvature of spacetime. That can open roads to measuring mysterious effects like gravity and dark matter. That strange binary star also gives information about time dilation.

That object also makes it possible to research how gravitational waves act in its environment, and how other particles orbit black holes. The thing is that gravitational waves can reflect and push each other off their tracks, just like electromagnetic wave movement does. So gravity is like light or radio waves.

The curvature of spacetime means that there is a "gravitational pothole" in space. The thing that makes the time dilation is the denser quantum fields: when some object falls toward a black hole, that gravitational pothole makes quantum fields pump more energy into the object. That energy transfer is the thing called time dilation. When the object travels in the gravitational pothole, gravity causes an effect where those quantum fields touch it longer and transport energy into it.
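In general relativity, the slowdown near a non-rotating mass has a closed form: a static clock at radius r ticks slower by the factor √(1 − r_s/r), where r_s = 2GM/c² is the Schwarzschild radius. A quick calculation for a clock hovering at three Schwarzschild radii of a 10-solar-mass black hole (an illustrative mass, not the mass of this system's companion):

```python
import math

# Gravitational time dilation outside a non-rotating (Schwarzschild) mass:
# dtau/dt = sqrt(1 - r_s / r), with r_s = 2*G*M / c^2.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c ** 2

def dilation_factor(mass_kg, r):
    """Proper-time rate of a static clock at radius r (valid for r > r_s)."""
    return math.sqrt(1 - schwarzschild_radius(mass_kg) / r)

m = 10 * M_SUN
r_s = schwarzschild_radius(m)          # about 30 km
print(f"r_s = {r_s / 1000:.1f} km")
print(f"clock rate at 3 r_s: {dilation_factor(m, 3 * r_s):.3f}")
```

At three Schwarzschild radii the clock runs at about 82% of the far-away rate, and the factor goes to zero at the horizon itself.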


"Potential formation history of the radio pulsar NGC 1851E and its exotic companion star. Credit: Thomas Tauris (Aalborg University / MPIfR)" (ScitechDaily, Einstein’s Enigma: How a Mysterious Cosmic Object in Milky Way Could Test Relativity Like Never Before)


A black hole is a gravitational center just like planets and stars. But it's more massive. 


The black hole is a gravitational center just like all other gravitational centers, such as Earth, Jupiter, and the Sun. But in black holes, the quantum fields are denser than in regular gravitational objects. We can say that black holes and other gravity centers are like onions: how tightly those quantum fields are packed determines the strength of gravity.

In black holes, that onion is an extremely tightly packed set of quantum fields around the nucleus of the object. When those quantum fields oscillate, they form an electromagnetic vacuum between them, and that vacuum pulls electromagnetic fields into the black hole. So we can say that gravity is an effect that acts through the environment. The environment called spacetime is an electromagnetic quantum field that pulls particles into the gravitational center like a river carries garbage with it.


The gravity field is one of the quantum fields, just like electromagnetic fields are. 


When a supernova explodes, an electromagnetic vacuum forms around that object. Then other electromagnetic (or quantum) fields from outside that explosion press that bubble back into the place where the exploded star was before. Then those quantum fields form a structure that looks like an onion.

That crush also changes the size of those particles: it pushes electrons and all quarks into one entirety. The wave movement or quantum fields that come from outside cause oscillation in that "gravity onion", and that oscillation sends gravity waves. When a black hole sends gravity waves, it loses part of its mass.

The outcoming quantum fields keep the black hole in its form. There could be a gravitational or electromagnetic tornado in the black hole that transports energy out from it, because that quantum tornado pulls quantum fields out along the black hole's rotation axle. That means the structure acts like a thermal pump. The reason why material can escape from the black hole is that the gravity field at the event horizon is stronger than inside the black hole.

The effect is similar to the case where we fall into a gas planet like Uranus. The massive gravity around that planet forms when the planet and its atmosphere pull objects as an entirety. But when we stand on the solid core of that planet, gravity would be lower than on Earth.

When an object travels in a gravitational tornado, the internal quantum fields of the black hole pump energy into it, because gravity is a force that affects its environment. The object can escape from a black hole because the speed of light is relative: the speed of light is always relative to the speed of the environment. An object's speed relative to the speed of the quantum fields determines the speed of particles that travel in them.



"A zoom into the globular cluster NGC 1851 followed by an orbital simulation showing the original pulsar – white dwarf binary being disrupted by the arrival of a massive third body of unknown nature. The new arrival kicks the white dwarf out of orbit and captures the pulsar for itself, forming a new binary system with a pulsar in orbit around, most likely, either a light black hole or a supermassive neutron star. Credit: OzGrav, Swinburne University of Technology" (ScitechDaily, Einstein’s Enigma: How a Mysterious Cosmic Object in Milky Way Could Test Relativity Like Never Before)


Every single particle or object in the universe, from gluons to planets, is a gravitational center. Things like planets are entireties of gravitational sub-centers. 


The reason why gravity is stronger at the edge of a black hole is that a black hole pulls objects as an entirety. When objects fall into the black hole, they form independent gravity centers inside the event horizon. That means black holes are not as solid and homogeneous as people think.

Just like all particles form independent gravity centers in the gravity center called Earth, similar particles form gravitational centers around and in the event horizon. Those gravitational centers form internal gravitational waves inside objects. 

Gravitational waves can also reflect from each other, like all other wave movements. And there is a possibility that the gravitational waves that reflect from those particles and objects suppress each other.

When an object travels past the event horizon toward the middle of the black hole, there are fewer objects in front of it as it approaches the black hole's core.

In black holes, the event horizon also sends gravity waves into the black hole. Those gravity waves collide in the middle of it, and then they reflect back to the event horizon.

When radiation travels through that gravity onion, it reaches the nucleus in the middle of the black hole. That radiation packs together until it reaches a higher energy level than its environment. The thing is that when an object approaches the heart of a black hole, it faces a situation where there is less black hole, or material, ahead of it. That means the gravity level in the middle of a black hole is lower than at the edge of the event horizon.


The gravitational effect is always the same, but the strength of that effect changes. And that's why researchers can use gas planets like Uranus to make a model of how gravity interacts in a black hole.


To illustrate that, we must think about an object or planet into which we can dive. What kind of objects are gas planets? Gas planets are planets with massive atmospheres around a solid core. So when we dive into a gas giant, we dive into the planet, and we can use that model to study how gravity works in massive objects.

The idea is that gravity always has the same form and affects objects in the same way; only the strength of the field differs. Regardless of that strength, gravity fields always act the same way.

The thing that drives particles forward is the quantum field or wave movement that comes from behind. The effect where the gravity field weakens when we fall into an object is known from gas giants like Uranus. Uranus has a massive gravity field when we look at it from outside its atmosphere: outside Uranus, the gas and solid material pull objects as an entirety.

But if we dive into that massive gas giant's atmosphere and fall to its solid shell, gravity there would be lower than on Earth. The reason is that outside the planet's atmosphere, the planet and its giant atmosphere pull objects as an entirety. But if we fall into the planet, or into its massive atmosphere, there is less gravitational mass ahead of the object, even though the pressure of the atmosphere is massive.
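The comparison can be checked with Newtonian surface gravity, g = GM/r². Even though Uranus has about 14.5 times Earth's mass, its much larger radius puts g at its visible "surface" slightly below Earth's:

```python
# Newtonian surface gravity g = G*M / r^2, evaluated at each body's
# visible radius. Planetary values are standard figures, rounded.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg, radius_m):
    return G * mass_kg / radius_m ** 2

g_earth = surface_gravity(5.972e24, 6.371e6)    # ~9.8 m/s^2
g_uranus = surface_gravity(8.681e25, 2.536e7)   # ~9.0 m/s^2

print(f"Earth: {g_earth:.2f} m/s^2, Uranus: {g_uranus:.2f} m/s^2")
```

Deeper inside the planet, the shell theorem says only the mass below a given radius pulls inward, so g falls further still, which is the effect the text describes.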


https://scitechdaily.com/einsteins-enigma-how-a-mysterious-cosmic-object-in-milky-way-could-test-relativity-like-never-before/


Monday, January 22, 2024

Quantum computers can be smaller than atoms. And a new way to control biorobots.

 Quantum computers can be smaller than atoms. And a new way to control biorobots. 


"Emission of a single photon in the Maxwell fish-eye lens. Credit: Oliver Diekmann (TU Wien)" (ScitechDaily.com/Quantum Ping-Pong: The New Era of Atomic Photon Control)

Nanotechnology requires new types of computers. Nanomachines can control genetically engineered bacteria by turning a DNA plasmid like a rudder. 

But let's begin with quantum ping-pong. DNA plasmids can also act as chemical qubits. 



"Maxwell fish-eye lens with two atoms. A photon (green) is traveling between the two atoms along the curved light rays (white). Credit: Oliver Diekmann (TU Wien)" (ScitechDaily.com/Quantum Ping-Pong: The New Era of Atomic Photon Control)


Quantum ping-pong is the new atomic way to control light. 


The ability to control light, or photon flow, at the single-atom or subatomic level brings new ways to make quantum computers. Theoretically, quantum computers could be proton- or neutron-sized. The system transports information through the electron shells to the quarks, and then the quantum system puts the quarks into superposition and makes quantum entanglement between them. There is also the possibility of making quantum computers using superpositioned and entangled electrons on the orbitals around the atom's core.

That would allow a new type of effective, atom-sized quantum computer with two or three parts, and it would turn those atom-sized quantum computers into an "iron-based AI". The electrons, and the quarks inside the protons and neutrons, form three different entities. If the system does not use quarks for the superposition, it can create superposition and entanglement between protons and neutrons as whole entities. 
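Whatever the physical carrier (quarks, nucleons, or orbital electrons), the superposition-plus-entanglement operation the text describes is the same abstract circuit. A toy state-vector sketch with two generic qubits (not a model of any actual atom-sized hardware):

```python
import numpy as np

# Single-qubit and two-qubit gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                 # controlled-NOT: entangles
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)  # both qubits start in |00>
state = np.kron(H, I) @ state                  # superpose the first qubit
state = CNOT @ state                           # entangle it with the second

# Result is the Bell state (|00> + |11>)/sqrt(2): measurement outcomes
# of the two qubits are perfectly correlated.
probs = np.abs(state) ** 2
print(probs.round(3))  # [0.5 0.  0.  0.5]
```

The point of the sketch is only that "put into superposition, then entangle" is a two-gate sequence, independent of whether the qubits live in electrons or nucleons.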

That kind of atom-sized quantum computer could revolutionize nanotechnology. If those miniature quantum computers were connected to nano-sized microchips, that would allow independently operating nanomachines. The atomic-sized components would let the system run complicated code. And the qubits could sit in miniature thermos-bottle-like enclosures. 



Chemical qubits and biorobots. 


It's possible to create a ring-shaped protein with small magnesite bits on it. The system writes data into those magnesite bits, and the computer then transports this enzyme to a gate. If there are eight magnesite contact layers, the system can cut the data into eight pieces and drive each piece into one of eight wires, and each of those wires is one state of the qubit. 
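The eight-contact idea above is, abstractly, just fanning one byte out over eight parallel lines and reassembling it on the other side. A minimal sketch (the "magnesite contact layers" are modeled as nothing more than list slots, and the helper names are hypothetical):

```python
def to_wires(byte, n_wires=8):
    """Cut a byte into n_wires one-bit signals, least significant bit first."""
    return [(byte >> i) & 1 for i in range(n_wires)]

def from_wires(wires):
    """Reassemble the byte from the per-wire signals."""
    return sum(bit << i for i, bit in enumerate(wires))

wires = to_wires(0b10110010)          # one piece of data per "contact layer"
print(wires)                          # [0, 1, 0, 0, 1, 1, 0, 1]
print(from_wires(wires) == 0b10110010)  # True: lossless round trip
```

The same split-and-reassemble pattern applies whether the eight lines are wires, contact layers, or (as below) bacteria.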


Plasmid wheel as a qubit. 


This kind of thing can act as a chemical qubit. If there are bacteria that can create electric impulses, synthetic DNA or RNA can make those bacteria produce the impulses, which lets them send data to a non-organic binary computer. There is the possibility that a nano-sized computer and a DNA factory encode the data into DNA, and the system then turns that DNA into a plasmid wheel. 

Then it cuts that plasmid into four (or more) pieces and transports them into four bacteria. Those bacteria then send information into the lines, and each of those (in this case) four lines is one state of the qubit. These kinds of systems can be used to control biorobots. 



There could be a series of DNA plasmids in genetically engineered bacteria. In that case, in electric bacteria, an individual DNA string can operate each of the electric impulses that those bacteria send. Those artificial organisms could be the gate between biocomputers and regular binary computers. The DNA factory can encode data into DNA form, and the artificial electric bacteria can then translate that data for the computer systems. 
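"Encoding data into DNA form" has a standard minimal version: two bits per base (A=00, C=01, G=10, T=11). A sketch of just that mapping, with the DNA factory and the bacteria deliberately left out of the model:

```python
BASES = "ACGT"  # A=00, C=01, G=10, T=11: two bits per base

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four DNA bases, most significant pair first."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(dna: str) -> bytes:
    """Decode groups of four bases back into bytes."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        b = 0
        for base in dna[i:i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)

encoded = bytes_to_dna(b"Hi")
print(encoded)                         # CAGACGGC
print(dna_to_bytes(encoded) == b"Hi")  # True: lossless round trip
```

Real DNA data storage adds error correction and avoids long single-base runs, but the two-bits-per-base idea is the core of it.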

Biorobots and nanotechnology: a nanomachine can turn a DNA plasmid like a small rudder. 


Biorobots are genetically manipulated bacteria. The DNA plasmid controls the bacteria's operations. A nanomachine that turns the DNA plasmid into the right position can steer the bacteria in the direction the controllers want. The DNA plasmid is like a rudder that the nanomachine turns. 

Programming the bacteria can be based on synthetic DNA or RNA bits. Those bits can be connected to a protein or enzyme. The enzyme is like a long tape that normally catalyzes acid reactions. The reason some enzymes do that is simple: there are acid molecules on the enzyme, and when it moves above a layer, that tape-like molecule brings fresh acid molecules to the point. 

There are other ways enzymes can catalyze acid reactions. But this kind of enzyme tape can also carry DNA bits to the right point in the bacterium. If those wheel-like DNA molecules are transported near the nanomachine that turns the DNA plasmid, the nanomachine can destroy the old plasmid and then take a new one from the enzyme tape. 


https://scitechdaily.com/quantum-ping-pong-the-new-era-of-atomic-photon-control/


https://learningmachines9.wordpress.com/2024/01/22/quantum-computers-can-be-smaller-than-atoms-and-a-new-way-to-control-biorobots/


Sunday, January 21, 2024

The new technology allows researchers to see how memory and cognition happen in real-time.




"A research team developed SynapShot, a novel technique for real-time observation of synapse formation and alterations. This breakthrough, allowing live monitoring of synaptic changes in neurons, is expected to transform neurological research and enhance understanding of brain functions. Credit: SciTechDaily.com"(ScitechDaily.com, SynapShot Unveiled: Observing the Processes of Memory and Cognition in Real Time)




Figure 1. To observe dynamically changing synapses, dimerization-dependent fluorescent protein (ddFP) was expressed to observe fluorescent signals upon synapse formation, as ddFP enables fluorescence detection through reversible binding to pre- and postsynaptic terminals. Credit: KAIST Optogenetics & RNA therapeutics Lab (ScitechDaily.com, SynapShot Unveiled: Observing the Processes of Memory and Cognition in Real Time)




"Figure 2. Microscopic photos observed through changes of the fluorescence of the synapse sensor (SynapShot) by cultivating the neurons of an experimental rat and expressing the SynapShot. The changes in the synapse that is created when the pre- and post-synaptic terminals come into contact and the synapse that disappears after a certain period of time are measured by the fluorescence of the SynapShot. Credit: KAIST Optogenetics & RNA therapeutics Lab"

The text and images above are from: SynapShot Unveiled: Observing the Processes of Memory and Cognition in Real Time (scitechdaily.com)


XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX


Dimerization-dependent fluorescent proteins (ddFP) make real-life brain hacking possible.


The protein called dimerization-dependent fluorescent protein, or ddFP, can unveil synapse actions in real time. It shows how the brain, memory, and cognition act as they happen. This kind of tool can be connected with AI-controlled systems, and then the next-generation mind-reading instrument is ready. The system can cooperate with generative AI.

That lets the system see how cognition and memory work. This kind of information is important when we think about the possibility of creating BCI-controlled applications. Those applications could make it possible to transfer memories between people. In some visions, those systems could also bring people's memories onto computer screens. And this kind of brain-reading tool could bring animals' memories to the laptop screen as well.

The ability to see how memory and cognition work in the brain helps researchers create cloned neurons that can carry their own memories. Memories contain the skills that humans have. Researchers can make cloned neurons to fix neural damage. But the problem is this: when a neuron is destroyed, all the skills and other memories stored in it are gone. This means something must put the memories into the neuron transplant.

Transplanting cloned neurons to fix damaged neural structures is a good idea. However, the neural transplant must carry the same skills and memories that were stored in the destroyed neurons. The ddFP protein is a good candidate for the tool the system uses to control that process.

The really fundamental thing is that those sensors can see what animals think. That could open new ways to communicate between species.

The new brain scanners open the gate to seeing what kinds of dreams animals have while they sleep. Information about animals' dreams helps us understand why we spend so much of our lives sleeping. And knowledge about animals' thoughts helps us create better conditions for animals and protect wildlife.


https://scitechdaily.com/synapshot-unveiled-observing-the-processes-of-memory-and-cognition-in-real-time/ 


https://learningmachines9.wordpress.com/2024/01/21/the-new-technology-allows-researchers-to-see-how-memory-and-cognition-happen-in-real-time/

Researchers can use information about the "diamond rain" on icy planets to form industrial diamonds.



"The graphic shows the diamond rain inside the planet, which consists of diamonds sinking through the surrounding ice. Pressure and temperature continuously increase on the way deeper inside the planet. Even in extremely hot regions, the ice remains due to the extremely high pressure. Credit: European XFEL / Tobias Wüstefeld" (ScitechDaily.com, “Diamond Rain” on Icy Planets: Unlocking Magnetic Field Mysteries)


"A new study reveals that “diamond rain” on icy planets like Neptune and Uranus forms under less extreme conditions than previously believed. This phenomenon influences the planets’ internal dynamics and magnetic fields and could also occur on smaller exoplanets." (ScitechDaily.com/“Diamond Rain” on Icy Planets: Unlocking Magnetic Field Mysteries)

The diamond rain on icy planets can tell us about their magnetic fields. Researchers can use that information about the formation of diamond rain on Uranus and Neptune for the production of small diamonds, which are useful as antennas in nanotechnology and in new types of optical microchips.  

The new information tells us that diamonds form through a combination of pressure, gravity, and magnetic field. That means it may be possible for researchers to recreate those conditions in the laboratory. The key point about diamond rain is that it forms at lower pressure, or higher in the atmosphere, than previously thought. 

"A new experiment suggests that this exotic precipitation forms at even lower pressures and temperatures than previously thought and could influence the unusual magnetic fields of Neptune and Uranus."  (ScitechDaily.com/“Diamond Rain” on Icy Planets: Unlocking Magnetic Field Mysteries)

It may be possible for researchers to copy, in a laboratory chamber, the conditions on the layer where the diamonds form. If that is possible, researchers can create a new way to make artificial diamonds. Previously, that technology required high pressure and temperature, which meant that diamond production required high-pressure chambers.

But if "cold" technology is possible, it could open a new way to create industrial diamonds. Maybe the next-generation chamber for artificial diamonds is an "X-shaped" wind tunnel where methane or ammonia gas flows cross, and the molecular impacts reduce the carbon. And maybe those carbon atoms can then start to collect more carbon atoms from the hydrocarbon-bearing gas flow. 

It's also possible that magnetic or laser systems could press methane or other hydrocarbon molecules together and strip away the hydrogen. Then the gas flow carrying those proto-diamonds would start to reduce carbon from other hydrocarbon molecules, starting the growth of the carbon crystals. 

Nanocrystals can be used as the stylus of a scanning tunneling microscope. An atom can hover between the layer and a stylus whose tip is the size of one carbon atom. That system can scan layers with extremely high accuracy. 

The form of those raining diamonds is interesting because they are suitable for nanotechnology. In nanotechnical antennas, diamonds conduct electromagnetic radiation, or pressure waves in photoacoustic systems. In photoacoustic systems, laser rays make the carbon atoms oscillate, and that oscillation is audible as sound waves. Such a system can transmit data, or it can move small particles on a layer. 


https://scitechdaily.com/diamond-rain-on-icy-planets-unlocking-magnetic-field-mysteries/


https://en.wikipedia.org/wiki/Scanning_tunneling_microscope


https://learningmachines9.wordpress.com/researchers-can-use-information-about-the-diamond-rain-on-icy-planets-to-form-industrial-diamonds/


The mathematical work that shakes the world.

"As a graduate student, Maryam Mirzakhani (center) transformed the field of hyperbolic geometry. But she died at age 40 before she coul...