
Wednesday, June 18, 2025

Researchers have found interesting things in the brain.




Researchers have found interesting new details in the human brain. First, our brains emit light. That raises an interesting idea: could brains also have an optical, photon-based way to transmit data between neurons? And if neurons have that ability, how effective and versatile is it? We assume that there is nothing unnecessary in our brains, so the light must have something to do with the neurons. But do neurons use it to transmit complicated information, or is it only for clearing the neural channels? 

That interesting light raises another question: is the effect connected to the light that people report seeing in near-death experiences? Near death, the neural channels empty of neurotransmitters and electrical activity, which could make the nerves more receptive to weak signals than usual. So could that light mediate some kind of interaction between neurons or axons? Is there some point on the neuron that reacts to that ultra-weak photon emission (UPE)? 

Does our own neural activity normally drown that light out? It has been observed that dead organisms emit a dimmer light than living ones, and the light may grow dimmer as a creature approaches death. An article in the Journal of Physical Chemistry Letters, “Imaging Ultraweak Photon Emission from Living and Dead Mice and from Plants under Stress”, reports that the ultra-weak photon emission from dead mice and stressed plants turns dimmer. 

And the question is this: can humans perceive that ultra-weak photon emission and its changes subliminally? The article says that all living organisms emit a weak light that disappears when the creature dies. Mammals also radiate in the infrared, but the main question is whether the ultra-weak photon emission (UPE) happens on purpose or is some kind of leak. And can humans detect the phenomenon even though the observation never reaches our consciousness? 

There are two things that self-learning systems must do to become effective. The first is that the AI, like the human brain, should ignore irrelevant information; otherwise the databases and the data mass in the system grow without limit. When the system makes a decision, it must select the right data from the data it has and then decide using that relevant data. This makes the situation problematic: the system must decide what kind of data it will need in the future. 
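The pruning idea above can be sketched in a few lines. This is a minimal illustration of my own, not any specific system: each stored item gets a relevance score, and only the highest-scoring fraction is kept so the database does not grow without bound.

```python
def prune_memory(items, score, keep_ratio=0.5):
    """Keep only the highest-scoring fraction of stored items."""
    ranked = sorted(items, key=score, reverse=True)
    cutoff = max(1, int(len(ranked) * keep_ratio))
    return ranked[:cutoff]

observations = ["signal-A", "noise-1", "signal-B", "noise-2"]
# Hypothetical scoring rule for the example: "signal" items are relevant.
relevant = prune_memory(observations, score=lambda x: x.startswith("signal"))
# relevant == ["signal-A", "signal-B"]
```

The hard part the text points out is hidden in the `score` function: predicting which data will matter in the future is exactly what is difficult.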

And that is quite hard to predict. When we learn something, we cannot be sure we will ever need the skill again; maybe we never again use the skills we learned in the military. But as we see, the future is hard to predict. The second thing the AI must do is to schedule its processors' actions the way human brains do. In the human brain, cells oscillate at multiple different frequencies, and scientists say those differences in oscillation frequency exist to avoid rush hours in the axons. 

In other words, the different rhythms give brain cells time to clear the axons. Because brain cells run at different frequencies, the brain can control its axons and prevent multiple neurons from sending data into the same axon at the same time. Those multiple rhythms let the brain avoid rushes in the axons. The same idea could be a fundamental advance in technology. If the system that runs an AI has no controller, no architecture that makes the processors operate at slightly different times, then all processors can send data to the same data gate at the same moment. That causes a rush and jams the system immediately, at least in systems that use electrical impulses for data transmission. 
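The staggering argument can be shown with a toy simulation (my own illustration, not a real bus protocol): each sender transmits once per `period` ticks, shifted by its phase, and we count the ticks where two or more senders hit the shared channel at once.

```python
def collisions(phases, period=4, ticks=20):
    """Count ticks where two or more senders use the channel at once."""
    clashes = 0
    for t in range(ticks):
        senders = sum(1 for p in phases if t % period == p)
        if senders > 1:
            clashes += 1
    return clashes

synchronized = collisions([0, 0, 0])   # all senders share one phase -> 5 clashes
staggered = collisions([0, 1, 2])      # phases spread like neural rhythms -> 0
```

With identical phases every transmission is a collision; offsetting the phases, the way the brain's mixed oscillation frequencies are suggested to do, removes them entirely in this toy case.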

Processors that operate at the same moment can form standing waves in the data channel, and that can burn the system. There are many interesting details in the human brain, and they open the vision that brains might also have an optical way to transport information. Researchers are trying to find out the purpose of that light; if they find a point in, or on, neurons that reacts to it, they will have found a new level in the brain. Another interesting detail is that different parts of the same neuron learn in different ways. That means the neuron itself can be more intelligent and versatile than we thought. 


https://neurosciencenews.com/hippocampus-neuron-rhythm-29277/


https://www.psypost.org/different-parts-of-the-same-neuron-learn-in-different-ways-study-finds/


https://www.psypost.org/neuroscientists-discover-biological-mechanism-that-helps-the-brain-ignore-irrelevant-information/


https://pubs.acs.org/doi/10.1021/acs.jpclett.4c03546


Tuesday, May 13, 2025

The new quantum gravity model takes us closer to the Theory of Everything (TOE).




"A quantum theory of gravity would clear the path to answering some of the biggest questions in physics. Credit: SciTechDaily.com" (ScitechDaily, Gravity’s Quantum Secret: “Theory of Everything” Could Unite the Forces of Nature)


The question of the form of gravity is interesting. Gravity is the only known force that affects light, and that makes it special. Another thing is that gravity seems to have no pushing component, which also makes it interesting. If we think that all particles have gravity fields, and that objects like planets are wholes made of those particles, then an object's gravity field is the sum of its particles' gravity fields. 

In this model, every particle is a quantum dot, a quantum gravity center. The reason an Earth-sized black hole has a stronger gravity field than Earth is that it packs more of those quantum dots into the same volume than the planet does. A black hole can form when the quantum dots are very close together or merged. What determines the strength of the gravity field is the density of those quantum gravity dots, so in principle we could turn any object into a black hole by pushing its quantum gravity dots together. 
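The claim that compressing anything far enough makes a black hole matches standard physics: any mass becomes a black hole if squeezed inside its Schwarzschild radius, r_s = 2GM/c². A quick calculation shows how extreme the required density is; for Earth's mass the radius is under a centimeter.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius below which a mass of mass_kg collapses into a black hole."""
    return 2 * G * mass_kg / C**2

earth_rs = schwarzschild_radius(5.972e24)   # about 0.0089 m, i.e. ~9 mm
```

So the "density of the quantum gravity dots" picture at least agrees with the textbook fact that gravity field strength at a fixed size is set by how much mass is packed into that size.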

For the Theory of Everything (TOE), that suggests a model in which the distance between those quantum dots determines which of the fundamental interactions, the strong and weak nuclear forces, electromagnetism, or gravity, is in play. We can also make a model in which superstrings, extremely thin energy tubes, travel through those quantum gravity points. If those strings are extremely small wormholes, that could be one reason gravity acts the way it does: if something cuts those strings, they start to pull things through them. 

The idea of the wormholes is that they must be extremely long and that the energy traveling through them keeps them open. An extremely long wormhole would let things travel in time. The expansion of the universe raises the energy level at one end of that energy tube so high that the future, with its lower energy field, can pull information through it. The information flow must be strong enough that the outside energy cannot press the energy tube shut. 

Finnish researchers from Aalto University have made models that should connect quantum field theory and Einstein's theory of relativity. That is a great step toward the Theory of Everything (TOE). The model is interesting, and the researchers have published it; you can find it through the article linked below this text. One of the most interesting questions about gravity and its special nature is this: 


Can there be two or maybe three forms of gravity? 


1) Gravity as a field effect. In that case, gravity is a field that travels into the gravity center. The gravity center, or spinning particles, can bind those fields inside themselves, which means the system rolls the fields into itself. The field is like a river that takes everything with it. 

2) Gravity might also have a wave effect. That means gravity waves could have a shape that makes objects fall toward the gravity center. And if there is a step, or a false vacuum, just before the bottom of the gravity wave, that too would make objects fall toward the gravity center, as I just wrote. 


The idea is that the gravity waves are at lower energy levels, or deeper, the closer the observer gets to the gravity center. The reason could be a phenomenon similar to false vacuum decay, or the gravity wave could stretch, which would make it lower. That would explain why surrounding energy, trying to fill the gravity wave as if it were a ditch, pushes particles toward the gravity center. 

There might be a limit, or increasing mass may make the gravity waves less dominant than the field. Small-mass objects can send gravity waves, but as an object's mass rises, the field's pull becomes more dominant. A high-mass object simply rotates or binds the quantum field around itself like dough. But does that happen only because a massive object sends gravity waves so often? 

The last model we can mention is virtual gravity. In some versions, things like electromagnetism cause a virtual gravity effect. In another version, a particle's upper side has a higher energy level than its bottom. If that particle starts to rotate like a paddle wheel, the kind used in some early steamships, it gets energy from above, and if it rotates horizontally it forms an energy wave below itself. That energy wave pushes the particle forward. 


https://scitechdaily.com/gravitys-quantum-secret-theory-of-everything-could-unite-the-forces-of-nature/

 

Tuesday, January 10, 2023

Can ChatGPT replace Google? The story of "productive" AI.

  



Machine learning is the ultimate tool, but we are still far from systems that learn like a human. In the simplest model, a learning machine records its environment, and the human user stores those recordings for future use. The next time, the AI can automatically adjust the system by using those recorded values. So a human is needed to make the pilot case, and then the system just repeats that case in the cases that follow the first one. 

When we think of the AI as an artist, we can select a couple of paintings from the net for the AI's database. The AI can then split each image into three or more areas and connect those areas into a "new" painting. Possibly the number of stripes depends on the number of selected paintings or other images.  

The AI then takes the top area from the first painting, the second area from the second painting, and so on. So the AI is not producing anything new; it just recycles the old. 
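The stripe-collage idea can be shown with a toy version (my own sketch of the analogy, not a real image pipeline): each "painting" is a list of horizontal rows, and the collage takes the first band of rows from painting 0, the next band from painting 1, and so on.

```python
def stripe_collage(paintings):
    """Take the i-th horizontal band of rows from the i-th painting."""
    rows = len(paintings[0])
    band = rows // len(paintings)          # rows per stripe
    collage = []
    for i, painting in enumerate(paintings):
        collage.extend(painting[i * band:(i + 1) * band])
    return collage

a = ["a0", "a1", "a2", "a3"]               # placeholder rows of painting A
b = ["b0", "b1", "b2", "b3"]               # placeholder rows of painting B
new_image = stripe_collage([a, b])         # ["a0", "a1", "b2", "b3"]
```

Note that, exactly as the text says, nothing in the output is new: every row comes verbatim from one of the inputs.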

ChatGPT-type artificial intelligence may take Google's position someday in the future. What such a system requires is a good database, and it must also learn to read. When people use ChatGPT, they grow its database of home pages. But the problem with the AI is that it doesn't know what it is reading on those home pages. It can use Google search and find a series of home pages about a certain topic, but I think that, still today, 

it uses the home pages that Google or some other search engine has indexed. Then it simply connects paragraphs from those pages and makes new pages out of that data. If we want to simulate, in a real-life physical environment, the way the AI handles data, we can give the keywords to a child who types them into the address line. 

Then that child, who doesn't speak English, selects the first five pages from the list, copies the first paragraph from the first page, the second paragraph from the second page, the third paragraph from the third page, and so on. The result can be perfect, but it can also be horrible. 
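The "child with five pages" analogy reduces to a few lines: take the i-th paragraph from the i-th search result. The page contents here are placeholders, not real search output.

```python
def paragraph_collage(pages):
    """Pick paragraph i from page i, as far as the pages allow."""
    result = []
    for i, paragraphs in enumerate(pages):
        if i < len(paragraphs):
            result.append(paragraphs[i])
    return result

pages = [
    ["p1-a", "p1-b", "p1-c"],   # paragraphs of search result 1
    ["p2-a", "p2-b", "p2-c"],   # paragraphs of search result 2
    ["p3-a", "p3-b", "p3-c"],   # paragraphs of search result 3
]
text = paragraph_collage(pages)   # ["p1-a", "p2-b", "p3-c"]
```

Like the child in the analogy, the function never checks whether the chosen paragraphs make sense together, which is why the result can be perfect or horrible.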

What makes the AI problematic is that it uses only the home pages that are listed for it. The system can search those pages for certain words and select the paragraphs that involve most of those words. But because the system cannot read, the results still need human controllers, or they will be full of fatal errors. I would not use AI for my doctoral thesis. 

The AI can check a computer program's code. It can see whether variables are used, or whether variable names are written correctly. In that case, it simply compiles a list of the variables and the lines the programmer has written. If a variable looks like some other, similar variable, the system suggests how the programmer should write the variable's name. 

In that process, the AI searches those variables for details: it looks for similarities between the declared names and the text written in the editor. This is one reason why all variables should have unique names that are easy to tell apart from each other. 
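This kind of typo check can be sketched with fuzzy string matching. `difflib.get_close_matches` is a real Python standard-library call; the variable names are invented for the example.

```python
import difflib

def suggest_name(used_name, declared_names, cutoff=0.8):
    """Suggest the closest declared variable name for a possible typo."""
    matches = difflib.get_close_matches(used_name, declared_names,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

declared = ["total_price", "item_count", "tax_rate"]
suggestion = suggest_name("total_pric", declared)   # "total_price"
```

This also shows why the text's advice matters: if two declared names are nearly identical, the closest match for a typo becomes ambiguous, so names that are easy to tell apart make the suggestion reliable.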

When an AI controls things like chemical processes, there are many ways to make the system learn. The simplest model is to run the chemical test in the laboratory while the system records the conditions, such as the ratios in the gas mixtures and the temperature of the chemical environment. 

If the maker of the experiment is satisfied, that person stores the result on the computer, and the system can then automatically adjust itself by using those values in the next case. And if the system uses scanning laser microscopes or something similar, it can follow the trajectories of the components. 
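The record-and-replay workflow described above can be sketched as follows (the values and variable names are hypothetical): store the conditions of a successful lab run, then nudge each measured value of the live process a fraction of the way toward its stored setpoint.

```python
# Conditions recorded from a successful pilot run (invented numbers).
stored_run = {"temperature_c": 80.0, "gas_ratio": 0.25}

def adjust(current, setpoints, gain=0.5):
    """Move each measured value part of the way toward its setpoint."""
    return {key: value + gain * (setpoints[key] - value)
            for key, value in current.items()}

state = {"temperature_c": 60.0, "gas_ratio": 0.40}
state = adjust(state, stored_run)
# state is now roughly {"temperature_c": 70.0, "gas_ratio": 0.325}
```

Repeating the `adjust` step moves the process asymptotically onto the recorded conditions, which is the replay half of the pilot-case idea.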

