"Noland Arbaugh, a 29-year-old paralyzed patient involved in the Neuralink trials, navigates an electronic chess game played on a large screen." (Interesting Engineering, Watch: Musk shares Neuralink’s ‘telepathy’ breakthrough in human trials)
The Neuralink brain implant appears to be working well: the first human recipient demonstrated his chess-playing skills using only the chip, which shows the microchip is functioning as intended. In principle, such microchips could be used to control many kinds of wireless equipment, and brain implants of this kind could significantly improve the quality of life of paralyzed patients.
Perhaps patients could also control exoskeletons, so-called "wearable robots", with their EEG signals. Such robots would let them move around hospitals, or even on the streets. The technology is advancing, and its adoption path may resemble that of pacemakers: in the past, pacemakers were a last resort for cardiac patients.
Today they are common auxiliary devices for people with cardiac problems, and perhaps quite soon we will see Neuralink or similar systems in people's heads. To operate well, these systems must collect EEG data, and they could make life far better for people who are otherwise confined to their beds.
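To make the EEG-control idea concrete, here is a deliberately simplified sketch in Python. The sampling rate, the threshold, and the command names are all made up for illustration; real BCIs use trained classifiers over many channels, but the basic principle of mapping brain-signal features to device commands looks roughly like this.

```python
import numpy as np

FS = 256  # sampling rate in Hz (typical for consumer EEG headsets)

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(window, threshold=1000.0):
    """Map one second of EEG to a motor command.

    A strong beta-band (13-30 Hz) response is treated here as an
    'intent to move' signal; real systems use trained classifiers,
    not a single fixed threshold.
    """
    beta = band_power(window, FS, 13.0, 30.0)
    return "MOVE_FORWARD" if beta > threshold else "IDLE"

# Synthetic one-second window: a 20 Hz (beta-band) oscillation plus noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
window = 5.0 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 0.5, FS)
print(decode_command(window))  # a strong 20 Hz rhythm reads as movement intent
```

An exoskeleton controller would run a loop like this many times per second, feeding the decoded commands to the motors.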
New systems and raw materials are enabling the creation of new and more effective devices. Next-generation neural chips are coming, and graphene-based interfaces reportedly do not trigger the immune response that conventional electrodes can. Those graphene-based communication chips could let people do many things they could never do before.
Microchip-controlled mobile systems could let a person connect themselves directly to the internet, merging into a singularity with the network and with network-based artificial intelligence. The microchip might make that connection itself, or it might connect through a mobile phone or computer. An ultimate BCI (Brain-Computer Interface) that delivers electrical stimulation to the sensory areas of the cerebral cortex could leave a person unable to tell the difference between reality and virtual reality.
Brain-implanted microchips allow the system to transmit EEG, which means thoughts could be shared and received between people, merging them into a kind of singularity. A person could control robots like external bodies, and an AI-based system could communicate through human-looking robots or drones. A person could even transmit a sense of touch, or emotions, to another person. That possibility is one of the most fascinating, and most feared, aspects of brain- or neuro-implanted microchips.
The seven stages of artificial intelligence and the BCI
New types of BCI systems are prompting new thoughts about AI development. The classic model of the famous seven stages of AI is:
Stage 1 – Rule-Based Systems
Stage 2 – Context Awareness and Retention
Stage 3 – Domain-Specific Expertise
Stage 4 – Reasoning Machines
Stage 5 – Self-Aware Systems / Artificial General Intelligence (AGI)
Stage 6 – Artificial SuperIntelligence (ASI)
Stage 7 – Singularity and Transcendence
https://technologymagazine.com/ai-and-machine-learning/evolution-ai-seven-stages-leading-smarter-world
More detailed descriptions of these stages appear later in this text. The question is: can we, or AI developers, skip a stage? Could we step straight from domain-specific expertise to singularity and transcendence?
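To make Stage 1 concrete: a rule-based system is nothing more than fixed if-then rules written by a human, with no learning and no context. A minimal, entirely hypothetical sketch (the thresholds and commands are invented for illustration):

```python
# A Stage 1 system is just hand-written if-then rules: it never learns,
# never adapts, and knows nothing outside its rules. This toy thermostat
# illustrates the idea; the temperature thresholds are made up.

def thermostat(temp_c):
    """Return a heater command from fixed, hand-written rules."""
    if temp_c < 18.0:
        return "HEAT_ON"
    if temp_c > 22.0:
        return "HEAT_OFF"
    return "HOLD"

for reading in (15.5, 20.0, 25.0):
    print(reading, thermostat(reading))
```

Every later stage adds something this sketch lacks: memory of context (Stage 2), superhuman domain knowledge (Stage 3), reasoning about other minds (Stage 4), and so on.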
In philosophical transcendence, the experience is unique and individual, not dependent on a group or species. When we speak of living in virtual reality, we might mean the stage at which an AI stimulates our brain with synthetic EEG. Stimulation that bypasses our natural senses would make a person unable to separate those synthetic signals from reality.
That kind of ultimate virtual world would let a person fly, or it would make technical remote viewing possible. In technical remote viewing, a person connected to (or trapped in) an internet-based VR singularity would link their brain to remote surveillance cameras or drones, letting them soar over mountains or be a fly on the wall.
https://interestingengineering.com/innovation/watch-musk-shares-neuralinks-telepathy-breakthrough-in-human-trials
https://scitechdaily.com/revolutionary-graphene-interfaces-set-to-transform-neuroscience/
https://technologymagazine.com/ai-and-machine-learning/evolution-ai-seven-stages-leading-smarter-world
https://en.wikipedia.org/wiki/Transcendence_(philosophy)
****************************************************************
The seven stages of AI (quoted from Technology Magazine)
Seven stages in the future evolution of Artificial Intelligence
To provide some clarity on the possible development path, we see seven distinct stages in the evolution of AI’s capabilities:
Stage 1 – Rule Based Systems – These now surround us in everything from business software (RPA) and domestic appliances through to aircraft autopilots. They are the most common manifestations of AI in the world today.
Stage 2 – Context Awareness and Retention – These algorithms build up a body of information about the specific domain they are being applied in. They are trained on the knowledge and experience of the best humans, and their knowledge base can be updated as new situations and queries arise. The most common manifestations include chatbots, often used in frontline customer enquiry handling, and the “roboadvisors” that are helping with everything from suggesting the right oil for your motorbike through to investment advice.
Stage 3 – Domain Specific Expertise – These systems can develop expertise in a specific domain that extends beyond the capability of humans because of the sheer volume of information they can access to make each decision. We have seen their use in applications such as cancer diagnosis. Perhaps the most commonly cited example is Google Deepmind’s AlphaGo. The system was given a set of learning rules and the objective of winning, and it then taught itself how to play Go with human support to nudge it back on course when it made poor decisions. Go reportedly has more moves than there are atoms in the universe – so you cannot teach it in the same way as you might with a chess-playing program. In March 2016, AlphaGo defeated the 18-time world Go champion Lee Sedol by four games to one.
The following year, AlphaGo Zero was created, and given no guidance or human support. Equipped only with her learning rules, she watched thousands of Go games and developed her own strategies. After three days she took on AlphaGo and won by 100 games to nil. Such applications are an example of what is possible when machines can acquire human scale intelligence. However, at present they are limited to one domain and currently, AlphaGo Zero would forget what she knows about playing Go if you started to teach her how to spot fraudulent transactions in an accounting audit.
Stage 4 – Reasoning Machines – These algorithms have a “theory of mind” - some ability to attribute mental states to themselves and others e.g. they have a sense of beliefs, intentions, knowledge, and how their own logic works. Hence, they have the capacity to reason, negotiate, and interact with humans and other machines. Such algorithms are currently at the development stage, but we can expect to see them in commercial applications in the next few years.
Stage 5 – Self Aware Systems / Artificial General Intelligence (AGI) - This is the goal of many working in the AI field – creating systems with human like intelligence. No such applications are in evidence today, however some say we could see them in as little as five years, while others believe we may never truly achieve this level of machine intelligence. There are many examples of AGI in the popular media ranging from HAL the ship computer in 2001 A Space Odyssey, to the “Synths” in the television series Humans. For decades now, writers and directors have tried to convey a world where the machines can function at a similar level to humans.
Stage 6 – Artificial SuperIntelligence (ASI) – This is the notion of developing AI algorithms that are capable of outperforming the smartest of humans in every domain. Clearly, it is hard to articulate what the capabilities might be of something that exceeds human intelligence, but we could imagine ASI solving current world problems such as hunger and dangerous climate change. Such systems might also invent new fields of science, redesign economic systems, and evolve wholly new models of governance. Again, expert views vary as to when and whether such a capability might ever be possible, but few think we will see it in the next decade. Films like Her and Ex Machina provide interesting depictions of the possibilities in a world where our technology might outsmart us.
Stage 7 – Singularity and Transcendence – This is the notion that the exponential development path enabled by ASI could lead to a massive expansion in human capability. We might one day be sufficiently augmented and enhanced such that humans could connect our brains to each other and to a future successor of the current internet. This “hive mind” would allow us to share ideas, solve problems collectively, and even give others access to our dreams as observers or participants. Taking things a stage further, we might also transcend the limits of the human body and connect to other forms of intelligence on the planet – animals, plants, weather systems, and the natural environment. Some proponents of the singularity such as Ray Kurzweil, Google’s Director of Engineering, suggest that we could see the Singularity happen by 2045 as a result of exponential rates of progress across a range of science and technology disciplines. Others argue fervently that it is simply impossible and that we will never be able to capture and digitise human consciousness.
https://technologymagazine.com/ai-and-machine-learning/evolution-ai-seven-stages-leading-smarter-world
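The self-play learning described under Stage 3 can be illustrated with a toy sketch: tabular Monte-Carlo self-play on the game of Nim, where players alternately take 1-3 sticks and whoever takes the last stick wins. Everything here (the game, the parameters, the update rule) is chosen for illustration and has nothing to do with AlphaGo's actual architecture; it only shows the core idea of a system given nothing but the rules and a win signal, learning by playing itself.

```python
import random

random.seed(0)
Q = {}  # (pile_size, sticks_taken) -> learned value estimate

def choose(pile, eps):
    """Epsilon-greedy action selection over the legal moves."""
    actions = [a for a in (1, 2, 3) if a <= pile]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((pile, a), 0.0))

def train(episodes=20000, alpha=0.5, eps=0.2, start=10):
    """Self-play: one shared value table plays both sides of Nim.

    After each game, every move is credited +1 if its mover went on
    to win and -1 if the mover lost; the rewards alternate in sign
    when walking backwards because the players alternate turns.
    """
    for _ in range(episodes):
        pile, history = start, []
        while pile > 0:
            a = choose(pile, eps)
            history.append((pile, a))
            pile -= a
        reward = 1.0  # the player who emptied the pile won
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[state, action] = old + alpha * (reward - old)
            reward = -reward

train()
# With no built-in game knowledge, the table learns endgame play:
# from small piles the greedy move takes everything and wins.
print(choose(2, eps=0.0), choose(3, eps=0.0))
```

The gap between this toy and AlphaGo Zero (deep networks, tree search, a vastly larger state space) is exactly what separates Stage 1 tinkering from Stage 3 systems, but the self-play loop itself is the same shape.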