Researchers in California have achieved a significant breakthrough with an AI-powered system that restores natural speech to paralyzed individuals in real time, using their own voices. The technology was demonstrated in a clinical trial participant who is severely paralyzed and cannot speak.
The innovative technology, developed by teams at UC Berkeley and UC San Francisco, combines brain-computer interfaces (BCI) with advanced artificial intelligence to decode neural activity into audible speech.
Compared with other recent efforts to generate speech from brain signals, this new system marks a major advancement.

AI-powered speech system (Kaylo Littlejohn, Cheol Jun Cho, et al., Nature Neuroscience 2025)
How it works
The system uses devices such as high-density electrode arrays that record neural activity directly from the brain's surface. It also works with microelectrodes that penetrate the brain's surface and with non-invasive surface electromyography sensors placed on the face to measure muscle activity. These devices tap into the brain to measure neural activity, which the AI then learns to transform into the sound of the patient's own voice.
The neuroprosthesis samples neural data from the motor cortex, the brain region that controls speech production, and the AI decodes that data into speech. According to co-lead author Cheol Jun Cho, the neuroprosthesis intercepts signals where the thought is translated into articulation, in the middle of that motor control.
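To make the pipeline concrete, here is a minimal toy sketch of the flow described above: motor-cortex recordings go into a decoder, whose output is rendered in the patient's own voice. Every function name, channel count, and the linear "decoder" itself are illustrative assumptions, not the study's actual model or code.

```python
import numpy as np

N_CHANNELS = 253   # assumed electrode count for a high-density array
N_FEATURES = 100   # assumed size of the intermediate speech representation

rng = np.random.default_rng(42)
# Stand-in for a trained decoder: here just a random linear map.
decoder_weights = rng.normal(size=(N_CHANNELS, N_FEATURES)) * 0.01

def record_motor_cortex(duration_samples: int) -> np.ndarray:
    """Stand-in for the electrode array: returns simulated neural activity."""
    return rng.normal(size=(duration_samples, N_CHANNELS))

def decode_to_speech_features(neural: np.ndarray) -> np.ndarray:
    """Stand-in for the AI decoder mapping neural activity to speech features."""
    return neural @ decoder_weights

def synthesize_in_own_voice(features: np.ndarray) -> np.ndarray:
    """Stand-in for a vocoder personalized to the patient's pre-injury voice;
    collapses features to a single waveform-like track for this sketch."""
    return np.tanh(features).mean(axis=1)

neural = record_motor_cortex(200)
features = decode_to_speech_features(neural)
audio = synthesize_in_own_voice(features)
print(audio.shape)  # one audio value per neural sample in this toy sketch
```

The point of the sketch is the shape of the data flow, not the math: a real system replaces each stand-in with recorded signals, a learned neural network, and a trained voice synthesizer.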

Major progress
- Real-time speech synthesis: The AI-based model synthesizes intelligible speech from brain signals in near-real time, addressing the challenge of latency in speech neuroprostheses. This "streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," according to Gopala Anumanchipalli, co-principal investigator of the study. The model decodes neural data in 80-millisecond increments, enabling continuous, uninterrupted use of the decoder.
- Natural speech: The technology aims to restore natural-sounding speech, allowing more fluent and expressive communication.
- Personalized voice: The AI is trained on recordings of the patient's own voice from before their injury, producing audio that sounds like them. For patients with no residual vocalization, researchers use a pre-trained text-to-speech model together with the patient's pre-injury voice to fill in the missing details.
- Speed and accuracy: The system can begin decoding brain signals and outputting speech within a second of the patient attempting to speak, a significant improvement over the eight-second delay in the team's previous study from 2023.
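The streaming idea in the first bullet can be sketched in a few lines: instead of waiting for a whole utterance, the decoder consumes fixed 80-ms windows as they arrive and emits audio for each one. The sampling rate and the chunk decoder below are assumptions made only to keep the sketch runnable; only the 80-ms increment comes from the study.

```python
import numpy as np

SAMPLE_RATE_HZ = 200          # assumed neural feature sampling rate
CHUNK_MS = 80                 # decoding increment reported in the study
CHUNK_SAMPLES = SAMPLE_RATE_HZ * CHUNK_MS // 1000  # 16 samples per chunk

def decode_chunk(neural_chunk: np.ndarray) -> np.ndarray:
    """Stand-in for the trained decoder: maps one 80 ms window of
    neural activity to a short segment of speech output."""
    # A real decoder is a learned model; averaging across channels
    # is used here only so the sketch runs end to end.
    return neural_chunk.mean(axis=1)

def stream_decode(neural_stream: np.ndarray):
    """Yield a speech segment as each 80 ms increment arrives,
    rather than waiting for the full utterance."""
    for start in range(0, len(neural_stream) - CHUNK_SAMPLES + 1, CHUNK_SAMPLES):
        yield decode_chunk(neural_stream[start:start + CHUNK_SAMPLES])

# Simulate one second of 64-channel neural data.
rng = np.random.default_rng(0)
one_second = rng.normal(size=(SAMPLE_RATE_HZ, 64))

segments = list(stream_decode(one_second))
print(len(segments))  # 12 increments of 80 ms each fit in one second
```

Because each chunk is decoded as soon as it arrives, the first audible output can begin roughly one chunk after the patient starts attempting to speak, which is how a streaming design cuts the multi-second delay of whole-utterance decoding.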

Overcoming challenges
One of the major challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model together with the patient's pre-injury voice to fill in the missing details.

Impact and future directions
This technology has the potential to significantly improve quality of life for people with paralysis and conditions such as ALS, allowing them to communicate their needs, express complex ideas and connect more naturally with loved ones.
"It's exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future," said UCSF neurosurgeon Edward Chang.
Next steps include speeding up the AI's processing, making the output voice more expressive and exploring ways to incorporate variations in tone, pitch and loudness into the synthesized speech. The researchers also aim to decode these paralinguistic features directly from brain activity.
Kurt’s major takeaways
What is truly remarkable about this AI is that it doesn't just translate brain signals into generic speech. It aims for natural speech using the patient's own voice. It's like giving people their voice back, and that is a game changer. It offers new hope for effective communication and renewed connection for many individuals.
What role do you think government and regulatory bodies should play in overseeing the development and use of brain-computer interfaces? Let us know by writing to us at Cyberguy.com/Contact.
For more of my tech tips and security alerts, subscribe to my free CyberGuy Report newsletter at Cyberguy.com/newsletter.
Ask Kurt a question or let us know what stories you'd like us to cover.
Copyright 2025 CyberGuy.com. All rights reserved.