The Guardian - UK
Hannah Devlin Science correspondent

Paralysed woman able to ‘speak’ through digital avatar in world first

A severely paralysed woman has been able to speak through an avatar using technology that translated her brain signals into speech and facial expressions.

The advance raises hopes that brain-computer interfaces (BCIs) could be on the brink of transforming the lives of people who have lost the ability to speak due to conditions such as strokes and amyotrophic lateral sclerosis (ALS).

Until now, patients have had to rely on frustratingly slow speech synthesisers that involve spelling out words using eye tracking or small facial movements, making natural conversation impossible.

The latest technology uses tiny electrodes implanted on the surface of the brain to detect electrical activity in the part of the brain that controls speech and face movements. These signals are translated directly into a digital avatar’s speech and facial expressions, including smiling, frowning and surprise.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” said Prof Edward Chang, who led the work at the University of California, San Francisco (UCSF). “These advancements bring us much closer to making this a real solution for patients.”

The patient, Ann, a 47-year-old woman, has been severely paralysed since suffering a brainstem stroke more than 18 years ago. She cannot speak or type and normally communicates using movement-tracking technology that allows her to slowly select letters at up to 14 words a minute. She hopes the avatar technology could enable her to work as a counsellor in future.

The team implanted a paper-thin rectangle of 253 electrodes on to the surface of Ann’s brain over a region critical for speech. The electrodes intercepted the brain signals that, if not for the stroke, would have controlled muscles in her tongue, jaw, larynx and face.
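
In broad terms, decoders in this kind of system work not on raw voltage traces but on features extracted from them. The sketch below is purely illustrative, not the study’s actual pipeline: it filters a synthetic 253-channel recording into the high-gamma band (70-150 Hz, a range often used as a speech-related feature in this field) and takes its power envelope; the sampling rate and all of the data are invented.

```python
# Illustrative only: extract a high-gamma power envelope from a synthetic
# 253-channel recording. The 70-150 Hz band and 1 kHz sampling rate are
# assumptions for illustration, not details from the study.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000           # assumed sampling rate, Hz
N_CHANNELS = 253    # electrode count reported in the article
N_SAMPLES = FS * 2  # two seconds of synthetic signal

rng = np.random.default_rng(0)
ecog = rng.standard_normal((N_CHANNELS, N_SAMPLES))  # stand-in for real recordings

# Band-pass to the high-gamma range often used as a speech-related feature.
b, a = butter(4, [70, 150], btype="bandpass", fs=FS)
high_gamma = filtfilt(b, a, ecog, axis=1)

# Power envelope via the analytic signal: the per-channel feature stream
# a decoder could consume.
envelope = np.abs(hilbert(high_gamma, axis=1))
print(envelope.shape)  # (253, 2000): channels x time
```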

After implantation, Ann worked with the team to train the system’s AI algorithm to detect her unique brain signals for various speech sounds, repeating different phrases over and over.
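
In outline, that training step amounts to supervised classification: windows of neural features paired with the speech sound being attempted. The toy example below uses invented data and an off-the-shelf classifier purely to show the shape of the problem; the study’s own decoder is a far more sophisticated AI model, not shown here.

```python
# A hedged sketch of the training idea only: fit a classifier that maps
# windows of neural features to one of 39 speech-sound labels. The data,
# model choice and feature shapes are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N_WINDOWS, N_FEATURES, N_SOUNDS = 2000, 253, 39

X = rng.standard_normal((N_WINDOWS, N_FEATURES))  # one feature vector per time window
y = rng.integers(0, N_SOUNDS, size=N_WINDOWS)     # speech-sound label for each window

clf = LogisticRegression(max_iter=1000).fit(X, y)
probs = clf.predict_proba(X[:5])  # per-window probabilities over the 39 sounds
print(probs.shape)                # (5, 39)
```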

The computer learned 39 distinctive sounds, and a ChatGPT-style language model was used to translate the signals into intelligible sentences. This output was then used to control an avatar with a voice personalised to sound like Ann’s before her injury, based on a recording of her speaking at her wedding.
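
The two-stage idea, collapsing a stream of framewise sound predictions into a sequence and then letting a language model pick the likeliest wording, can be shown with a toy example. Everything below (the sound labels, the candidate sentences, the scores) is invented to demonstrate the mechanism; the real system pairs a neural decoder with a much larger language model.

```python
# Illustrative decoding sketch only, with made-up sounds and scores.

def collapse(frames):
    """CTC-style collapse: drop repeats and blanks ('_') from framewise labels."""
    out, prev = [], None
    for f in frames:
        if f != prev and f != "_":
            out.append(f)
        prev = f
    return out

frames = ["_", "HH", "HH", "_", "AH", "AH", "L", "OW", "OW", "_"]
print(collapse(frames))  # ['HH', 'AH', 'L', 'OW'] -> roughly "hello"

# A toy bigram language model resolving between acoustically similar candidates.
bigram_logprob = {("great", "day"): -0.5, ("grate", "day"): -6.0}

def sentence_score(words):
    return sum(bigram_logprob.get(pair, -10.0) for pair in zip(words, words[1:]))

candidates = [["great", "day"], ["grate", "day"]]
best = max(candidates, key=sentence_score)
print(" ".join(best))  # "great day"
```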

The technology was not perfect, decoding words incorrectly 28% of the time in a test run involving more than 500 phrases, and it generated brain-to-text at a rate of 78 words a minute, compared with the 110-150 words a minute typical of natural conversation.
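
The 28% figure is a word-level error rate, a standard metric: the minimum number of word substitutions, insertions and deletions needed to turn the decoded text into the intended text, divided by the intended text’s length. A small worked example, with invented sentences but the article’s own speed figures:

```python
# Only the 28% error rate and the 78 vs 110-150 words-a-minute figures come
# from the article; the example sentences are invented.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i would like some water", "i would like sun water"))  # 0.2

# Speed gap quoted in the article: decoded output reaches roughly 52-71%
# of conversational pace.
print(78 / 110, 78 / 150)
```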

However, scientists said the latest advances in accuracy, speed and sophistication suggest the technology is now at a point of being practically useful for patients.

Prof Nick Ramsey, a neuroscientist at the University of Utrecht in the Netherlands, who was not involved in the research, said: “This is quite a jump from previous results. We’re at a tipping point.”

A crucial next step is to create a wireless version of the BCI that could be implanted beneath the skull.

“Giving people the ability to freely control their own computers and phones with this technology would have profound effects on their independence and social interactions,” said Dr David Moses, an assistant professor in neurological surgery at UCSF and co-author of the research.

The findings are published in Nature.
