Jim Leffman

Paralyzed Woman Speaks Again Thanks To Revolutionary New Technology

Ann suffered a brainstem stroke when she was 30, leaving her paralyzed. NOAH BERGER VIA SWNS.

In a world first, a paralyzed woman has spoken again after her brain signals were intercepted and turned into speech from a talking avatar, complete with facial expressions.

Ann, 48, suffered a brainstem stroke when she was 30, leaving her paralyzed.

Scientists implanted a paper-thin rectangle of 253 electrodes onto the surface of her brain, covering the area critical for speech.

These electrodes intercept the ‘talking’ brain signals, which are fed into a bank of computers via a cable plugged into a port fixed to her head.

The computers can decode the signals into text at a rate of 80 words a minute.

The system then uses an audio recording of Ann speaking at her wedding, made before the stroke, to reproduce her voice, and pairs it with an avatar that produces facial expressions.

“It is the first time that either speech or facial expressions have been synthesized from brain signals,” said the team from the University of California, San Francisco.

Along with colleagues from the University of California, Berkeley, they used artificial intelligence to produce the brain-computer interface (BCI).

Dr. Edward Chang, chair of neurological surgery at UCSF, who has worked on the technology for more than a decade, hopes the breakthrough will lead to a system that enables speech from brain signals in the near future.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” said Dr. Chang.

“These advancements bring us much closer to making this a real solution for patients.”

Had they not been intercepted by the electrodes, the signals from her brain would have gone to muscles in her tongue, jaw and larynx, as well as her face.

For weeks, Ann, who did not want to reveal her surname, worked with the team to train the system’s artificial intelligence algorithms to recognize her unique brain signals for speech.

This involved repeating different phrases from a 1,024-word conversational vocabulary over and over again until the computer recognized the brain activity patterns associated with the sounds.

Rather than train the AI to recognize whole words, the researchers created a system that decodes words from phonemes.

These are the sub-units of speech that form spoken words in the same way that letters form written words. “Hello,” for example, contains four phonemes: “HH,” “AH,” “L” and “OW.”

Using this approach, the computer only needed to learn 39 phonemes to decipher any word in English. This both enhanced the system’s accuracy and made it three times faster.
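The mechanics of phoneme-to-word decoding can be illustrated with a short sketch. The toy pronunciation dictionary and greedy matcher below are illustrative assumptions, not the study’s actual decoder, which used trained AI models; the phoneme labels follow the ARPAbet-style notation of the “HH AH L OW” example above, and the names LEXICON and decode are hypothetical.

# A minimal sketch of decoding words from a phoneme stream.
# Toy pronunciation dictionary: word -> phoneme sequence.
LEXICON = {
    "hello": ("HH", "AH", "L", "OW"),
    "how": ("HH", "AW"),
    "are": ("AA", "R"),
    "you": ("Y", "UW"),
}

def decode(phonemes):
    """Greedily match the longest known phoneme sequence at each position."""
    words, i = [], 0
    while i < len(phonemes):
        match = None
        for word, pron in LEXICON.items():
            if tuple(phonemes[i:i + len(pron)]) == pron:
                if match is None or len(pron) > len(LEXICON[match]):
                    match = word
        if match is None:
            i += 1  # skip an unrecognized phoneme rather than stall
        else:
            words.append(match)
            i += len(LEXICON[match])
    return " ".join(words)

print(decode(["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]))
# -> "hello how are you"

What the sketch makes concrete is the design point the researchers describe: the decoder only ever has to distinguish 39 phoneme classes, while the vocabulary lives in the dictionary rather than in the recognition model.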

Sean Metzger, who developed the text decoder in the joint Bioengineering Program at UC Berkeley and UCSF, said: “The accuracy, speed and vocabulary are crucial.

“It’s what gives a user the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations.”

The team developed an algorithm to reproduce the sound of her voice using the wedding recording and animated the avatar using software that simulates and animates muscle movements of the face.

They also developed a customized machine-learning process that allowed the facial-animation software to mesh with signals being sent from her brain.

This converted them into the movements of the avatar’s face, making the jaw open and close, the lips protrude and purse, and the tongue go up and down, as well as producing the facial movements for happiness, sadness and surprise.

Graduate student Kaylo Littlejohn added: “We’re making up for the connections between the brain and vocal tract that have been severed by the stroke.

Ann uses a digital link wired to her cortex to interface with an avatar. NOAH BERGER VIA SWNS.

“When the subject first used this system to speak and move the avatar’s face in tandem, I knew that this was going to be something that would have a real impact.”

The team is now working on a wireless version that would free the user from being physically connected to the computers.

Co-first author Dr. David Moses, an adjunct professor in neurological surgery, said: “Giving people the ability to freely control their own computers and phones with this technology would have profound effects on their independence and social interactions.”

The current study, published in the journal Nature, adds to previous research by Dr. Chang’s team in which they decoded brain signals into text in a man who had also had a brainstem stroke many years earlier.

But now they can decode the signals into the richness of speech, along with the movements that animate a person’s face during conversation.

In a separate study, also published in Nature, another method has been devised to allow a disabled patient to ‘speak’ in text.

Pat Bennett, 68, a former human resources director and onetime equestrian who jogged daily, developed amyotrophic lateral sclerosis, a neurodegenerative disease that will eventually leave her paralyzed.

The disease has left her unable to speak, but she has now had four baby-aspirin-sized sensors implanted in her brain by a team from Stanford Medicine.

The devices transmit signals from two speech-related regions in her brain to state-of-the-art software that decodes her brain activity and converts it to text displayed on a screen.

The sensors are components of an intracortical brain-computer interface, or iBCI.

UCSF clinical research coordinator Max Dougherty connects a neural data port in Ann’s head. NOAH BERGER VIA SWNS.

Combined with the decoding software, they are designed to translate the brain activity accompanying attempts at speech into words on a screen.

The scientists trained the software to interpret her attempted speech, and after four months her utterances were being converted into words on a computer screen at 62 words per minute.

This was more than three times as fast as the previous record for BCI-assisted communication.

Dr. Jaimie Henderson, who performed the surgery, said: “We’ve shown you can decode intended speech by recording activity from a very small area on the brain’s surface.”

Mrs. Bennett’s pace has begun to approach the 160-word-per-minute rate of natural conversation.

When the vocabulary was expanded to 125,000 words, enough to cover most of the English language, the error rate was 23.8 percent.
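The 23.8 percent figure is a word error rate. A standard way to compute such a rate, sketched below, is the edit distance (substitutions, insertions and deletions) between the decoded transcript and the intended sentence, divided by the length of the intended sentence; whether the Stanford team computed it in exactly this way is an assumption, and the function name and example sentences are invented for illustration.

def word_error_rate(reference, decoded):
    """Edit distance over words, divided by reference length."""
    ref, hyp = reference.split(), decoded.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i would like some water", "i would like sun water"))
# -> 0.2 (one substituted word out of five reference words)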

While the team admits this is far from perfect, it is a giant step beyond prior attempts.

Dr. Frank Willett, who led some of the research, said: “This is a scientific proof of concept, not an actual device people can use in everyday life.

“But it’s a big advance toward restoring rapid communication to people with paralysis who can’t speak.”

Bennett wrote: “Imagine how different conducting everyday activities like shopping, attending appointments, ordering food, going into a bank, talking on a phone, expressing love or appreciation or even arguing will be when nonverbal people can communicate their thoughts in real-time.”

Produced in association with SWNS Talker

Edited by Judy J. Rotich and Newsdesk Manager
