Mark Waghorn

How Do Our Brains Process Speech In Noisy Environments?

(Illustration via SWNS)

Our brains process speech differently when people are talking in a crowded room, according to new research.

Neurons light up when some voices are being drowned out by a cacophony of sounds.

It explains why we struggle to keep track of more than one conversation at a dinner party.

And the finding could lead to the development of better hearing aids, say scientists.

Lead author Vinay Raghavan, a Ph.D. student at Columbia University in New York, said, “When listening to someone in a noisy place, your brain recovers what you missed when the background noise is too loud.

“Your brain can also catch bits of speech you aren’t focused on, but only when the person you’re listening to is quiet in comparison.”

The study in the journal PLOS Biology used a combination of neural recordings and computer modeling.

It sheds light on how concentrating on a family member’s conversation becomes difficult when the television is on.

Focusing is hard, especially when competing voices are louder.

But amplifying all sounds equally does little to improve the ability to isolate these hard-to-hear voices.

Hearing aids that try to only amplify particular voices are still too inaccurate for practical use.

To gain a better understanding, the U.S. team recorded neural activity from electrodes implanted in the brains of patients with epilepsy as they underwent neurosurgery.

Participants were asked to attend to a single voice. When that voice was louder than a competing one, the researchers called it “glimpsed” speech; when it was quieter, “masked” speech.

The recordings were used to build predictive models of brain activity. These showed that sound information from “glimpsed” speech was encoded in both the primary and secondary auditory cortex, and that encoding of the attended voice was enhanced in the secondary cortex.
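
The article doesn’t reproduce the study’s modeling pipeline, but the standard tool behind predictive models like these is a linear “encoding model” (a temporal response function) fit with ridge regression, which tests whether a speech feature helps predict a neural signal. The sketch below is a minimal illustration on entirely synthetic data; the feature choice, lag window, and regularization are assumptions, not the study’s settings.

```python
# Minimal sketch of a linear encoding model (temporal response function).
# All signals are synthetic stand-ins for a speech envelope and an electrode.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_samples = 5000
n_lags = 40                        # ~0-400 ms of stimulus history at 100 Hz

# Stand-in for the acoustic envelope of the attended ("glimpsed") speech:
# rectified noise smoothed with a short moving average.
raw = np.abs(rng.standard_normal(n_samples))
envelope = np.convolve(raw, np.ones(10) / 10, mode="same")

# Lagged design matrix: column k holds the envelope delayed by k samples.
X = np.stack([np.roll(envelope, k) for k in range(n_lags)], axis=1)
X[:n_lags] = 0.0                   # remove np.roll wrap-around

# Synthetic "electrode": the envelope passed through a decaying response
# function, plus noise. A real analysis would use the recorded signal.
true_trf = np.exp(-np.arange(n_lags) / 10.0)
neural = X @ true_trf + rng.standard_normal(n_samples)

# Fit the encoding model and score its prediction on held-out data.
# A reliable positive correlation is the evidence that the feature
# (here, the envelope) is encoded in that recording site.
split = n_samples // 2
model = Ridge(alpha=10.0).fit(X[:split], neural[:split])
r = np.corrcoef(model.predict(X[split:]), neural[split:])[0, 1]
print(f"held-out prediction correlation: r = {r:.2f}")
```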

But “masked” speech was only encoded when it belonged to the attended voice, and that encoding occurred later than for “glimpsed” speech.

Focusing on deciphering only the “masked” portion of attended speech opens the door to improved auditory attention-decoding systems for brain-controlled hearing aids.
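
To make the hearing-aid idea concrete: the textbook version of auditory attention decoding reconstructs a speech envelope from neural activity, correlates it against each competing talker’s envelope, and amplifies the best match. The toy sketch below assumes that setup with synthetic signals; it is illustrative of the general technique, not the paper’s pipeline.

```python
# Toy sketch of auditory attention decoding for a brain-controlled hearing
# aid: pick the talker whose envelope best matches a neurally reconstructed
# envelope, then boost that talker in the output mix.
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Envelopes of two competing talkers (synthetic stand-ins).
talkers = {"A": np.abs(rng.standard_normal(n)),
           "B": np.abs(rng.standard_normal(n))}

# Pretend an upstream decoder reconstructed a noisy copy of talker A's
# envelope from neural recordings, i.e. the listener is attending to A.
reconstructed = talkers["A"] + 0.8 * rng.standard_normal(n)

def decode_attention(reconstructed, talkers):
    """Return the talker whose envelope best correlates with the
    reconstruction, along with all correlation scores."""
    scores = {name: float(np.corrcoef(reconstructed, env)[0, 1])
              for name, env in talkers.items()}
    return max(scores, key=scores.get), scores

attended, scores = decode_attention(reconstructed, talkers)
print(f"decoded attended talker: {attended}, correlations: {scores}")

# A hearing aid would then boost the decoded talker in its output mix.
gains = {name: (4.0 if name == attended else 0.5) for name in talkers}
```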

Co-lead author Dr. Nima Mesgarani said, “The results shown here present a novel understanding of speech perception in a multi-talker environment.

“They can provide useful insight into the difficulties faced by hearing-impaired listeners.

“They present new possibilities for developing assistive neurotechnology to aid perception.”

Mesgarani added, “These findings suggest separate mechanisms for encoding glimpsed and masked speech and provide neural evidence for the glimpsing model of speech perception.”


Produced in association with SWNS Talker

Edited by Kyana Jeanin Rubinfeld and Jessi Rexroad Shull
