
Tempers flare over emotion-sensing AI

Software that uses machine learning to attempt to detect human emotions is emerging as the latest flashpoint in the debate over the use of artificial intelligence.

Why it matters: Proponents argue that such programs, when used narrowly, can help teachers, caregivers and even salespeople do their jobs better. Critics say the science is unsound and the use of the technology dangerous.

Driving the news: Though emotion-tracking technology has been evolving for a while, it's rapidly moving into broader use now, propelled in part by the pandemic-era spread of videoconferencing.

  • Startups are deploying it to help sales teams assess customers' responses to their pitches, and Zoom could be next, as Protocol reports.
  • Intel has been working with Classroom Technologies on education software that can give teachers a better sense of when students working online are struggling.

Between the lines: Critics have been sounding alarms over mood-detection tech for some time.

  • Spotify faced criticism last year after applying for a patent on a technique for determining a person's mood and gender from their speech.

What they're saying: "Emotion AI is a severely flawed theoretical technology, based in racist pseudoscience, and companies trying to market it for sales, schools, or workplaces should just not," Fight for the Future's Caitlin Seeley George said in a statement to Axios.

  • "It relies on the assumption that all people use the same facial expressions, voice patterns, and body language. This assumption ends up discriminating against people of different cultures, different races, and different abilities."
  • "The trend of embedding pseudoscience into 'AI systems' is such a big one," says Timnit Gebru, the pioneering AI ethicist forced out of Google in December 2020. Her comments came in a tweet last week critical of claims by Uniphore that its technology could examine an array of photos and accurately classify the emotions represented.

The other side: Those working on the technology say it's still in its early stages but can be a valuable tool if applied only to very specific cases and sold only to companies that agree to limit its use.

  • With sufficient constraints and safeguards, proponents say the technology can help computer systems better respond to humans. It's already used, for example, in automated call systems to transfer callers to a human operator when the software detects anger or frustration.
  • Intel, whose researchers are studying how emotion-detecting algorithms could help teachers identify which students might be struggling, defended its practices and said the technology is "rooted in social science."
  • "Our multi-disciplinary research team works with students, teachers, parents and other stakeholders in education to explore how human-AI collaboration in education can help support individual learners’ needs, provide more personalized experiences and improve learning outcomes," the company said in a statement to Axios.

Yes, but: Even some who are actively working on the technology worry about how others could misuse it.
