AI Facial Recognition Error Causes Wrongful Arrest in Detroit

Police use of AI facial recognition to identify crime suspects raises concerns about racial bias.

Detroit resident Porcha Woodruff found herself in an unimaginable situation when police officers knocked on her door and accused her of carjacking and robbery. Pregnant and in front of her children, Woodruff was handcuffed and spent 11 hours in jail for a crime she didn't commit. She later learned that her wrongful arrest stemmed from artificial intelligence (AI) facial recognition software, which had incorrectly matched her mugshot to surveillance video of the suspect.

Detroit's police chief defended the technology, instead blaming the officers handling the case and saying that poor investigative work led to the error. However, many AI experts, such as Georgia Tech professor and researcher Matthew Gombolay, disagree, arguing that the software is not yet refined enough for reliable deployment in police departments.

The margin of error in AI facial recognition can have significant consequences, especially given that Georgetown Law researchers estimate that one in two American adults is unknowingly included in a facial recognition network. While these systems can seem accurate at small scales, when applied across a population of 300 million, even a small error rate magnifies into serious real-life repercussions for innocent individuals.
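The scale argument above is back-of-envelope arithmetic, and a minimal sketch makes it concrete. The 300 million population figure comes from the article; the 0.1% false-positive rate below is a hypothetical chosen purely for illustration, not a measured property of any deployed system.

```python
# Back-of-envelope sketch: even a tiny per-comparison false-positive
# rate produces many false matches when a probe image is compared
# against a population-scale database.

def expected_false_matches(population: int, false_positive_rate: float) -> int:
    """Expected number of innocent people flagged in a single search."""
    return round(population * false_positive_rate)

population = 300_000_000   # figure cited in the article
fpr = 0.001                # hypothetical 0.1% false-positive rate

print(expected_false_matches(population, fpr))  # prints 300000
```

A rate that sounds excellent in isolation ("99.9% accurate") still implies hundreds of thousands of potential false matches at national scale, which is the core of the experts' concern.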

A 2019 U.S. government study found that even the best-performing facial recognition algorithms misidentified Black and Asian faces 10 to 100 times more often than white faces. The problem is exacerbated by the fact that predominantly white development teams often train the software largely on white faces, so it identifies white faces far more accurately.

Moreover, because Black individuals are overrepresented in mugshot databases, the software is statistically more likely to flag a Black person as a suspect, introducing racial bias into the AI's output. Calvin Lawrence, who has worked in AI since the 1990s, supports the technology but acknowledges its flaws, stating, 'Absolutely. I stand by that. But recently, I think I wrote the book out of guilt.'

In light of these racial disparities, AI experts are urging the public to press lawmakers and law enforcement to properly vet and train these systems and prevent further unjust arrests.
