
Hello and welcome to Eye on AI. In today’s edition…Police are running wild with flawed facial recognition systems and no one is stopping them; Biden signs another AI executive order; Lawsuit documents show how Meta planned to train models on copyrighted material—and cover it up; and OpenAI trades newsroom funding for content.
In last Thursday's newsletter, I posed the question: If AI tools are misused, when should we blame the companies building them? Today’s story deals with a situation where we have no one to blame but ourselves. Or more specifically, our institutions.
An investigation published this week in the Washington Post details how police across the U.S. are using flawed AI facial recognition systems to wrongfully arrest and jail Americans. According to the report, 15 police departments across 12 states ignored municipal standards and arrested suspects identified through facial recognition without other evidence. In most cases, they also failed to gather basic information like alibis and ignored contradictory evidence, even DNA and fingerprints that pointed to other suspects. The report features the stories of eight people who have been wrongfully arrested based solely on facial recognition, but notes that the total number of people affected is impossible to know.
This is a case where, yes, the technology fails. We know it does. But the bigger failure belongs to the police who rely on it anyway, and to the lawmakers and government agencies that let them.
The flaws of facial recognition—and lazy policing
A consistent stream of studies has shown that facial recognition systems have higher error rates for Black and dark-skinned people than for white people. One of the most comprehensive, a 2019 NIST study that evaluated 189 facial recognition algorithms from 99 developers, found that Black and Asian people were up to 100 times more likely to be misidentified by the technology than white men. AI systems have improved over the years, but that is a significant bias, and there's no evidence it has been adequately corrected.
Aside from the algorithms themselves, the datasets they're matched against introduce biases, too. A 2020 ACLU study found that because Black people are more likely to be arrested for minor crimes than white people, their faces are more likely to appear in the mugshot databases used in facial recognition searches, and thus more likely to be scanned and selected by the algorithms. The study also found that police surveillance cameras are disproportionately installed in Black and brown neighborhoods, further increasing the chances residents will be wrongfully matched. Of the eight confirmed wrongful arrests detailed in the Post investigation, seven of the people are Black.
The current issues with the use of facial recognition in policing go far beyond these biases, however. The Post found that some law enforcement officers using the technology appeared to “abandon traditional policing standards and treat software suggestions as facts.” In some instances, they referred to uncorroborated AI results as a “100% match” or stated that the software “immediately and unquestionably” identified the perpetrator. Across the eight cases of wrongful arrest, police neglected to check alibis in six, failed to collect key pieces of evidence in five, ignored suspects’ physical characteristics in three, and ignored contradictory evidence in two. They also used facial recognition results to lead witnesses and victims into problematic statements and identifications. In one case, they even ignored DNA evidence and fingerprints that pointed to another suspect.
In many cases, the standards governing how law enforcement officials use facial recognition are minimal; in others, they simply are not enforced. Most police departments are not required to report when the technology is used, and few keep records of its use. While many departments bar officers from arresting suspects based solely on facial recognition matches, many officers do so anyway, the Post found. That’s where the lawmakers and government agencies that oversee police come in.
A lack of laws and accountability
Many of the earliest laws addressing AI were city and state laws banning the use of facial recognition in policing. Yet some jurisdictions, including Virginia and New Orleans, have already reversed these bans. In other cases, police officers simply find ways to skirt the laws, such as asking police in other jurisdictions or agencies to run searches for them. Officers also conceal their use of the technology. Even when a facial recognition match leads to an arrest, the result typically isn't presented as evidence in court, because police and prosecutors consider it an investigative lead rather than proof of guilt. Often, defendants have no idea they were identified by AI because prosecutors are not required to tell them.
Many privacy and civil rights advocates argue that facial recognition should not be used in policing at all. At the very least, they say, there should be stringent laws governing how it's used, such as mandates that its use always be disclosed and that there be sufficient evidence for an arrest even without the facial recognition results. The U.S. has not passed any federal laws regulating AI, let alone any addressing facial recognition. The EU AI Act, enacted last year, was set to ban the use of facial recognition in law enforcement, but a last-minute change legalized its use after all, banning it only on live surveillance footage.
Even more importantly, facial recognition is far from the only AI technology police are adopting. As I covered in Eye on AI last year, they're also using voice analysis algorithms that claim to detect fraud risk from speech, AI systems that connect every camera in a town into a single hub, and tools that purport to aggregate and analyze data, including data from social media, jail systems, video feeds, and court records, to generate insights. Without laws regulating their use, and without clear transparency and accountability, we should assume law enforcement will misuse these tools too.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com