Over the past few years, activists have raised the alarm about the growing use of facial recognition technology in the United States. Many say that in the hands of law enforcement agencies, the technology raises serious surveillance concerns, especially amid growing social movements. Recently, a government watchdog report confirmed that multiple agencies used facial recognition to track George Floyd protesters. Even this report, however, barely scratches the surface of facial recognition's use.
As a technology, facial recognition, a method of biometric identification, can't really be separated from surveillance. Day to day, you may use it knowingly, letting it scan your face in photos or videos. But a map by digital rights advocacy group Fight for the Future shows just how widespread the practice is when you don't even realize it's happening. If you're in the U.S., your face has likely been scanned dozens of times without your consent.
With this in mind, most people already assumed that facial recognition was used to monitor protesters following the police murder of Floyd. In May, BuzzFeed News reported on the breadth of surveillance technologies that the Minneapolis Police Department alone could use to track protesters. That department, as well as the Hennepin County Sheriff's Office and the Minnesota Fusion Center, already had a history of using Clearview AI, the company perhaps best known for scraping billions of photos from the internet to power the services it sells to law enforcement.
So, the U.S. Government Accountability Office's 92-page report, which surveyed 42 federal agencies, isn't necessarily saying anything brand new. Per the GAO, six agencies self-reported using facial recognition on images from the uprisings that followed Floyd's murder. Each agency said it used the technology between May and August 2020, when protests were at their height.
These agencies include no-brainers like the FBI, which had set up a tip line seeking digital media from the protests. Other culprits, however, may surprise you. For example, the report stated that the U.S. Park Police used facial recognition on an image from Twitter to charge someone with felony civil disorder and two counts of assault on a police officer. The U.S. Postal Inspection Service; the U.S. Marshals Service; the Bureau of Alcohol, Tobacco, Firearms, and Explosives; and the U.S. Capitol Police also all reported using the technology.
If that isn't frightening enough, the report further confirms what many activists have said before: the use of facial recognition is poorly regulated. Per the GAO, 14 agencies reported using third-party companies to conduct facial recognition services. Only one agency — Immigration and Customs Enforcement — could specifically name which service it was using.
"The other 13 agencies do not have complete, up-to-date information because they do not regularly track this information and have no mechanism in place to do so," the report said. In fact, the IRS's Criminal Investigation Division flat out told the GAO that it doesn't track non-federal systems that employees use "because it is not the owner of these technologies."
There are some issues with the report, though. Five agencies — the Capitol Police, the U.S. Probation Office, the Pentagon Force Protection Agency, the Transportation Security Administration, and the Criminal Investigation Division of the IRS — claimed that they didn't use Clearview AI between April 2018 and March 2020. On Tuesday, BuzzFeed News reported that data the outlet had previously reviewed showed otherwise. Per internal data, each of these agencies was among the more than 1,800 U.S. taxpayer-funded entities whose employees tried or used Clearview AI.
Os Keyes, a Ph.D. candidate at the University of Washington, told BuzzFeed News that the discrepancy "highlights the limits of the GAO, and who has power here." They continued, "I think it speaks to the fact that the GAO analysis ... is ultimately playing catch-up, and in a domain where … people are not documenting the technologies they use, the regulations they put around them, or the processes for accessing them."
To be clear, this doesn't mean that putting laws into place to regulate the use of facial recognition is the solution. First, the technology itself is known to be garbage; if you're not a white, cis man, well, good luck. In 2019, researchers testing popular services from Amazon, Clarifai, IBM, and Microsoft found the programs couldn't classify transgender or nonbinary people. Another 2019 study found that Amazon's service, Rekognition, often classified dark-skinned women as men. Researcher Joy Buolamwini had to literally put on a white mask for some software to even detect her face.
Second, even properly regulated facial recognition is fundamentally a surveillance technology. Making it work better just ensures that law enforcement will have an easier time tracking down protesters and people involved in social movements. That's why it's important to note that law enforcement agencies have been allowed to use facial recognition basically however and whenever they want: it alerts you to the dangerous reality we're currently living in.