Petra Molnar is an anthropologist and attorney focused on human rights and migration. Molnar, who is based in Toronto, serves as the associate director of York University’s Refugee Law Lab and as a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society. She has worked on migrant justice causes since 2008, first assisting directly with families resettling in Canada and now as a lawyer and researcher. She is the author of The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, a book published by The New Press in May exploring surveillance technology along borders across the world, including at the U.S.-Mexico divide. Molnar spoke with the Texas Observer about surveillance tech and borders as a testing ground.
TO: Could you tell me a little bit about the different places you visited for your new book?
This book first started in Canada, but then I ended up traveling and living in Greece for three years, trying to understand how different refugee camps are getting more technology and how biometrics are used. Then I also did some work at the Polish-Belarusian border and in other parts of Europe, but I broadened it out to try to understand the kind of data colonialism at play at the Kenya-Somalia border, in the occupied Palestinian territories, and also at the U.S.-Mexico border. Specifically, I’ve been working in Arizona out of Tucson, but then also in places like Nogales and Sasabe, and working closely with search-and-rescue groups there.
What is data colonialism?
That’s one of the underpinnings behind this whole story—the fact that our world is built on data now. An amazing colleague of mine, Mariam Jamal, a digital rights activist in Kenya, had this great phrase—“Data is the new oil.” That is precisely what we’ve been seeing. Western nations like the United States and Canada, and countries across Europe, need a lot of data subjects to power the way technology is developed and deployed, and that kind of replicates colonial power. So countries on the African continent or in the Middle East end up being subjects on whom technologies are tested or from whom data is extracted.
What does that testing of technology look like on the U.S.-Mexico border?
The U.S.-Mexico border is an interesting case study because it is one of the crucial sites where smart border tech is being tested out. The border itself is already a really interesting and important place to look at, because legally speaking, it’s very opaque, very discretionary. Officers can make all sorts of decisions. This is the kind of zone where new technologies of surveillance are being tested without public scrutiny, accountability, or even knowledge. We’re talking about traditional surveillance, like drones, cameras, and sensors in the ground, but also draconian projects like the robot dogs announced in 2022 by the Department of Homeland Security, which are now joining the global arsenal of migration management tech.
What happens at the border is this kind of laboratory where things are tested out and then proliferate into other spaces—even with these robot dogs. A year after they were announced, the New York City Police Department proudly unveiled that it would be using robo-dogs on the streets of New York. One even had black spots on it, like a Dalmatian.
You said that there isn’t a lot of oversight of border tech. Could you talk about that more in the context of technologies being tested at the U.S.-Mexico border?
One project that comes to mind is the CBP One facial recognition application that has been rolled out over the last few years for what officials say is the purpose of streamlining the system. When a person arrives, they have to download this application, which uses facial recognition technology, data collection, and so on, on their phone to be able to enter the system and get an appointment.
It sounds like a simple application on paper, but people have been documenting its discriminatory effects on people with darker skin. It crashes people’s phones. People don’t know where their data is going.
So many of these technologies are rolled out without any kind of discussion. It’s unclear what kind of human rights impact assessments have been done. Have they talked to human rights lawyers or refugee lawyers about what is actually needed on the border? It again highlights that the border is like a free-for-all, this frontier zone that is a perfect laboratory for tech experimentation, because it’s hard to know what happens there.
For you as a journalist, for me as a human rights lawyer, we find out about things after the fact, or once they’ve already been rolled out. There isn’t this commitment to oversight and accountability in these spaces at all—because there doesn’t have to be.
Could you talk more about how surveillance plays into what some people call the border-industrial complex? Who are the major players, and who benefits?
That’s such an important piece of the puzzle here—the proliferation of what people have called a very lucrative global border-industrial complex. We’re talking billions of dollars being spent on border technologies, and also military-grade technologies that are then repurposed for the border, like the robot dogs. The private sector is a major player in this whole story, because they’re the ones who set the agenda on what we innovate on, and why—especially if there’s money to be made in this kind of securitization of the border.
It’s no accident that we’re developing robo-dogs, AI lie detectors, and surveillance to test out on people crossing borders, and not using AI to audit immigration decision-making or root out racist border guards. That’s a particular set of decisions that a powerful set of actors is making, because there’s a bottom line to meet, and money to be made.
There are the kinds of companies that readers may be aware of: Palantir, Cellebrite, and Elbit Systems, an Israeli company that has put up surveillance towers in the [Sonoran Desert] that were first tested out in Palestine. But there’s also a whole host of other small and medium-sized companies that proliferate in this space as well. From a legal perspective, we’re also dealing with a complication: in these public-private partnerships, a public entity and a private entity operate in different legal spheres. A company that develops a product and sells it to a state agency can say, “It’s not our problem that people are being hurt by it, because we just developed the product. We’re not the ones using it.”
On the other hand, the public sector can say, “Well, we didn’t develop it. This is the private sector’s problem.” Then you end up with this kind of vacuum in the middle where people’s rights are being violated, but the responsibility isn’t exactly clear. Who’s actually responsible when things go wrong? There’s no incentive to regulate this technology if you make a lot of money out of it; that’s really the bottom line here.
Does surveillance in border communities along the U.S.-Mexico border impact people in the interior of the country?
This tech doesn’t just stay at the border. Not only does it become normalized and used in other areas of public life, like the robot dogs now patrolling the streets of New York City, but there is also facial recognition in public spaces, including in sports stadiums, and surveillance of protesters.
A lot of this technology is first developed and deployed for border purposes, normalized and then repurposed in other spaces. There’s also surveillance that happens inland, of course. There’s all sorts of license plate reader technology, different types of facial recognition tools, carceral technologies, that are used both in the criminal justice system and in the immigration detention system. It is this kind of surveillance dragnet that extends further and further inland and ensnares entire communities.
Could you talk a little bit about how surveillance tech plays a role in the so-called externalization of borders from the Global North to the Global South? How does that apply at the U.S.-Mexico border?
Externalization is a really important piece of this puzzle, too. This is the phenomenon where the border stops being a physical location and is extended further, kind of disaggregated from its actual physical location—not only vertically into the skies through drones and surveillance but also horizontally. The U.S. has for years been pushing its border farther and farther south. The whole logic behind this is that if a country can prevent people from even reaching its territory, then the job is done, right? If the whole point is to strengthen or close the borders, then externalization does that job for you, because people can’t even arrive in your territory.
The tension here is that a lot of Western states like to present themselves as being very human rights-forward. They are the ones who signed and ratified all the agreements, like the Refugee Convention. But in order for that to work, the international refugee protection regime has to allow people to leave their country and arrive in a country of refuge where they can then claim asylum. If you close the border and then push the border away to make it even more difficult for people to come, that actually infringes on this fundamentally protected right to asylum. That is illegal under international refugee law. The U.S. is a signatory to the Refugee Convention, and closing a border and preventing people from seeking asylum is in direct contravention of the principles and laws the U.S. supposedly upholds.
What do you foresee for the future of border surveillance, for borderlands across the world?
I’ll give you the pessimistic answer first, and then the optimistic one. I think the trend is more surveillance, sharper technology, insufficient regulation. This past period has been a really crucial one. For example, the European Union put out its big AI Act to regulate artificial intelligence. There are talks about regulation in the U.S., Canada, and other countries—but a lot of these instruments are very weak. When it comes to border surveillance, some of us were hoping that there would be some really strict guidelines and maybe even bans or moratoria on some of the really draconian technology. But unfortunately, that’s not the case. There’s a lot of money to be made. The likelihood is that there’s no incentive to regulate. The incentive is to create more technology, more algorithms, more AI.
The optimist in me, though, has seen more and more conversations, led by affected communities, about what this is really doing on the ground—and people finding ways to break through these silos that we all work in, find common ground, and say, “No, this is not the society we want to live in.” We want to actually have a world that is not led by technocrats or the private sector, but [we want to] maybe use technologies to empower communities for psychosocial support, social support, support for information sharing, and really push governments to think about the human impact of this. I do see that trend as well.
This interview has been edited for length and clarity.