
Why are Western governments increasingly delegating border control to AI?

by Filip Noubel, Global Voices

Activists estimate that in 2022, 30 million people were on the move as refugees, many of whom attempted to seek protection in the US and the European Union. But what they often experience when entering Western countries is not protection, but rather a dehumanizing process of categorization that relies heavily on AI and unchecked technology.

Global Voices conducted an email interview with Petra Molnar, a lawyer and anthropologist specializing in technology, migration, and human rights. Molnar is the co-creator of the Migration and Technology Monitor, a collective of civil society, journalists, academics, and filmmakers interrogating technological experiments on people crossing borders. She is also the Associate Director of the Refugee Law Lab at York University and a Fellow at the Berkman Klein Center for Internet & Society at Harvard University. She is currently writing her first book, “Artificial Borders” (The New Press, 2024).

Filip Noubel (FN): Your research shows that refugee and detention camps — often spaces of lawlessness — serve to test new technologies. Could you give some examples? 

Petra Molnar (PM): Borders and spaces of humanitarian emergency like refugee camps have increasingly become testing grounds for new migration-control technologies. Since 2018, I have been spending time with people who are at the sharpest edges of technological innovation. From the Arizona desert at the US/Mexico border to the Kenya-Somalia border, to various refugee camps in the EU, we have seen first-hand the impacts of border surveillance and automation on people’s lives.

Before you even cross a border, you may be subject to predictive analytics used in humanitarian settings or biometric data collection. At the border, you can see drone surveillance, sound cannons, and thermal cameras. If you are in a European refugee camp, you will interact with algorithmic motion detection software. You may be subject to projects like voice printing technologies and the scraping of your social media records. Borders themselves are also changing, as surveillance expands our understanding of the European border beyond its physical limits, creating a surveillance dragnet that reaches as far as North and Sub-Saharan Africa and the Middle East.

These experimental and high-risk technologies are deployed in an environment where technology is presented as a viable solution to complex social issues, creating the perfect conditions for a lucrative, multi-billion-euro border industrial complex.

In my fieldwork, I notice that people share feelings of being constantly surveilled, of being reduced to data points and fingerprints. Many point out how strange it seems that vast amounts of money are poured into high-risk technologies while they cannot get access to a lawyer or psychosocial support. There is also a misapprehension at the centre of many border tech projects – that somehow more technology will stop people from coming. But that is not the case; in fact, people will be forced to take more dangerous routes, leading to even more loss of life at the world’s borders.

FN: Innovation is often framed as something positive, yet certain tech companies are involved in testing new technologies on refugees. Why do certain governments allow this?

PM: The legal black holes in migration management technologies are deliberate, allowing opaque zones of technological experimentation that would not be permitted in other spaces. Why does the private sector get to determine what we innovate on and why, in often problematic public-private partnerships which states are increasingly keen to enter in today’s global AI arms race? Private companies like Palantir Technologies, Airbus, Thales, and others with links to a host of human rights abuses have become the preferred vendors for various governments and are even working with international organizations like the World Food Programme.

FN: Documenting violations is in itself a huge challenge. Can you explain why?

PM: Trying to document these systems of technological oppression is itself a risky business – one fraught with unravelling opaque decisions, confronting secretive private sector players, and witnessing horrific conditions on the frontiers that challenge our common humanity. It’s also about asking broader questions. Are human rights framings enough, or do they also silence the systemic and collective nature of these harms? And are we doing enough to create space for abolitionist conversations when it comes to technology at the border?

In order to tell this global story of power, violence, innovation, and contestation, I rely on the sometimes-uneasy mix between law and anthropology. It’s a slow and trauma-informed ethnographic methodology, one which requires years of being present to begin unravelling the strands of power and privilege, story and memory that make up the spaces where people’s lives unfold.

Technology replicates power structures in society. Unfortunately, the viewpoints of those most affected are routinely excluded from the discussion. We also need to recognize that the use of technology is never neutral. It is a political exercise, one which highlights how the allure of quick fixes and the hubris of innovation do not address the systemic and historical reasons why people are marginalized and why they are forced to migrate in the first place.

FN: How can we push back? 

PM: At the Refugee Law Lab, we are trying both to shine a light on the human rights abuses in the use of technology at the border and to look at technical solutions to these complex problems.

One of the main issues is that little to no regulation exists to govern the development of high-risk border tech. When things go wrong with these high-risk experiments, where does responsibility and liability lie – with the designer of the technology, its coder, the immigration officer, or the algorithm itself? Should algorithms have legal personality in a court of law, not unlike a corporation? It is paramount that we begin to answer these questions, since much of the decision-making in immigration and refugee matters already sits at an uncomfortable legal nexus: the impact on the rights of individuals is significant and life-changing, yet procedural safeguards are weak.

The EU’s proposed AI Act is a promising step, as it will be the first regional attempt in the world to regulate AI. However, the act currently does not go far enough to adequately protect people on the move. A moratorium or a ban on high-risk border technologies, like robo-dogs, AI lie detectors, and predictive analytics used for border interdictions, is a necessary step in the global conversation. Academia also plays an important role in legitimizing high-risk experimental technologies. We also need more transparency and accountability around border tech experiments, and people with lived experiences of migration must be foregrounded in any discussions.

Because in the end, it is not really about technology. What we are talking about is power – and the power differentials between actors like states and the private sector who decide on experimental projects, and the communities who become the testing grounds in this high-risk laboratory. For example, whose priorities really matter when we choose to create violent sound cannons or AI-powered lie detectors at the border instead of using AI to identify racist border guards?
