Guest post by Samuel Singler. Samuel is an MPhil International Relations candidate at St Antony’s College, Oxford. His current research critically examines the use of large-scale information systems and automated risk management technologies at borders, including a case study of recent developments in the field of EU external border controls.
An EU-funded border control pilot project, iBorderCtrl, to be trialed in Greece, Hungary, and Latvia, recently made headlines, attracting both praise and criticism. The project involves the deployment of Artificial Intelligence-based, computer-animated border agents to conduct lie-detector tests at the external borders of the EU. Despite the novelty of passengers directly interacting with virtual border agents, iBorderCtrl is in fact representative of a broader trend of technologizing border controls through the deployment of automated security technologies. The deployment of these systems to collect and analyze vast quantities of personal data is not simply envisaged as a tool for border management, but also as a key component of contemporary transnational surveillance and security practices carried out within and beyond those borders.
So, aside from their potential strengths or limitations, what do these border control technologies tell us about crime, control, and justice in the twenty-first century? And, importantly, what might we gain from taking a deeper look at the technologies themselves, in addition to the legal and political environment in which they are deployed? Drawing on recent fieldwork conducted at EU institutions in Brussels, this post explores these questions.
High-tech surveillance technologies at the border have often been understood as tools within a more general turn towards the criminalization of migration, with invaluable existing scholarship analyzing their impact on issues such as citizenship and global segregation. New technologies of surveillance have also been more broadly conceptualized as a component of neo-liberalism in the Global North, in the context of which novel technological solutions ‘enable’ or ‘facilitate’ the pursuit of a neo-liberal logic of managing uncertainty.
It is, of course, true that technological developments expand the boundaries of possible human action. However, accounts of the technological ‘facilitation’ of existing political rationales miss the independent effect that technologies can have in shaping and stabilizing our understandings of what is politically desirable. For example, in his historical study of cartographic technologies and legal treaties, Jordan Branch found that linear borders – borders as lines on a map delineating exclusive, non-overlapping political authority – originated in the commercial mapmaking practices of private individuals, centuries before linear territoriality became the dominant way of conceptualizing political authority. The availability of linear maps shaped the imaginations of political leaders and provided a way to claim exclusive control over colonies in the New World. These practices were slowly mirrored back into Europe, where they transformed conceptions of political authority by delegitimizing overlapping authority between the Church, the Emperor, kings, and local authorities, and legitimizing instead the exclusive territorial authority of the state, which became dominant by the early nineteenth century.
Similar observations might now be made about the potential effects of Big Data analytics and AI-based technologies. In interviews conducted with senior EU officials working on large-scale IT systems for border controls and security, it became apparent that the perceived need for new systems often arose from the establishment of previous ones. A lack of information that had previously been regarded as irrelevant or insignificant was thereby reframed as an ‘information gap’ that must be filled by the creation of yet another new system. Just as the development of maps depicting political authority as linearly bounded reshaped understandings of space, so too has a growing network of data-driven information systems reshaped understandings of non-knowledge. Such justifications are further supported by the machine learning logics of the technologies themselves, according to which more data, however ‘irrelevant’ in traditional terms, will allow law enforcement agencies to predict transgressions before they are committed and pre-empt them accordingly. Indeed, an EU official more critical of these new systems commented that proposals are constructed based on what is technologically feasible, while legal and political justifications are thought up retroactively: ‘The direction is more and more data. […] They [the European Commission] come up with an idea, and then they justify it. The reasons and examples in the proposals are not explained or supported by evidence.’
Further examples of technological imperatives shaping security policy are found in the justifications given for law enforcement access to new border control databases. For instance, the 2013 proposal to establish an Entry-Exit System (EES) stipulated a two-year evaluation period during which the need for law enforcement access would have to be clearly demonstrated. In the 2017 final regulation, however, access is granted from the outset. Despite a lack of evidence demonstrating the need for immediate and unconstrained access to this data, a key argument employed by the Commission to justify this measure to the European Parliament was simply that access was already given to data in the Visa Information System and EURODAC, and that disallowing access to the EES would undermine the ability of the network of information systems to function properly. EU officials explained that for the Commission, the goal has been the smoother operation of the network of databases: ‘I mean, it’s better to have the double, two tools, two different systems, two different things [migration and security]. If you can unite them in one tool, that makes it easier, then it’s more efficient, that’s fine.’
These examples demonstrate the ways in which technological imperatives shape understandings of the problems at hand, and how technological arguments can be enrolled into these debates to justify more intrusive surveillance and policing of migrants. Although the outcome in terms of further criminalizing migration is similar in either case, an analysis focusing on the political context and legal texts while sidelining the role of the technologies themselves risks making false assumptions concerning the drivers of this process.
These insights are also important when considering the future of border controls and surveillance technologies. As recognized even by the European Commission itself, technologies first deployed at borders or in exceptional circumstances are likely to become normalized among the general population as well. This point was touched upon by a senior EU official, who expressed concern at the possibility of large-scale surveillance infrastructures diffusing inwards from the border to monitor and control EU citizens as well. Despite personally viewing this outcome as undesirable, and arguing that it remains politically unpalatable, the official thought that eventual diffusion and ‘function creep’ were inevitably built into the operational logics of these technologies: ‘I personally think that we should not go there. I think for me that’s one step too many. […] It’s extremely sensitive, of course, but I think it will come. I think it will come.’
An appreciation of the ways technological innovation shapes political understandings can also inform new, more effective forms of theoretical and normative critique of ‘crimmigration’ law and security practices. What Jef Huysmans calls the ‘extitutional’ character of contemporary surveillance technologies means that no single entity controls a unitary surveillance apparatus. Rather, contemporary surveillance networks are the result of complex interactions between public and private actors – practices that are both imposed and willingly opted into – and of the development and diffusion of new technologies across societies for a variety of motivations (commercial profit, social desirability, access to goods and services, security, etc.), which often overlap and spill over into other domains. As Kevin Haggerty and Richard Ericson observe, criticism or prohibition of any particular technology or security agency is therefore unlikely to produce significant political change. Critiques should instead address the overall assemblages of surveillance and criminal control within which particular technologies are situated, and aim to uncover the ways in which everyday practices contribute to them.
Technologically focused research into border controls will allow for productive interdisciplinary engagement between criminology and other fields of study, such as Science and Technology Studies, International Relations, and International Political Sociology. If technological feasibility as such indeed determines the political desirability of increasingly intrusive tools of surveillance and border control, this profoundly affects how we should understand the politics of migration, security, and crime. Hence, the study of political and social contexts should be supported by an analysis of technological developments when analyzing and predicting trends in the criminalization of migration.
How to cite this blog post (Harvard style)
Singler, S. (2018) The Role of Technology in the Criminalization of Migration. Available at: https://www.law.ox.ac.uk/research-subject-groups/centre-criminology/centreborder-criminologies/blog/2018/11/role-technology (Accessed [date]).