Within the Faculty, the Computers and Law Research group includes a number of significant grant-funded projects, as listed above, and opportunities to discuss and share research are provided by a variety of lively and engaging discussion groups, listed in the box to the right. We also list a selection of recent publications by group members below.

In addition, and jointly with the Faculty of Computer Science, we run a postgraduate course in Law and Computer Science which is open to law students taking the BCL, MJur and MLF, and to computer science students in their fourth or MSc years.


  • J Adams-Prassl, Humans as a Service: The Promise and Perils of Work in the Gig Economy (OUP 2018)
    WHAT IF YOUR BOSS WAS AN ALGORITHM? The gig economy promises to revolutionise work as we know it, offering flexibility and independence instead of 9-to-5 drudgery. The potential benefits are enormous: consumers enjoy the convenience and affordability of on-demand work while micro-entrepreneurs turn to online platforms in search of their next gig, task, or ride. IS THIS THE FUTURE OF WORK? This book offers an engaging account of work in the gig economy across the world. Competing narratives abound: on-demand gigs offer entrepreneurial flexibility - or precarious work, strictly controlled by user ratings and algorithmic surveillance. Platforms' sophisticated technology is the product of disruptive innovation - whilst the underlying business model has existed for centuries. HOW CAN WE PROTECT CONSUMERS & WORKERS WITHOUT STIFLING INNOVATION? As courts and governments around the world begin to grapple with the gig economy, Humans as a Service explores the challenges of on-demand work, and explains how we can ensure decent working conditions, protect consumers, and foster innovation. Employment law plays a central role in levelling the playing field: gigs, tasks, and rides are work, and should be regulated as such.
    ISBN: 9780198797012
  • JA Armour and M Sako, 'AI-enabled business models in legal services: from traditional law firms to next-generation law companies?' (2020) 7 Journal of Professions and Organization 27
    DOI: https://doi.org/10.1093/jpo/joaa001
    What will happen to law firms and the legal profession when the use of artificial intelligence (AI) becomes prevalent in legal services? We address this question by considering three related levels of analysis: tasks, business models, and organizations. First, we review AI’s technical capabilities in relation to tasks, to identify contexts where it is likely to replace or augment humans. AI is capable of doing some, but not all, legal tasks better than lawyers and is augmented by multidisciplinary human inputs. Second, we identify new business models for creating value in legal services by applying AI. These differ from law firms’ traditional legal advisory business model, because they require technological (non-human) assets and multidisciplinary human inputs. Third, we analyze the organizational structure that complements the old and new business models: the professional partnership (P2) is well-adapted to delivering the legal advisory business model, but the centralized management, access to outside capital, and employee incentives offered by the corporate form appear better to complement the new AI-enabled business models. Some law firms are experimenting with pursuing new and old business models in parallel. However, differences in complements create conflicts when business models are combined. These conflicts are partially externalized via contracting and segregated and realigned via vertical integration. Our analysis suggests that law firm experimentation with aligning different business models to distinct organizational entities, along with ethical concerns, will affect the extent to which the legal profession will become ‘hybrid professionals’.
    ISSN: 2051-8811
  • V Janeček and R Williams, 'Education for the Provision of Technologically Enhanced Legal Services' (2021) Computer Law & Security Review 1
    DOI: https://doi.org/10.1016/j.clsr.2020.105519
    Legal professionals increasingly rely on digital technologies when they provide legal services. The most advanced technologies such as artificial intelligence (AI) promise great advancements of legal services, but lawyers are traditionally not educated in the field of digital technology and thus cannot fully unlock the potential of such technologies in their practice. In this paper, we identify five distinct skills and knowledge gaps that prevent lawyers from implementing AI and digital technology in the provision of legal services and suggest concrete models for education and training in this area. Our findings and recommendations are based on a series of semi-structured interviews, design and delivery of an experimental course in ‘Law and Computer Science’, and an analysis of the empirical data in view of wider debates in the literature concerning legal education and 21st century skills.
  • HA Abraha, 'Law enforcement access to electronic evidence across borders: mapping policy approaches and emerging reform initiatives' (2021) 29 International Journal of Law and Information Technology
    DOI: https://doi.org/10.1093/ijlit/eaab001
    With the ubiquity of cloud computing, criminal investigations today—including exclusively domestic ones—often require access to data across borders. However, the traditional system of cross-border legal cooperation—the Mutual Legal Assistance system—is ill-suited to this development. There is a growing consensus that this system is unsustainable and needs to be reformed or replaced with new alternatives. That is where the consensus ends, however. Despite the shared understanding of the problem and repeated calls for reform or replacement of the traditional system, there is little agreement on what these reforms or alternative approaches should look like. What one can witness instead is the proliferation of uncoordinated initiatives that could lead to further jurisdictional conflict and legal uncertainty. The purpose of the present contribution is to map and examine these various initiatives based on the approaches they follow in addressing the challenges in obtaining electronic evidence across borders—issues that are referred to broadly in this article as ‘cross-border data access [CBDA] problem’. It tries to answer two questions: what approaches can best explain the proliferation of initiatives aimed at improving law enforcement access to electronic evidence across borders? To what extent are these initiatives apt to address the CBDA problem? This article develops and distinguishes between four approaches—reformist, unilateralist, internationalist and nuanced—that can best explain the current and emerging initiatives. It then examines the suitability and sustainability of these approaches against their stated objectives and some key principles that have enjoyed extensive support in policy and academic discussions.
  • H Eidenmüller and F Varesis, 'What is an Arbitration? Artificial Intelligence and the Vanishing Human Arbitrator' (2020) 17 NYU Journal of Law and Business 49
    DOI: https://doi.org/10.2139/ssrn.3629145
    Technological developments, especially digitization, artificial intelligence (AI), and blockchain technology, are currently disrupting the traditional format and conduct of arbitrations. Stakeholders in the arbitration market are exploring how new technologies and tools can be deployed to increase the efficiency and quality of the arbitration process. The COVID-19 pandemic is accelerating this trend. In this essay, we analyze the “Anatomy of an Arbitration”. We argue that, functionally, fully AI-powered arbitrations will be both technically feasible and should be permitted by the law at some point in the future. There is nothing in the concept of an arbitration that requires human control, governance, or even input. We further argue that the existing legal framework for international commercial arbitrations, the “New York Convention” (NYC) in particular, is capable of adapting to and accommodating fully AI-powered arbitrations. We anticipate significant regulatory competition between jurisdictions to promote technology-assisted or even fully AI-powered arbitrations, and we argue that this competition would be beneficial. In this competition, we expect that common law jurisdictions will enjoy an advantage: machine learning applications for legal decision-making can be developed more easily for jurisdictions in which case law plays a pivotal role.
  • R Williams, 'Rethinking Administrative Law for Algorithmic Decision Making' (2021) Oxford Journal of Legal Studies
    DOI: https://doi.org/10.1093/ojls/gqab032
    The increasing prevalence of algorithmic decision making (ADM) by public authorities raises a number of challenges for administrative law in the form of technical decisions about the necessary metrics for evaluating such systems, their opacity, the scalability of errors, their use of correlation as opposed to causation and so on. If administrative law is to provide the necessary guidance to enable optimal use of such systems, there are a number of ways in which it will need to become more nuanced and advanced. However, if it is able to rise to this challenge, administrative law has the potential not only to do useful work itself in controlling ADM, but also to support the work of the Information Commissioner's Office and provide guidance on the interpretation of concepts such as 'meaningful information' and 'proportionality' within the General Data Protection Regulation.
  • R Williams, R Cloete, J Cobbe, C Cottrill and others, 'From transparency to accountability of intelligent systems: Moving beyond aspirations' (2022) Data & Policy
    DOI: https://doi.org/10.1017/dap.2021.37
    A number of governmental and nongovernmental organizations have made significant efforts to encourage the development of artificial intelligence in line with a series of aspirational concepts such as transparency, interpretability, explainability, and accountability. The difficulty at present, however, is that these concepts exist at a fairly abstract level, whereas in order for them to have the tangible effects desired they need to become more concrete and specific. This article undertakes precisely this process of concretisation, mapping how the different concepts interrelate and what in particular they each require in order to move from being high-level aspirations to detailed and enforceable requirements. We argue that the key concept in this process is accountability, since unless an entity can be held accountable for compliance with the other concepts, and indeed more generally, those concepts cannot do the work required of them. There is a variety of taxonomies of accountability in the literature. However, at the core of each account appears to be a sense of "answerability"; a need to explain or to give an account. It is this ability to call an entity to account which provides the impetus for each of the other concepts and helps us to understand what they must each require.
  • J Adams-Prassl, Reuben Binns and Aislinn Kelly-Lyth, 'Directly Discriminatory Algorithms' (2022) Modern Law Review (forthcoming)
    Discriminatory bias in algorithmic systems is widely documented. How should the law respond? A broad consensus suggests approaching the issue principally through the lens of indirect discrimination, focusing on algorithmic systems’ impact. In this article, we set out to challenge this approach, arguing that while indirect discrimination law has an important role to play, a narrow focus on this regime in the context of machine learning algorithms is both normatively undesirable and legally flawed. We illustrate how certain forms of algorithmic bias in frequently deployed algorithms might constitute direct discrimination, and explore the ramifications—both in practical terms, and the broader challenges automated decision-making systems pose to the conceptual apparatus of anti-discrimination law.