Bugs in the system: technology experts and the new legal order
In a world where AI systems increasingly decide who receives a loan or a social benefit, who gets hired for a job, or who is flagged as a security risk, technological expertise becomes central to AI rule-making and enforcement. Dr Margarita Amaxopoulou, a Leverhulme Trust Early Career Research Fellow in Oxford’s Faculty of Law, is undertaking a major new research project that investigates a crucial issue: who gets to shape, write and enforce AI rules? And what is the impact of that on our legal systems?
Dr Amaxopoulou is interested in the emerging ‘credibility contests’ between different professional communities over authority and influence in the AI regulation space – particularly between legal experts and technology experts. These credibility contests matter, argues Dr Amaxopoulou, because they determine which voices are heard and prioritised when rules and regulatory frameworks are created and enforced.
Historically, lawyers have played a dominant role in shaping regulatory regimes, but the rise of AI has blurred traditional boundaries. Technology experts are increasingly involved in designing, interpreting and enforcing rules for AI. This rise in power and influence brings both opportunities and risks. “It’s not about who gets the job or who is seen as more influential by policymakers,” says Dr Amaxopoulou. “It’s about a potentially fundamental step-change in how legal systems work.”
She sees parallels between the current technological shift and earlier periods of professional transformation. In the United States, for example, economists took on an increasingly dominant role in policymaking in the mid-20th century, sometimes displacing legal perspectives on justice, fairness and equality. A similar trend is now unfolding, with technologists stepping into regulatory and advisory roles. Dr Amaxopoulou says:
Governments want innovation and economic growth for their countries so they often listen to those perceived as ‘innovators’. But what is prioritised and what is overlooked in this process – and at what cost?
The historically established position of lawyers in rule-making and implementation appears to be taking a back seat in the governance of AI, argues Dr Amaxopoulou. Through standardisation bodies, certification systems and algorithmic compliance tools, it is technology experts who are creating new, legally significant technical norms. These norms may be formally recognised by laws such as the EU AI Act, or they may be non-legally binding – but very influential – points of reference for compliance purposes, such as the General-Purpose AI Code of Practice. Yet technology experts are in most cases not legally trained, and often have limited or no knowledge of legal epistemologies or of the areas of law applicable in AI contexts.
Dr Amaxopoulou’s Leverhulme-funded project explores how this shift in authority is taking place in the rapidly developing field of AI regulation in the UK and the EU – and what this shift means for the law. It examines how regulatory frameworks may facilitate or encourage this shift, and maps the legal and institutional mechanisms that give engineers formal and informal mandates to act as regulators (from government ‘assurance schemes’ to the incorporation of private technical standards into legislation). Her research asks: how does the law delegate or defer to technical expertise in AI regulation?
Dr Amaxopoulou is also interested in the feedback loop this phenomenon triggers: how does the increased importance and role of technical expertise in rule-creation and implementation impact the law? What happens to law’s claim to govern when the making and interpretation of norms migrate to technical domains? How do forms of mandated or de facto techno-regulation interact with the principle of accountability?
One central concern is how fundamental legal and democratic concepts such as fairness, equality or even the rule of law are interpreted and applied when shaped through a technical rather than a legal lens. Dr Amaxopoulou says: “For example, in engineering circles, democracy is often interpreted in majoritarian or utility-based terms – ‘if we are serving users, we are serving society’. But users are not society. That perspective ignores power imbalances, historical injustice, and the protection of vulnerable groups.”
More broadly, Dr Amaxopoulou’s work contributes to a growing field of legal research examining how digital technologies challenge traditional models of regulation. She is particularly concerned with ensuring that the development and deployment of AI remains consistent with democratic values and the rule of law. She says:
What fascinates me most is how AI is not just a technological shift – it’s a societal shift, and a deeply political one. It raises urgent questions about who holds power, who decides what is fair, and how we protect fundamental rights in a digital age.