The algorithmic boss: protecting human agency in an age of automated management

Line manager, or line of code? When your boss is a computer program, questions of fairness, transparency and human agency become increasingly urgent.

It may sound like dystopian science fiction, but algorithmic management – the use of automated systems to hire, supervise and even dismiss workers – is already transforming how organisations operate. What began in the gig economy (think ride-share drivers and food delivery couriers) has quietly spread into a host of professional spaces. “The big fear was always about AI replacing jobs on the ground,” says Jeremias Adams-Prassl, Professor of Law at Oxford and Fellow of Magdalen College. “But what we’re seeing is the automation of management itself.”

Professor Adams-Prassl has been examining these developments for over a decade. His early research into the gig economy – culminating in the award-winning book Humans as a Service – challenged the idea that companies such as Uber or Deliveroo represented something entirely new. “From a legal perspective, there was nothing novel there,” he explains. “Some employers have long tried to disguise their workforce as self-employed. What was new was the technology – the way apps were used to control workers in extremely tight ways.”

That insight led to his European Research Council-funded project, iManage, which explores how employment law can adapt to algorithmic management. The project asks a deceptively simple question: when human decisions are replaced by algorithmic ones, how can the law keep up?

The EU’s General Data Protection Regulation (GDPR) currently gives individuals the right not to be subject to fully automated decisions that significantly affect them – being hired or fired, for example. But the UK has recently repealed that safeguard. “As of January 2026, we will no longer have a general right to a human decision in the UK,” says Professor Adams-Prassl. “So we need to think about alternative protection mechanisms.”

His current research suggests a shift in focus. “Having a human in the loop might be a good start – but often it won’t work,” he says. “It doesn’t necessarily protect us from bias or error. Instead, we need to think about the broader system – what I have termed the humans before, after and around the loop. How can the law create real points of agency, contestation and accountability in an automated environment?”

The iManage team (including Aislin Kelly-Lyth, Halefom Abraha, Six Silberman and Rakshita Sangh) was recently recognised in Oxford's Social Sciences Impact Awards for its work to produce an algorithmic management blueprint – several aspects of which have been adopted in the EU’s Platform Work Directive, granting new rights to a staggering 43 million gig economy workers. “It was exciting to see research feed directly into policy,” says Professor Adams-Prassl. “We wanted to show that existing employment and discrimination laws already cover many of these issues, but that we also need new rules to preserve worker voice and agency.”

Algorithmic management is no longer confined to keeping tabs on delivery drivers or warehouse staff. “It’s built into the software many of us use every day,” says Professor Adams-Prassl. “Tools we use in the office to communicate and conduct meetings have surveillance and performance-tracking capabilities built in. The question is not whether the technology exists, but whether employers should use it – and under what safeguards. There may be some positive use cases around health and safety, for example, but does that justify the intrusion? We need a much more nuanced conversation about responsible use.”

Examples of harm are already well documented: recruitment algorithms have been shown to reproduce gender bias, facial recognition systems to be less accurate for darker skin tones, and automated productivity tools to misinterpret breaks as inefficiency. For Professor Adams-Prassl, these illustrate a larger issue. 

“People may think that because AI is new, the law doesn’t apply. But that’s a very convenient myth. The starting point must be that existing rules – including those on discrimination, privacy and fairness – still hold. Only once we’ve applied those can we identify the true gaps that need new regulation.”

Across his research, Professor Adams-Prassl has been drawn to a recurring “golden thread”: the relationship between complexity and responsibility. His doctoral work examined how corporations use legal structures to obscure control, but algorithmic management, he argues, introduces a new kind of complexity. “When the complexity is inherent in the technology rather than a legal strategy, it becomes much harder for the law to respond,” he says. “That’s why protecting human agency is so central. We need to ensure that people retain meaningful control and recourse in systems that are increasingly technologically sophisticated and opaque. The challenge – and the opportunity – is to make law and technology evolve together.”
