
The Problem of Algorithmic Corporate Misconduct

Author(s)

Mihailis E. Diamantis
Associate Professor of Law at the University of Iowa College of Law


Technology will soon force broad changes in how we conceive of corporate liability. The law’s doctrines for evaluating corporate misconduct date from a time when human beings ran corporations. Today, breakthroughs in artificial intelligence and big data allow automated systems to make many business decisions like which loans to approve, how high to set prices, and when to trade stock. As corporate operations become increasingly automated, algorithms will come to replace employees as the leading cause of corporate harm. The law is not equipped for this development. Rooted in an antiquated paradigm, the law presently identifies corporate misconduct with employee misconduct. If it continues to do so, the inevitable march of technological progress will increasingly immunize corporations from most civil and criminal liability.

In a forthcoming article, ‘The Extended Corporate Mind: When Corporations Use AI to Break the Law’, I spell out the challenge automation poses for corporate law. The root of the problem is that the law has nothing to say when automated systems are responsible for the ‘thinking’ behind corporate misconduct. Most civil and criminal corporate liability requires evidence of a deficient corporate mental state, like purpose to discriminate, knowledge of inside information, or agreement to fix prices. The primary doctrine for attributing mental states to corporations—respondeat superior—defines corporate mental states in terms of employee mental states. When corporations misbehave through their employees, respondeat superior offers relatively straightforward results. But when corporations use algorithms to misbehave, the liability inquiry quickly aborts. Algorithms are not employees, nor do they have independent mental states. So respondeat superior cannot apply. This is true even if, from the outside, a corporation acting through an algorithm looks like it is behaving just as purposefully or knowingly as a corporation that uses only employees.

The present state of the law is worrisome because corporate automation will grow exponentially over the coming years. This all but guarantees that corporations will escape accountability as their operations require less and less human intervention. Though algorithms promise to make corporations more efficient, they do not remove (or even always reduce) the possibility that things will go awry. The worry is concrete: corporate algorithms are already causing harms that merit a searching liability inquiry.

The incentive structure that current law sets out for corporations will accelerate the law’s obsolescence. Safe algorithms take years to program, train, and test. Their rollout should be piecemeal, with cautious pilots followed by patches and updates to address lessons learned. By shielding corporations from liability for many algorithmic harms, the law encourages corporations to be cavalier. Businesses keen to manage their liabilities will seek the safe haven of algorithmic misconduct rather than chance liability for misconduct by human employees. We should expect corporations to turn to algorithms prematurely, before the underlying technology has been sufficiently tested for socially responsible use.

Fixing the problem of algorithmic corporate misconduct is not a simple matter of finding a nefarious corporate programmer and then applying respondeat superior to hold her employer liable. Certainly, there will be cases where an employee purposely or knowingly designs a corporate algorithm to break the law. In such scenarios, respondeat superior will suffice. In most cases, though, no such employee will exist. Sometimes, employees may have been reckless or negligent in designing harmful algorithms. While respondeat superior may help for liability schemes that only require recklessness or negligence, many of the most significant corporate liability statutes require more demanding mental states like purpose or knowledge. Furthermore, algorithms will often produce harms even without employee recklessness or negligence. The most powerful algorithms literally teach themselves how to make decisions. This gives them the ability to solve problems in unanticipated ways, freeing them from the constraining foresight of human intelligence. One consequence of this valuable flexibility is that these algorithms can create harms even if all the humans involved are entirely innocent.

To plug the algorithmic liability loophole, the law needs a framework for extending its understanding of the corporate mind beyond the employees whose shoes algorithms are coming to fill. The ideal solution would find a way to treat corporations the same regardless of whether algorithms or employees are behind the wheel. To have a realistic prospect of persuading lawmakers, the solution should steer clear of science fictions like robot minds and algorithmic agency. In my forthcoming article, I propose a detailed doctrine that I think can do the work. The basic idea is that corporations that use algorithms to fulfill employee roles should be treated as having the same mental states as corporations that engage in the same patterns of behavior using employees. Legal parity between employee and algorithmic misconduct would remove the incentives the law presently gives corporations to rush toward automation. To be clear, corporate automation is inevitable and desirable. But we should not allow it to compromise our ability to hold corporations accountable when they break the law. 

Mihailis E. Diamantis is an Associate Professor of Law at the University of Iowa College of Law.
