Who Pays for AI Injury?

Mihailis E. Diamantis
Professor of Law at the University of Iowa College of Law

Robots and algorithms injure people every day. When such injuries occur, it is not always clear—as a matter of law or equity—who should pay. Since algorithms are not people and many of the most important algorithms are not products, the harms they cause do not always fit cleanly into the law’s liability regimes. This can foreclose justice for victims and fail to deliver efficient incentives to developers. In a forthcoming article—Algorithms Acting Badly: A Solution from Corporate Law—I show how a modest extension of corporate law’s underlying liability principles could close much of this liability gap.

Any liability scheme for algorithmic injuries must cover a broad range of cases. Some algorithmic injuries involve individual tragedies. For example, in 2017, an assembly robot bypassed safety protocols, entered an unauthorized area, and crushed employee Wanda Holbrook’s head. In 2018, a self-driving car struck and killed pedestrian Elaine Herzberg as she was crossing the street. Other algorithmic injuries are more systemic. For example, algorithms that extend loans or hire employees often discriminate against minority applicants. Stock-trading algorithms can artificially distort stock prices for higher profit. Price-setting algorithms can collude to raise costs for customers.

Are such injuries just regrettable externalities of technological progress that victims should be left to bear? Or should someone else be civilly or criminally liable for the injury? The law does not always provide an answer, which can leave innocent victims holding the bag. Traditional legal doctrines for evaluating civil and criminal harms were developed at a time when injuries like those just described involved malicious or reckless conduct by a human agent. But we are now entering a phase of technological and economic progress where that is not always the case—where all the people involved might be doing everything they should, and it is the machines that are misbehaving. The trouble is that algorithms are not legally cognizable actors, so their ‘actions’ are invisible to the law. Holbrook’s wrongful death suit was dismissed. Prosecutors charged no one in Herzberg’s death. Suits alleging algorithmic discrimination, stock manipulation, and antitrust violations flounder in search of a theory of liability.

Leaving victims to bear the losses of algorithmic injury is problematic for several reasons. For one thing, it is unjust. Injuries to body, dignity, and purse are no less pernicious today simply because an algorithmic cause has replaced a human one. For another, it is unfair. Algorithms and robots generate, and will continue to generate, large social welfare surpluses, yet the discrete segments of society that tend to be on the receiving end of algorithmic injury are left to bear a disproportionate share of the attendant burdens. Finally, it is inefficient. Algorithmic injuries are externalities of technological progress. Without some regime for forcing internalization of those costs, there will be too little economic incentive to mitigate them.

Even so, it is sometimes difficult to see what legal alternative there is to letting victims fend for themselves. Clearly there are some cases where that is the right result, as when the victim is himself to blame for the injury, eg if he jumps recklessly in front of a self-driving car. There are also some cases where current law does provide for clear liability, as when the directing hand behind the injury is actually a human being, eg where a manufacturer purposely designs his pricing algorithm to collude with others. The law views the algorithm as a tool, extending the human’s agency and the scope of his liability. But many algorithmic injuries occur without human fault, whether on the part of victims or of developers. The power of modern machine learning is that it can solve problems in ways its developers and users could not have anticipated. That same power is what makes algorithmic harms inevitable. Where, as already happens and increasingly will, intelligent algorithms injure innocent victims without a human hand directing them to do so, victims have no legal recourse.

The law already has a robust civil and criminal framework for balancing victim interests against the interests of potential defendants who injure them. A solution to the algorithmic legal liability gap requires finding a suitable defendant to plug into that framework. One approach would be for the law to create such a defendant. Some ambitious scholars have argued that the law should recognize sophisticated algorithms as people capable of being sued. However, philosophical puzzles (are algorithms really people?), practical obstacles (how do you punish an algorithm?), and unexpected consequences (could algorithmic ‘people’ sue us back?) have proven insurmountable.

In Algorithms Acting Badly, I propose a less direct but more grounded approach. The cast of potential ‘people’ who can foot the financial and justice bill for algorithmic injury extends beyond victims, programmers, and the algorithms themselves. Corporations currently design and run the algorithms that have the most significant social impacts. Longstanding principles of corporate liability already recognize that corporations are ‘people’ capable of acting injuriously. Corporate law stipulates that corporations act through their employees when and because corporations have control over and benefit from employee conduct. But there is no reason to say that corporations can only act through their employees. The same control- and benefit-based rationales extend to corporate algorithms. If the law were to recognize that algorithmic conduct should largely qualify as corporate action, the whole framework of corporate civil and criminal liability would kick in.

The ‘beneficial-control account’ that I develop treats algorithmic injury as a species of corporate action when corporations have control over and seek to benefit from the underlying algorithms. This gives deserving victims a potential corporate defendant from whom to seek justice. Furthermore, when a corporation controls an algorithm, the potential for liability will encourage the corporation to exercise greater care in designing, monitoring, and modifying the algorithm. This will result in fewer algorithmic injuries. The internal logic of the beneficial-control account also protects corporations from potential suit when liability would be unfair (because the corporation never purported to benefit from the algorithm) or unproductive (because the corporation is not in sufficient control of the algorithm to fix it). By exercising the control it already has over corporations, the law can help ensure that algorithms operate responsibly.

Mihailis E Diamantis is an Associate Professor of Law at the College of Law, University of Iowa.
