
Government-to-Robot Enforcement

Author(s)

Susan C. Morse


Regulatory compliance. Required filings. Tax returns.

Bored yet?

The mere idea of following a complex set of government rules is enough to make human eyes glaze over. The task presents itself as the legal equivalent of long division – tedious, time-consuming, and possessed of a clearly correct answer. It may seem like the perfect job for a robot.

And indeed, robots – or automated computer systems – have begun to occupy the field of compliance. They respond to copyright takedown requests. They file tax returns. They keep the wage and hour records that fix compensation for tens of millions of Americans.

But these automated legal systems are not merely performing long-division-like tasks with clear answers. Instead, they make subtle legal judgments. They decide whether to presume the validity of a copyright takedown request. They determine if a self-driving car will avoid a pedestrian or an oncoming vehicle. They allow the employer (and purchaser of the software product) to round down the time worked by hourly employees to the nearest hour.

Some would require more human control over the robots that perform legal compliance. Larry Lessig argued that democracy should influence the design of cyberspace. Others would require human involvement in the development or application of law, or would build transparency and procedural controls into automated systems. Still others propose a Federal Robotics Commission, a ‘technology meta-agency’, or more regulatory activism with respect to system design.

Yet these ex ante solutions often would require government to take on automated law systems at the point of their greatest technological strength, for instance in the development of their software programs. If government cannot build technical capacity to match that of the market, the effort may fail. Also, some ex ante solutions rely on the assumption that systems operate using a logic train that can be understood by a human auditor. This may be a correct assumption for a system that runs logical programming rules or algorithms. But ‘explainability’ will likely decrease as automated law systems increasingly use artificial intelligence techniques like machine learning to make decisions.

The idea I explore in Government-to-Robot Enforcement differs from the ex ante solutions already proposed. It takes an ex post view and does not suggest that government should directly modify computer code. Instead, government-to-robot enforcement suggests that automated law systems have a vulnerability that enforcers will begin to exploit. This vulnerability is centralization. The prediction is that government will regulate users of automated law systems by directing enforcement efforts against the centralized robots rather than against individual users.

Centralization is an essential feature of automated law systems. The centralization of legal decisions in an automated law system, for instance through a computer software program, provides the economy of scale that makes the automated system an efficient and relatively low-cost provider of legal compliance.

Centralization of automated law is also an opportunity for the enforcer of government regulations. Right now, limited government resources prevent the government from finding and penalizing all noncompliance. Underdetection and underenforcement are widespread. But government-to-robot enforcement could allow a new style of broad, efficient ex post enforcement. Features include:

  • Robot bears risk of error. Relocation of legal liability for compliance errors so that it lies with the automated legal system itself, not with the users of the system. For instance, TurboTax would bear liability for errors made on tax returns filed using TurboTax.
  • Robot as defendant. Government alleges claim of noncompliance directly against the robot, or automated law system.
  • Subrogation. The automated law system controls the dispute, including decisions about settlement, appeal, and the like.
  • Strict liability. The automated law system bears strict liability for errors. The user may be required to compensate the system to the extent errors result from false facts input by the user.
  • Damages multiplier. The automated law system would pay an additional amount determined by a damages multiplier to account for the liabilities of other users arising from the same program error (a rough arithmetic sketch follows this list). This potential liability could require insurance.
  • Preclusion. The decision would preclude relitigation of the issues covered by the damages multiplier, such as all filings by a system with a particular issue in a particular year.
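
In numbers, the multiplier is simple arithmetic: the per-user liability established in a single enforcement action, scaled by the number of users whose filings share the same program error. The sketch below is purely illustrative – the function name and the dollar figures are assumptions, not anything drawn from the article.

```python
# Hypothetical sketch of the damages-multiplier arithmetic described above.
# The function name and all figures are illustrative assumptions.

def multiplied_damages(per_user_liability: float, affected_users: int) -> float:
    """Scale one adjudicated error across every user of the same program."""
    return per_user_liability * affected_users

# Example: a program error understates each affected tax return by $120,
# and the system filed 50,000 returns containing that error this year.
total = multiplied_damages(120.0, 50_000)
print(f"${total:,.0f}")  # $6,000,000
```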

Government-to-robot enforcement could empower regulators to enforce rules more completely and efficiently than has ever before been possible. This could have positive results. Better enforcement of environmental regulations means less pollution. Better enforcement of tax rules means less tax avoidance, so that law-abiding taxpayers can bear a smaller, fairer share of the burden of public finance. And because an automated law system would bear the risk of its legal errors and pass that cost on to all of its users, more compliant systems would cost less, and more aggressive systems would cost more.

But government-to-robot enforcement has dangers, too. Of course, better enforcement of rules is a good thing only if the rules are good policy to begin with. There are also more specific concerns that arise from the automated legal systems’ place at the centre of compliance activity.

Automated legal systems could unduly influence, or capture, government regulators, causing an anti-government tilt in the development of the law. Conversely, government could unduly influence, or ‘reverse capture’, automated systems, causing a pro-government tilt in the law.

These problems of capture and reverse capture are difficult to solve. The efficiency of government-to-robot enforcement arises precisely because it facilitates centralized negotiation and debate between government and robots about the content of the law, rather than requiring government to conduct enforcement actions against the many users of the automated law system. This results in more complete and efficient enforcement. It may also result in lower compliance costs.

Yet if the relationship between government and robot becomes close and cooperative, there is little space for an individual user to complain. She may even have contracted with the automated law system to transfer to the robot the responsibility of responding to a government claim that she has violated a rule. Intuit’s TurboTax offers just such a service, called Audit Defense.

Government-to-robot enforcement would narrow individuals’ chances to take a novel legal position. This could slow the development of the law. Think of the landmark 2013 constitutional case of United States v. Windsor, in which the Supreme Court concluded that the federal government could not refuse to respect a same-sex marriage that was valid under state law. The case involved Edith Windsor’s estate tax return position that she was entitled to a surviving spouse exemption upon the death of her wife. Could Edith Windsor have brought that case if her tax return had fallen under a government-to-robot enforcement regime?

Government-to-robot enforcement offers opportunity, subject to caution. Empowering regulators to do a better job of enforcing the law is generally a good thing, so long as less illegal pollution or less illegal tax avoidance is a good thing. But enabling the law to develop along lines agreed to by automated compliance systems and the government could endanger the quality and development of the law. Government-to-robot enforcement calls for mitigating this risk: it should allow external avenues to challenge the results of robots’ decisions about the content of the law, even when those results are endorsed by the government.

Susan C. Morse is the Angus G. Wynne, Sr. Professor in Civil Jurisprudence at the University of Texas School of Law.
