Move Fast and Break Things: Law, Technology, and the Problem of Speed

Simon Chesterman
Dean and Professor at the Faculty of Law, National University of Singapore

Since computers entered the mainstream in the 1960s, the efficiency with which data could be processed has raised regulatory questions. This is well understood with respect to privacy. Data that was notionally public—divorce proceedings, say—had long been protected through the ‘practical obscurity’ of paper records. When such material was available in a single hard copy in a government office, the chances of one’s acquaintances or employer finding it were remote. Yet when it was computerized and made searchable through what ultimately became the Internet, such practical obscurity disappeared.

Today, high-speed computing poses comparable challenges to existing regulatory models in areas from securities regulation to competition law, merely by enabling lawful activities—trading in stocks, or comparing and adjusting prices, say—to be undertaken more quickly than previously thought possible. Many of these questions are practical rather than conceptual. Nevertheless, current approaches to slowing down such decision-making—through circuit-breakers that slow or stop trading, for example—are unlikely to address all of the problems raised by the speed of AI systems currently in use.

My article considers the regulatory challenges posed by speed. Many of the transformations in the digital economy—breathlessly referred to as the ‘fourth industrial revolution’—are more accurately attributable to the speed and efficiency of data processing than to true cognitive ability or ‘intelligence’ as such. Speed has, nevertheless, raised legal problems when rules designed for twentieth-century society are confronted with the changing practices of the twenty-first. The article examines three such challenges.

The first is the best known: the effacement of distance by the speed with which data can flow around the world. Cyber and Internet law are now sub-disciplines in their own right, raising complex jurisdictional and practical issues in regulating online behaviour. The focus here is on the combination of those structural features with increasingly sophisticated software, which poses difficulties for would-be regulators in areas from the protection of intellectual property to combating ‘fake news’.

The second area considered is high-frequency trading, epitomized by the ‘flash crash’ of 2010, in which a trillion dollars was wiped off the New York Stock Exchange—and then (mostly) reappeared. In theory, algorithms executing trades are subject to the same regulations as the human brokers that set them in motion. In practice, the possibility of disruption or manipulation due to the speed at which those algorithms operate has led bourses to explore ways of slowing them down. There is also a larger argument that computer-based trading has changed not only the culture but also the very nature of the market.
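To make the intuition concrete, a minimal sketch of one such mechanism, a price-band circuit breaker that halts trading when prices move too far too fast, might look as follows. The band, window, and interface are illustrative assumptions, not any exchange's actual parameters.

```python
from collections import deque

# Sketch of a price-band circuit breaker: halt trading when the latest
# price deviates from the oldest price in a rolling window by more than
# a set band. All thresholds here are illustrative, not any venue's rule.
BAND = 0.05          # permit moves of up to 5% within the window
WINDOW_SECS = 300    # 5-minute rolling reference window

class CircuitBreaker:
    def __init__(self):
        self._trades = deque()  # (timestamp, price) pairs, oldest first

    def record(self, timestamp: float, price: float) -> bool:
        """Record a trade; return True if trading should be halted."""
        self._trades.append((timestamp, price))
        # Discard trades that have aged out of the reference window.
        while timestamp - self._trades[0][0] > WINDOW_SECS:
            self._trades.popleft()
        reference = self._trades[0][1]
        return abs(price - reference) / reference > BAND

breaker = CircuitBreaker()
print(breaker.record(0.0, 100.0))   # False: first trade sets the reference
print(breaker.record(10.0, 93.0))   # True: a 7% drop trips the breaker
```

Whatever the parameters, such a rule reacts only after prices have already moved, which is one reason why circuit-breakers alone are unlikely to address all of the problems raised by speed.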

A third set of problems concerns competition law, also known as antitrust. The digital economy offers consumers access to information previously unimaginable in any traditional marketplace. Yet that information and more is also available to retailers, who are able to use pricing software to maximize profits. In the past, establishing anti-competitive conduct required proof of a meeting of the minds to collude on prices or abuse market dominance. The speed with which prices can be adjusted today means that tacit collusion may take place without any intent on the part of market actors—or even without any formal coordination between their computer programs.
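How tacit collusion can emerge without any agreement is easy to demonstrate with a toy model. In the sketch below, two sellers independently run the same simple rule: match the lower of the two current prices, never undercut, and occasionally probe a small increase. The rule and all parameters are assumptions for illustration only.

```python
import random

# Toy model of tacit algorithmic collusion. Neither seller communicates
# with the other, yet prices ratchet upward: a probe sticks whenever both
# sellers happen to probe in the same round, and is otherwise quietly
# withdrawn when the rival fails to follow.
COST = 1.00        # marginal cost, an effective price floor
PROBE = 0.05       # size of an upward price probe
PROBE_PROB = 0.1   # chance a seller probes in any given round

def next_price(own: float, rival: float) -> float:
    price = max(min(own, rival), COST)  # match the lower price; never undercut
    if random.random() < PROBE_PROB:
        price += PROBE                  # occasionally test a higher price
    return price

p1 = p2 = COST  # both start at the competitive price
for _ in range(1000):
    p1, p2 = next_price(p1, p2), next_price(p2, p1)

print(f"prices after 1,000 rounds: {p1:.2f} and {p2:.2f}")  # typically well above cost
```

Nothing in this model involves intent or coordination; each program acts unilaterally on public information. That is precisely what makes such conduct difficult to capture with doctrines built around a meeting of the minds.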

Individually, these challenges point to practical obstacles to regulation of information technology in a globalized world. Together, particularly when combined with AI systems that are autonomous and opaque, they show the danger that those systems will operate in a manner that is uncontainable, unstoppable, or undetectable. ‘Move fast and break things’ was an early motto at Facebook intended to push developers to take risks; the phrase appeared on office posters and featured in a letter from Mark Zuckerberg to investors when the company went public in 2012. Over time, it came to be embraced as a mantra applicable to technological disruption more generally, adopted by countless Silicon Valley imitators. As Facebook matured, however, and as the potential harms caused by such disruption grew, the slogan fell from favour.

The speed discussed here concerns processing power and connectivity rather than innovation, but it is likely that a similar reckoning will come for the digital economy. One way of addressing the problems identified here is through slowing everything down: localizing and compartmentalizing data, introducing artificial latency in trading algorithms, throwing sand in the gears of the digital marketplace. Such an approach may be the only way of continuing to rely on regulatory tools designed for humans and operating on a human timescale, but it runs the risk of undermining what makes those systems valuable in the first place.
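One version of that sand-in-the-gears approach can be sketched briefly: an exchange-side ‘speed bump’ that holds every incoming order for a fixed interval before it reaches the matching engine. The 350-microsecond delay echoes the figure popularised by IEX; the class and its interface are otherwise illustrative assumptions.

```python
import heapq
import itertools

# Sketch of an exchange-side 'speed bump': every incoming order is held
# for a fixed artificial delay before the matching engine may see it.
DELAY = 350e-6  # 350 microseconds of artificial latency

class SpeedBump:
    def __init__(self):
        self._queue = []               # (release_time, seq, order)
        self._seq = itertools.count()  # tie-breaker for identical times

    def submit(self, order: str, now: float) -> None:
        heapq.heappush(self._queue, (now + DELAY, next(self._seq), order))

    def release(self, now: float) -> list[str]:
        """Return every order whose artificial delay has elapsed."""
        ready = []
        while self._queue and self._queue[0][0] <= now:
            ready.append(heapq.heappop(self._queue)[2])
        return ready

bump = SpeedBump()
bump.submit("BUY 100 XYZ", now=0.0)
print(bump.release(now=0.0001))  # []: still inside the delay
print(bump.release(now=0.0004))  # ['BUY 100 XYZ']: delay has elapsed
```

The delay preserves the order of arrival while blunting pure speed advantages, but, as noted above, it does so by sacrificing part of what makes the system valuable.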

Deliberate deceleration is also unsustainable. Whether or not one accepts predictions that processing power will continue to increase indefinitely, the prospect of slowing it down or stopping it any time soon is remote. New rules and new institutions will be required, together with at least some role for AI systems themselves in investigating and upholding the law.

Simon Chesterman is Dean and Professor at the Faculty of Law, National University of Singapore. 
