
The Rise of Robots and the Law of Humans

Artificial Intelligence (‘AI’) is rapidly changing how we live and work. As routine tasks, both manual and cognitive, become increasingly automated, it is anticipated that robots (‘embodied AI’) will take over approximately one third of the jobs in traditional professions by 2025. The law will shape the future of AI. It will determine, among other things, the permissible uses of AI and the costs of new products and technologies. Further, the initial regulatory decisions will be crucial: they may create path dependencies and make it difficult to change regulatory course later.

Regulating AI is going to be challenging. After all, the law is – and always has been – made by humans and for humans. Just think of fundamental concepts such as ‘personhood’ and ‘legal personality’. Historically, these concepts related to humans, ie natural persons. AI will thus strain the legal system: how shall we deal with robots? Should we accord them legal personality and give them the right to acquire and hold property and to conclude contracts?

In a recent paper, I attempt to answer these and other fundamental questions raised by the rise of robots and the emergence of ‘robot law’. The paper is based on my inaugural lecture as the Freshfields Professor of Commercial Law at the University of Oxford on 9 June 2016.

The main theses developed in the paper are the following: (i) Robot regulation must be robot- and context-specific. This requires a profound understanding of the micro- and macro-effects of ‘robot behaviour’ in specific areas. (ii) Existing legal categories, suitably refined, can sensibly be applied to robots and used to regulate them. (iii) Robot law is shaped by the ‘deep normative structure’ of a society. (iv) If that structure is utilitarian, smart robots should, in the not too distant future, be treated like humans: they should be accorded legal personality and given the power to acquire and hold property and to conclude contracts. (v) The case against treating robots like humans rests on epistemological and ontological arguments. These relate to whether machines can think (they cannot) and what it means to be human.

I develop these theses primarily in the context of self-driving cars – robots on the road with huge potential to revolutionize our daily lives and commerce.

In the paper, I also consider policy problems that relate to or arise from my findings. First, given the significant differences in the ‘deep normative structure’ of different societies, it will be difficult for states to agree on common policies. Hence, robot law will probably be characterized by significant regulatory diversity and regulatory competition. The incentive for states to attract investment in AI will further spur this competition. It seems likely that ‘utilitarian states’ will enact more ‘robot-friendly’ laws, putting pressure on other jurisdictions to follow suit.

Second, the question of access to AI must be raised. There can be no doubt that such access is a significant source of power. Powerful private actors might use smart technologies to shape transactions to their advantage, while less sophisticated parties lose out. Are certain smart technologies public goods? Should they be accessible at low cost to all?

Finally, AI will also fundamentally change law-making and the legal profession. This raises the intriguing question of whether, at some point, smart (AI-based) law-making will assist us in regulating AI products and services. Smart technologies will undoubtedly enhance the efficiency of law-making on a technical level. It is quite a different matter whether they will be able to help us tackle complicated regulatory problems that require intricate value judgments. On this point, I am deeply sceptical. Again, machines cannot think, let alone solve deep philosophical problems.

Horst Eidenmüller is the Freshfields Professor of Commercial Law at the University of Oxford.
