Autonomous systems are entering our everyday lives. But what are they in legal terms? Are we just looking at sophisticated objects? Or should autonomous systems be treated as legal subjects, somewhat similar to humans? In a recent research paper, I propose to implement a ‘halfway’ or ‘in-between’ status: Teilrechtsfähigkeit—partial legal capacity.

The debate around the legal status of autonomous systems generally focuses on granting legal personality. The European Parliament’s appeal for an EU-wide ‘robot-law-regime’ is a prime example. According to the Parliament, autonomous systems create so-called ‘responsibility gaps’: doctrinal vacuums that have to be filled by specifically crafted new legal rules. But the proposal goes even further. In the long run, the Parliament calls for the implementation of ‘a specific legal status’, an ‘electronic personality’, to handle ‘cases where robots make autonomous decisions or otherwise interact with third parties independently’.

Although those ‘responsibility gaps’ do exist and legal personality could solve many doctrinal problems, many scholars strongly oppose the idea. And for good reason: granting legal personality is, at least in Western jurisdictions, foremost a matter of morality. Morality demands that human beings (and to some extent human associations as well) must be considered persons by the law. Applying legal personality to autonomous systems would therefore most likely culminate in what I call the ‘humanization trap’: the law would not only move a formerly marginal occurrence to the center of attention; it would also put autonomous systems on the same legal level as human beings. And once an entity reaches this point, the trap snaps. The impact of morality does not end with demanding a legal status: legal personality comes with a presumption in favor of persons that they have legal capacity not only in the abstract, but also with regard to specific rights and obligations. In other words, there is an urgent need for justification whenever certain rights and obligations are withheld from persons. It would thus be necessary to legitimize why autonomous systems should not enjoy the same rights and privileges that other persons enjoy, such as worker protection or even constitutional rights.

With that in mind, it appears that we are stuck in a dilemma. Granting legal personality seems crucial to close the ‘responsibility gaps’, but at the same time it would invoke the ‘humanization trap’. However, there is a way out: a ‘halfway’ or ‘in-between’ status. And the good news is that we do not have to start from scratch. German law offers exactly such a status: Teilrechtsfähigkeit, or partial legal capacity. Teilrechtsfähigkeit grants legal subjectivity, but only with respect to certain legal capabilities. Put simply, it is an application of the principle that form follows function, which was and remains characteristic of architects associated with the Bauhaus school.

Unlike legal personality, Teilrechtsfähigkeit is not a matter of morality. It stands for the notion that law itself can mold its actors according to its own particular terms and conditions. A company that is about to be incorporated is a classic example: it is considered a legal subject only insofar as this is necessary for the formation process (eg concluding sales or employment contracts), but its subjectivity ends wherever certain legal capabilities require formal registration. Consequently, the burden of justifying the allocation of legal capacities runs in the exact opposite direction: while legal personality involves subtracting rights and obligations, Teilrechtsfähigkeit is concerned with adding them. Applied to autonomous systems, one would no longer have to justify why they should not enjoy worker protection or constitutional rights; instead, one would have to justify why their specific function requires those rights.

If the legal status of Teilrechtsfähigkeit follows function, the primary question becomes what function autonomous systems take on. Looking at their areas of application, it is fair to say that, right now, autonomous systems are sophisticated servants. They take on activities which humans are either incapable of or unwilling to perform. At least for the time being, almost none of them purposefully pursue interests of their own. An autonomous car does not drive for driving’s sake; it drives to transport its occupant to a certain destination. A trading algorithm does not trade on its own account, but on the account of the human (or the company) who deploys it. In other words, we are looking at the typical ‘master-servant situation’, in which the servant acts autonomously, but on the master’s behalf.

Therefore, autonomous systems should be treated as legal subjects insofar as this status reflects their function as sophisticated servants. Although one could imagine (future) scenarios in which autonomous systems require protection particularly from their masters, this is not a matter of urgent concern. For the time being, the focus lies on practical and doctrinal considerations: when can the status as a ‘servant’ help to solve legal problems? As I point out in my paper, Teilrechtsfähigkeit could provide a viable solution in the area of contract formation and even criminal law, for example.

Naturally, it is far from certain that this approach provides the first-best solution. But it buys us time. For now, the status of Teilrechtsfähigkeit could be a powerful tool to solve most of the urgent problems involving autonomous systems without taking on too much risk. Since we do not know where artificial intelligence will eventually lead us, it might not be the worst idea to do as Germans say and auf Sicht fahren, ie to drive only so fast that you can see a difficult situation coming early enough to take countermeasures.

Jan-Erik Schirmer is a Senior Research Fellow at Humboldt-University Berlin.