
Autonomy by Algorithm? A Paradox

Author(s)

Marietta Auer
Director, Max Planck Institute for Legal History and Legal Theory, Frankfurt


This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Marietta Auer, who participated on the panel on ‘Law by Algorithm’.

We live in a world where the person, as an indivisible entity, is increasingly displaced as a conceptual tool within the framework of the algorithmic society. ‘Indivisible’ here means exactly what it sounds like: incapable of division, in-divisible. Because algorithms are now in a position to generate ever more precise representations of the empirical human being as a product of digital information, it has become possible to circumvent the indivisible, in-dividual character of the person as a social construct in a variety of contexts.

But what effect does this have on how we understand ourselves as human beings and persons? We attribute to ourselves the indivisible quality of moral subjecthood as the basis of our autonomy. This premise, however, is put to the test when the indivisible quality of the person disappears behind algorithmically generated type profiles. The individual person might then no longer be conceived as indivisible, equal and free, but rather as divisible, calculable, predictable and unfree. In the algorithmic society, the real threat is that the individual disappears behind big data and its algorithmic readability. The individual becomes irrelevant for the purposes of legal and economic control, because data statistics literally know the individual person better than she knows herself.

The single most surprising aspect of this development, however, is that we seem to want society to develop in this way. Herein lies the real challenge when it comes to thinking about our autonomy: we assume quite naturally that we continue to exist as in-divisible subjects who are capable of exercising normatively significant autonomy even when we become fully predictable digital agents. Formulated in terms of a paradox, what does it tell us about our understanding of autonomy that we believe we are fully autonomous in light of the heteronomous predictability of our own use of autonomy?

Let me use an example to illustrate the point. Consumer data is widely exploited by businesses for algorithmic customer profiling and for manipulating transactions with consumers in various ways. One such case, analysed by Eidenmüller and Wagner, is that of businesses siphoning off consumer rents through algorithmically driven price discrimination. This is done by exploiting the digitally available consumer histories of individual customers for personalized pricing, or simply by extracting higher prices from financially stronger consumers. But the problem goes much deeper. There is a whole field of scholarship on the algorithmic assessment of customer lifetime value (CLV), a metric that expresses how valuable a particular customer will be to the company over his or her entire life. A higher CLV is rewarded, for instance, with better service for premium customers. Given that customers have always been rated and differentiated according to their perceived value to a given company, however, wherein lies the problem? When CLV is measured by means of algorithmic profiling, it becomes much harder to escape its metrics by winning the goodwill of a particular trader through a personal relationship. Instead, the customer will likely remain trapped in a digital web based on a multitude of group-related criteria that cannot be changed by individual behaviour. Such criteria include, but are not limited to, gender, race, domicile and financial history, all of which are readily available through rating agencies. Algorithmic profiling thus makes it quite difficult to escape from social patterns based on statistical figures measured against entire population groups. This is precisely what it means to say that individuals are no longer visible as individuals: the customer, as an individual, is no longer relevant as the focal point of assessment for the commercial enterprise. What matters for the digital economy is a customer’s statistical lifetime purchase expectancy measured against that of all other consumers. Not only is this approach more precise than any individual assessment of a given customer, it even works without ever having to meet the customer.
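To make the mechanics concrete, here is a minimal sketch in Python of one common textbook formulation of CLV: the sum of expected per-period margins, weighted by the probability that the customer is still retained and discounted to present value. All names and figures are invented for illustration and are not drawn from the scholarship the post refers to.

```python
# Minimal sketch of a textbook CLV formula. All inputs are
# illustrative assumptions, not real customer data.

def clv(margin: float, retention: float, discount: float, periods: int) -> float:
    """Sum of discounted, retention-weighted margins over `periods`."""
    return sum(
        margin * (retention ** t) / ((1 + discount) ** t)
        for t in range(periods)
    )

# Two hypothetical customers with identical spending but different
# predicted retention: the score, not the person, drives 'premium' treatment.
print(round(clv(margin=100.0, retention=0.9, discount=0.1, periods=10), 2))  # 476.06
print(round(clv(margin=100.0, retention=0.6, discount=0.1, periods=10), 2))  # 219.49
```

The point of the sketch is only that every input is a statistical prediction about the customer's future behaviour, not a fact about the customer as an individual.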

The downside here is that the alleged neutrality and value blindness of rating algorithms remains a fiction. Algorithms produce statistical artefacts that can perpetuate existing forms of discrimination on a statistical basis. This means imposing non-matching identities on individuals, identities that can in fact harm them by reproducing pre-existing discriminatory biases on a much stronger, seemingly scientific basis. In other words, algorithms provide us with a compelling reason to discriminate, for instance, against women on the labour or credit market. But need this be the case? Aren’t algorithms better equipped to be neutral decision-makers than humans? I am afraid that the answer is, at the very least, a qualified ‘no’. As I stated above, the problem of digital profiling is that the individual disappears behind a smokescreen of algorithmic pattern matching. This obscures not only the person with her individual concerns; it also conceals two points of reference that usually serve as the basis for any legal recourse against discrimination. First, the causal connection between the discriminatory act and the actor falls away when that actor is just an algorithm. Individual accountability for discriminatory acts dissolves when the discrimination is merely the statistical result of an unintentional algorithmic process in which neither the intention of the code developers nor even the particular data of the discriminated individuals matter. Second, the procedural tools through which individuals can normally assert their rights to privacy and equal protection are jeopardized by algorithmic pattern matching. Individual consent to the use of data becomes pointless within the collective ontology of algorithmic profiling. Individual rights to privacy and equal treatment become structurally powerless in the realm of statistics.
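The structural problem can be shown in a few lines. The following toy example, with entirely invented data and not modelled on any real scoring system, scores applicants by the historical average of their group, so that an individual's own conduct never enters the calculation and past discrimination is simply carried forward under a statistical veneer.

```python
# Toy example (all data invented): a 'score' computed purely from
# historical group statistics. Bias baked into the data is reproduced,
# and nothing the individual does can change her score.

from statistics import mean

# Hypothetical repayment histories, already shaped by past bias,
# e.g. worse credit terms having been offered to group 'B'.
history = {
    "A": [1, 1, 1, 0, 1, 1],  # 5/6 repaid
    "B": [1, 0, 0, 1, 0, 1],  # 3/6 repaid
}
group_score = {group: mean(outcomes) for group, outcomes in history.items()}

def credit_score(group: str) -> float:
    """The applicant's own record never enters the calculation."""
    return group_score[group]

# Two applicants with identical individual finances:
print(credit_score("A"))  # 0.8333...
print(credit_score("B"))  # 0.5 -- the old pattern, now looking 'scientific'
```

Nobody in this pipeline 'intends' the unequal outcome, which is exactly why the usual points of legal attribution, intent and causation, fall away.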

The latter point gains additional traction because it is no longer true that the individual can simply choose to opt out of the use of most digital services. Partaking in the digital world, and thus the willingness to share one’s data on a daily basis, has become more a question of social access than of free choice. This again highlights the paradox of personal autonomy in the digital world: algorithmic profiling now counts among the prerequisites for our self-definition as autonomous subjects in today’s society. We pursue digital self-realization on a daily basis by voluntarily disclosing our data in the algorithmic sphere. And we insist on being free to do so and on not being condemned to anonymity and invisibility in social media. One might even go as far as to say that a new self-image of the digital person is emerging, one whose fundamental freedoms include voluntarily giving up her rights to privacy and autonomy to the tools of algorithmic communality.

So, what exactly is the paradox? As I alluded to above, the problem has more to do with the structure of freedom to which we condition ourselves in digital contexts than with any specific content. This structure is the outcome of a collective self-education, supported by all of us on a daily basis, towards a use of freedom that takes the autonomous decision to act as morally responsible beings out of our hands, while freedom itself, in a reduced, algorithmically streamlined form, becomes an object of consumption. In other words, we consume freedom and thereby unlearn the use of autonomy. Algorithmic profiling provides the perfect infrastructure for this self-reinforcing process of steadily declining autonomy in a world of ever-increasing digital choices. Ultimately, we will not succeed in drawing a precise line between the socially desirable ‘yes’ and the undesirable ‘no’ in the digital world, because the paradox of autonomy is precisely that it cannot stop regarding itself and its choices as autonomous, even in a context in which freedom is merely a product of algorithmic conditioning.

Marietta Auer is Director, Max Planck Institute for Legal History and Legal Theory, Frankfurt a.M.

This post is part of an OBLB series on Personalised Law—Law by Algorithm. The introductory post of the series is available here. Other posts in the series can be accessed from the OBLB series page.
