From ‘personal’ to ‘plausible’: redefining information in the age of AI

Ignacio Cofone

Artificial intelligence has been described as a ‘general purpose technology’ comparable to electricity or the internet in its scope and societal reach. For Ignacio Cofone, Professor of Law and Regulation of AI in the Faculty of Law, this has profound implications for legal systems. “AI is often seen as regulated through dedicated statutes like the EU AI Act,” he says. “But because it touches on almost every aspect of our lives, it also affects and is affected by many other areas of law, such as data protection and anti-discrimination.”

Professor Cofone’s current research focuses on two central challenges: how AI reshapes the meaning of personal information, and how it complicates longstanding legal approaches to equality. Both, he argues, reveal gaps between traditional legal definitions and the realities of modern AI systems.

Data protection law is built on the idea that information belongs to individuals and can be regulated accordingly. Different jurisdictions emphasise different terms – ‘personal data’ relating to a ‘data subject’ in Europe, or ‘personally identifiable information’ in the United States – but the principle is the same: information is worthy of protection if it identifies someone.

AI disrupts this assumption. “Most of the information AI uses is not personal information in the traditional sense,” says Professor Cofone. “It is about inferences. These can be about specific people, can be de-identified, or can be probabilistic information about groups – and often they fall outside existing protections.”

Social media apps like TikTok or Instagram offer a simple illustration. Without collecting conventional identifiers, their algorithms can quickly infer users’ preferences and characteristics by comparing their behaviour against vast datasets. For Professor Cofone, this points to the need for a shift in legal categories. “Instead of thinking about personal versus non-personal information,” he says, “we should be thinking about ‘plausible information’ based on algorithmic inferences – information that is commercially valuable and carries significant risks for privacy and fairness.”
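To make the point concrete, consider a minimal, hypothetical sketch of this kind of inference (it is not how TikTok or Instagram actually work, and every category, label and number below is invented for illustration). A new user is represented only by an anonymous vector of viewing behaviour, yet a trait can still be guessed by comparing that vector against patterns already seen in a larger dataset.

```python
import numpy as np

# Each row is an anonymous user: watch time per content category
# (sports, cooking, politics, gaming). No conventional identifiers are stored.
known_users = np.array([
    [0.9, 0.1, 0.0, 0.8],
    [0.8, 0.0, 0.1, 0.9],
    [0.1, 0.9, 0.7, 0.0],
    [0.0, 0.8, 0.9, 0.1],
])
# Traits previously inferred for those behavioural patterns (invented labels)
known_traits = ["likely under 30", "likely under 30", "likely over 50", "likely over 50"]

def infer_trait(behaviour):
    """Guess a trait for a new, unidentified user by cosine similarity
    between their behaviour vector and the existing dataset."""
    sims = known_users @ behaviour / (
        np.linalg.norm(known_users, axis=1) * np.linalg.norm(behaviour) + 1e-9
    )
    return known_traits[int(np.argmax(sims))]

# A brand-new user: no name, email or device ID collected,
# just a few minutes of viewing behaviour.
print(infer_trait(np.array([0.85, 0.05, 0.05, 0.9])))  # -> "likely under 30"
```

Even though nothing here identifies anyone in the traditional sense, the inferred trait is commercially valuable and privacy-relevant – which is precisely the gap the notion of ‘plausible information’ is meant to capture.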

This work builds on Professor Cofone’s book The Privacy Fallacy, in which he argues that existing privacy frameworks are ill-suited to the inferential, predictive environment of AI. His current project seeks to advance a concept of plausible information to fill that gap.

A second strand of Professor Cofone’s research examines how AI systems interact with anti-discrimination law. Algorithms can generate direct discrimination, but more often they produce indirect discrimination – disadvantaging certain groups without classifying people by legally protected demographic categories. 

This can lead to grey areas: employers using AI to help with recruitment, for instance, may be able to defend an algorithm’s discriminatory outputs and recommendations on the grounds of greater accuracy. “The question is whether we accept increased accuracy as a justification for discrimination,” says Professor Cofone. “If we always say yes, then the purpose of recognising indirect discrimination in the law fails, because equality protections give way to business efficiency. But if we always say no, we risk shutting down the use of AI, since some degree of bias is often inevitable.”

Professor Cofone’s research project on this topic, funded by the British Academy and the Leverhulme Trust, seeks to draw a line between cases where accuracy may justify bias and those where it should not. The aim is to guide courts on how to handle ‘business justification’ arguments in AI discrimination cases, creating precedents with positive social consequences.

A well-known example of algorithmic bias is COMPAS, a risk assessment tool used in the US criminal justice system to predict recidivism. Research has shown it significantly overestimates the likelihood of reoffending among Black and Hispanic men, while underestimating it for White men. “Machine learning systems optimise for whatever they are told to predict,” says Professor Cofone. “If you train them on arrest data, they will optimise for arrest outcomes – even though arrests are not neutral measures of crime. That means existing racial disparities are not just replicated but amplified through the use of the algorithm. And this could happen in any domain, from employment to university admissions.” 
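The mechanism Professor Cofone describes can be illustrated with a small, entirely synthetic sketch – it is not COMPAS’s model or data, and every variable and number below is invented. A classifier is trained on arrests rather than on offending itself; because enforcement in the simulated data is uneven across two hypothetical groups with identical underlying behaviour, the model assigns them different average ‘risk’.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # synthetic demographic group: 0 or 1
offending = rng.random(n) < 0.10       # true behaviour: identical 10% rate in both groups

# Arrest is the recorded label, but enforcement is uneven:
# group 1 offenders are arrested twice as often as group 0 offenders.
arrest_prob = np.where(group == 1, 0.60, 0.30)
arrested = offending & (rng.random(n) < arrest_prob)

# Features correlate with group membership (e.g. neighbourhood, prior contact
# with police), so the disparity can be learned without a 'group' column.
features = np.column_stack([
    group + rng.normal(0, 0.3, n),      # proxy feature correlated with group
    offending + rng.normal(0, 0.5, n),  # noisy signal of actual behaviour
])

# The model optimises for what it is told to predict: arrest, not offending.
model = LogisticRegression().fit(features, arrested)
risk = model.predict_proba(features)[:, 1]

# Same underlying offending rate, different predicted 'risk' by group:
print("mean predicted risk, group 0:", round(risk[group == 0].mean(), 3))
print("mean predicted risk, group 1:", round(risk[group == 1].mean(), 3))
```

Nothing in the features names a protected category, yet the disparity built into the training label flows straight into the predictions – in this toy setting, the choice of target variable matters as much as the choice of inputs.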

We also need to remember, adds Professor Cofone, that when we use generative AI “we aren’t dealing with a real person – and if we did think of it as a person, it would be an eager intern who wants to please you and is very diligent at doing tasks but might not always give you good advice.”

How, then, should law respond to such a widespread and unpredictable technology? Professor Cofone emphasises that there is no single, straightforward answer. “AI is an umbrella term for very different systems,” he says. “A model that reconstructs medieval manuscripts and an algorithm used to decide who will go to jail should not be treated the same way.”

AI challenges legal concepts we have long relied on. It forces us to revisit how we think about information, discrimination and fairness. The task for law is not only to regulate AI directly, but also to adapt the broader legal framework so that it continues to protect fundamental values in an AI-driven world.