AI and healthcare: the legal tensions behind technological advances
Artificial intelligence (AI) is increasingly being deployed in healthcare – from diagnostic tools that analyse medical scan images to triage systems that prioritise patients for treatment.
But these technologies are not neutral.
Biases embedded in medical AI can mean that some groups are more likely than others to be misdiagnosed or underserved. For Zoya Yasmine, a DPhil student in the Faculty of Law and an Oxford Thatcher Scholar at Somerville College, this raises urgent questions about the role of law in mitigating these risks.
“I’m looking at how the law interacts with biases in medical AI,” she explains. “Biases exist in technology, just as they do within healthcare systems more generally. But I want to see whether the law actually helps or hinders our ability to mitigate those biases. I focus on two areas of law in particular: intellectual property and data protection, and how these govern access to and control over the information needed to mitigate biases in medical AI.”
In one high-profile case in the United States, a patient triage algorithm used in hospitals recommended white patients for additional care twice as often as equally sick Black patients. “The reason was that the algorithm used an individual’s expenditure on care as a proxy for medical need,” says Zoya. “But because of financial barriers to accessing healthcare in the US, cost of care doesn’t map on to how sick someone really is.” An early chapter in Zoya’s doctoral thesis, meanwhile, will explore a UK-based case study of an AI tool already in use in the NHS to triage patients for suspected skin cancer. “The issue here is that the model was trained mostly on images of white skin,” she says, “so it can perform worse on patients with darker skin, where lesions and moles present differently.”
Because such AI systems are often built on proprietary technology, researchers can struggle to gain access to the underlying algorithms and scrutinise how they work. This scrutiny matters because medical AI has been shown to make predictions based on spurious correlations, as in the cost-of-care example above. For Zoya, this illustrates a wider problem:
“Companies often protect their datasets or the reasoning of their models as trade secrets. If developers know they never have to open them up, there’s less incentive to check for bias themselves early in the development process. It also means outsiders can’t audit or test them.”
Her thesis, supervised by Professors Ignacio Cofone and Dev Gangjee, will consider – among other things – how trade secret protections can clash with transparency requirements in data protection law. Under the UK’s implementation of the General Data Protection Regulation (GDPR), patients subject to automated medical decisions have a right to ‘meaningful information’ about the logic involved. But IP protections can restrict what companies disclose. “I look at the specific tensions within and between IP and data protection,” says Zoya. “In this context, patients might feel they’ve been wrongly diagnosed because of biases in the model, but without access to the relevant information, they have no way to challenge it. And developers are not required to build in mitigating measures to make the reasoning more transparent from the outset.”
Unlike the EU, which has passed a dedicated AI Act, the UK has taken what the previous government described as a “pro-innovation approach”, relying on existing regulators and sector-specific rules. Zoya argues that this makes it especially important to consider how established frameworks can be applied to medical AI – and remain meaningful.
“The GDPR is really powerful in this space, but we may not be enforcing it as much as we should, or providing enough AI-specific guidance,” she suggests. “Even with the EU AI Act, many of the obligations overlap with GDPR, which shows the regulation already has strong footing and relevance here.”
Working in collaboration with medical and technical researchers, Zoya aims to map out the points where data protection and IP law intersect and conflict, and to consider how these legal frictions have tangible effects in healthcare. She says: “Often IP and data protection are discussed separately, but in practice they collide. And when they collide, what we think the law is doing may not actually be happening in real-world settings.”
Zoya sees the law as a double-edged sword – on one hand obstructing efforts to address bias, and on the other providing essential safeguards that could require medical AI companies to change how they design and develop their technology. “A lot of thoughtful research is being done on how to mitigate bias and make medical AI more equitable,” she says. “But the law can prevent researchers from accessing the data they need, and currently it fails to oblige developers to make models more transparent.
“Ultimately, I want to identify the tensions between IP and data protection to show how biases in medical AI are not just a technical issue but are also shaped by a set of legal conditions that determine how AI is designed. These legal conditions aren’t just abstract: they have implications for the health of real people, in the real world.”