The Parity Model: a new framework for values-based AI decision-making
As artificial intelligence (AI) systems take on increasingly complex roles in society – from recruitment decisions to criminal justice assessments – serious questions arise about how these systems align with human values. Professor Ruth Chang, Chair and Professor of Jurisprudence in Oxford’s Faculty of Law, is leading a pioneering new research project that addresses one of the most critical challenges in AI development: how machines handle difficult choices.
Supported by a UKRI Systemic AI Safety grant, Professor Chang’s project aims to fundamentally reshape how AI systems are designed to process value-laden decisions. At the heart of her research is the ‘Parity Model’, a novel philosophical and mathematical framework that departs from traditional models of rational choice.
“We’re witnessing a juggernaut of AI development that’s deeply troubling,” says Professor Chang, who is also a Professorial Fellow at University College, Oxford.
“AI is increasingly being used to make normative, evaluative decisions – like determining fair sentencing or allocating healthcare – but its foundational assumptions about value and choice are flawed.”
Most AI systems today rely on what Professor Chang terms the ‘trichotomy assumption’ – that when comparing two options, one must be better than, worse than, or equal to the other. This framework, rooted in ancient numerical methods of value comparison, is ill-suited to many real-world situations where decisions are complex and multidimensional.
In contrast, the Parity Model recognises that many important human decisions are not suitable for simple ranking. Choosing between two differently qualified job candidates or between fundamentally different life paths, for instance, may involve options that are neither better nor worse nor equal, but rather ‘on a par’. These are what Professor Chang calls ‘hard choices’, where each option is better in some respects but worse in others.
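To make the contrast concrete, here is a minimal, purely illustrative Python sketch of a four-valued comparison. It is not Professor Chang’s formalism: the `Comparison` enum, the criteria dictionaries and the simple dominance test are assumptions introduced for illustration, standing in for whatever mathematics the Parity Model actually uses.

```python
from enum import Enum, auto
from typing import Mapping


class Comparison(Enum):
    """Possible outcomes when two options are compared on overall value."""
    BETTER = auto()
    WORSE = auto()
    EQUAL = auto()
    ON_A_PAR = auto()  # neither better, worse, nor equal


def compare(a: Mapping[str, float], b: Mapping[str, float]) -> Comparison:
    """Compare two options scored on the same criteria.

    A toy dominance test: ON_A_PAR is returned when each option wins on
    some criteria and loses on others, so neither dominates the other.
    """
    a_wins = any(a[k] > b[k] for k in a)
    b_wins = any(b[k] > a[k] for k in a)
    if a_wins and not b_wins:
        return Comparison.BETTER
    if b_wins and not a_wins:
        return Comparison.WORSE
    if not a_wins and not b_wins:
        return Comparison.EQUAL
    return Comparison.ON_A_PAR


# Two differently qualified candidates, each stronger on a different criterion:
compare({"experience": 8, "creativity": 5}, {"experience": 5, "creativity": 8})
# -> Comparison.ON_A_PAR
```

On this toy test, candidates who each beat the other on a different criterion come out ‘on a par’ rather than being forced into a ranking – the kind of case a trichotomous system would have to resolve arbitrarily.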
To address this, Professor Chang’s research team – which includes robotics expert and grant co-lead Professor Nick Hawes, postdoctoral engineering researcher Dr Luigi Bonassi, engineer James Wilson of Dyson, benchmarking expert Oishi Deb, engineering student Lukang Guo, DeepMind’s Dr Sian Gooding, and logician and philosopher Professor Kit Fine – is developing computational tools to identify and process hard choices within AI systems. The aim is to build decision-making models that explicitly incorporate human input, allowing values to be clarified, committed to, and evolved through interaction. Professor Chang believes that an interactive model of this type could be important for value alignment in decision-making areas that are rife with hard choices, such as law, health, business, politics and governance.
One area of particular focus is recruitment. “Traditional, rule-based AI systems might work well here,” explains Professor Chang, “because they can be programmed with evaluative criteria, and our model ensures that when a hard choice arises, the system flags it for human engagement.” Instead of forcing a binary decision, the model invites human agents – such as hiring committees – to step in, clarify what values matter most in the situation, and help shape not only the outcome but the algorithm going forward.
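Building on the toy sketch above, a shortlisting step might defer to humans only when parity is detected. This too is a hypothetical illustration rather than the project’s actual system; `screen_pair` and `ask_committee` are invented names for the human-engagement step the quote describes.

```python
from typing import Callable, Dict


def screen_pair(a_scores: Dict[str, float],
                b_scores: Dict[str, float],
                ask_committee: Callable[[], str]) -> str:
    """Decide between two candidates, deferring to humans on hard choices.

    `ask_committee` is a placeholder for the human-engagement step: it is
    expected to return "A" or "B" once the committee has clarified which
    values matter most in this hire.
    """
    outcome = compare(a_scores, b_scores)
    if outcome is Comparison.ON_A_PAR:
        # Hard choice: flag it for human input instead of forcing a binary decision
        return ask_committee()
    if outcome is Comparison.WORSE:
        return "B"
    return "A"  # BETTER or EQUAL: the system can resolve this itself
```

The design point is the asymmetry: routine comparisons are handled automatically, while choices the model identifies as hard are routed back to the people whose values should settle them.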
Another strand of the research explores how large language models (of the type used in tools such as ChatGPT) could help individuals navigate personal hard choices. The team is prototyping an app that allows users to input choices they face, identifies whether the situation constitutes a hard choice, walks them through the values at stake, and then elicits from them possible commitments that can resolve the choice.
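How such an app might stage that dialogue can be sketched, very loosely, as three prompts to a language model. Everything here is hypothetical: `llm` stands in for whichever model the prototype actually calls, and the prompts only gesture at the identify–clarify–commit flow described above.

```python
def hard_choice_walkthrough(choice_description: str, llm) -> dict:
    """Staged dialogue for a personal hard choice.

    `llm` is any callable mapping a prompt string to a text response
    (a stand-in for the large language model the app would use).
    """
    # 1. Is this a hard choice (options on a par), or is one option simply better?
    verdict = llm("Is this a hard choice where the options are on a par, "
                  "or is one option clearly better? " + choice_description)

    # 2. Surface the values at stake on each side of the choice.
    values = llm("List the values at stake on each side of this choice: "
                 + choice_description)

    # 3. Elicit commitments that could resolve the choice, rather than
    #    pretending one option was better all along.
    commitment = llm("Given these values (" + values + "), what commitments "
                     "could the person make that would resolve the choice?")

    return {"verdict": verdict, "values": values, "commitment": commitment}
```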
This approach to AI design is not just academically innovative – it is, argues Professor Chang, essential for AI safety:
“The real danger is not just bias or technical failure. It’s that we are building systems that misrepresent human values by trying to approximate them using surface-level data points.”
Professor Chang also cautions that conventional regulatory approaches may be insufficient. She says: “Law is often too slow and reactive to keep pace with technological change. We need to change AI at the design level. Regulation can’t do the job alone.”
While the current project is relatively modest in scale – a one-year initiative using simplified ‘toy’ models – the ambition is to spark further research and, ultimately, bring about a reorientation of how AI systems interact with the human condition.
A second project involving Professor Chang, still in the early stages of development, brings together senior military leaders from the United States Department of Defense and the UK Ministry of Defence. It is being led by Bryce Goodman, Chief Innovation Officer at the US Department of Defense, and also includes Professor Stuart Russell (one of the ‘godfathers’ of AI), leading computer scientist Ryan Lowe (formerly of OpenAI), and Professor Aaron ‘Blair’ Wilcox (Army War College).
The idea behind the project is to develop personalised algorithms for senior military commanders that recognise the hard choices they face in both war games and field operations. The algorithms will help commanders navigate these choices by reference to values rather than preferences or command protocols. The aim is also to uncover differences in military decision-making on the two sides of the Atlantic.
Professor Chang welcomes contact from engineers, neuroscientists, cognitive scientists, computer scientists and funders interested in collaborative work on the theme of AI, values and decision-making. She can be emailed at ruth.chang@law.ox.ac.uk.