
Algorithms, Correcting Biases

Author(s)

Cass R. Sunstein
Robert Walmsley University Professor, Harvard University


Are algorithms biased? In what respect?

These are large questions, and there are no simple answers. My goal here is to offer one perspective on them, principally by reference to some of the most important current research on the use of algorithms for purposes of public policy and law. I offer two claims. The first, and the simpler, is that algorithms can overcome the harmful effects of cognitive biases, which can have a strong hold on people whose job it is to avoid them, and whose training and experience might be expected to allow them to do so. Many social questions present prediction problems, where cognitive biases can lead people astray; algorithms can be a great help.

The second claim, and the more complex one, is that algorithms can be designed so as to avoid racial (or other) discrimination in its unlawful forms; at the same time, they raise hard questions about how to balance competing social values. When people complain about algorithmic bias, they are often concerned with race and sex discrimination. It should be simple to ensure that algorithms do not discriminate in the ways that most legal systems forbid. It is less simple to deal with forms of inequality that concern many people, including an absence of ‘racial balance’. As we shall see, algorithms allow new transparency about some difficult tradeoffs.

The principal research on which I will focus comes from Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan, who explore judges’ decisions whether to release criminal defendants pending trial. Their goal is to compare the performance of an algorithm with that of actual human judges, with particular emphasis on the solution of prediction problems. It should be obvious that the decision whether to release defendants has large consequences. If defendants are incarcerated, the long-term consequences can be very severe. Their lives can be ruined. But if defendants are released, they might flee the jurisdiction or commit crimes. People might be assaulted, raped, or killed.

In some places, the decision whether to allow pretrial release turns on a single question: flight risk. To answer that question, judges have to solve a prediction problem: What is the likelihood that a defendant will flee the jurisdiction? In other places, the likelihood of crime also matters, and it too presents a prediction problem: What is the likelihood that a defendant will commit a crime? (As it turns out, flight risk and crime are closely correlated, so that if one accurately predicts the first, one will accurately predict the second as well.) Kleinberg and his colleagues build an algorithm that uses, as inputs, the same data available to judges at the time of the bail hearing, such as prior criminal history and current offense. Their central finding is that along every dimension that matters, the algorithm does much better than real-world judges.
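To make the prediction problem concrete, here is a minimal sketch in Python of the kind of exercise described above. It is not the authors’ model or data: the feature names, the synthetic outcomes, and the choice of a gradient-boosted classifier are stand-ins, intended only to show what predicting risk from the data available at the bail hearing looks like in code.

```python
# A minimal, synthetic sketch of a pretrial risk prediction model.
# NOT the authors' model or data; feature names and outcomes are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical inputs available at the bail hearing:
# prior arrests, prior failures to appear, current charge severity (0-3), age.
X = np.column_stack([
    rng.poisson(2.0, n),      # prior_arrests
    rng.poisson(0.5, n),      # prior_failures_to_appear
    rng.integers(0, 4, n),    # current_charge_severity
    rng.integers(18, 70, n),  # age
])

# Synthetic outcome (flight or pretrial crime), driven mostly by criminal
# history rather than by the current charge alone.
logit = 0.4 * X[:, 0] + 0.9 * X[:, 1] + 0.2 * X[:, 2] - 0.02 * X[:, 3] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a simple risk model and score held-out defendants.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
risk_scores = model.predict_proba(X_test)[:, 1]  # predicted risk for each defendant
```

The point is only that the model sees the same kinds of inputs a judge would see; the interesting questions arise in how its scores are turned into decisions.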

Among other things, use of the algorithm could maintain the same detention rate now produced by human judges and reduce crime by up to 24.7 percent!

Alternatively, use of the algorithm could maintain the current level of crime reduction and reduce jail rates by as much as 41.9 percent. That means that if the algorithm were used instead of judges, thousands of crimes could be prevented without jailing even one additional person. Alternatively, thousands of people could be released, pending trial, without adding to the crime rate. It should be clear that use of the algorithm would allow any number of political choices about how to balance decreases in the crime rate against decreases in the detention rate.
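That political choice can be made vivid by sweeping the algorithm’s detention threshold. Continuing the synthetic sketch above (re-using risk_scores and y_test), the following traces a simple frontier: each threshold yields a detention rate and a residual rate of flight or crime among those released. The numbers are illustrative only.

```python
# Continuing the synthetic sketch: trace the detention-rate / crime-rate frontier
# by varying the risk threshold above which defendants are detained.
import numpy as np

def frontier(risk_scores, outcomes, thresholds):
    rows = []
    for t in thresholds:
        detained = risk_scores >= t
        detention_rate = detained.mean()
        # Flight/crime can only come from defendants who were released.
        residual_crime_rate = outcomes[~detained].sum() / len(outcomes)
        rows.append((t, detention_rate, residual_crime_rate))
    return rows

for t, det, crime in frontier(risk_scores, y_test, np.linspace(0.1, 0.9, 9)):
    print(f"threshold={t:.1f}  detention_rate={det:.2f}  residual_crime_rate={crime:.3f}")
```

Each row is one possible policy; choosing among them is exactly the kind of political judgment described above.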

A full account of why the algorithm outperforms judges would require an elaborate treatment. But for my purposes here, a central part of the explanation is particularly revealing: judges do poorly with the highest-risk cases. (This point holds for the whole population of judges, not merely for those who are most strict.) The reason is an identifiable bias; call it Current Offense Bias:

As it turns out, judges make two fundamental mistakes. First, they treat high-risk defendants as if they are low-risk when their current charge is relatively minor (for example, it may be a misdemeanor). Second, they treat low-risk people as if they are high-risk when their current charge is especially serious. The algorithm makes neither mistake. It gives the current charge its appropriate weight, considering that charge in the context of other relevant features of the defendant’s background and neither overweighting nor underweighting it. The fact that judges release a number of the high-risk defendants is attributable, in large part, to overweighting the current charge (when it is not especially serious).
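One way to see Current Offense Bias is to compare, on the same synthetic data, a ‘judge-like’ rule that keys only on the seriousness of the current charge against the full-feature risk model, holding the share detained fixed. The rule below is hypothetical and deliberately crude; it is meant only to show how the two mistakes arise.

```python
# Continuing the synthetic sketch: a crude rule that looks only at the current
# charge, versus the full-feature model, detaining the same share of defendants.
import numpy as np

charge_severity = X_test[:, 2]               # current_charge_severity column
heuristic_detain = charge_severity >= 2      # detain only on a serious current charge
share = heuristic_detain.mean()
model_detain = risk_scores >= np.quantile(risk_scores, 1 - share)

# Mistake 1: high-risk defendants released because the current charge is minor.
released_but_high_risk = (~heuristic_detain) & model_detain
# Mistake 2: low-risk defendants detained because the current charge is serious.
detained_but_low_risk = heuristic_detain & (~model_detain)

print("released despite high predicted risk:", round(released_but_high_risk.mean(), 3))
print("detained despite low predicted risk: ", round(detained_but_low_risk.mean(), 3))
```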

There is a broader lesson here. If the goal is to make accurate predictions, algorithms can be a great boon for precisely that reason: for both private and public institutions (including governments all over the world), they can eliminate the effects of cognitive biases.

The possibility that algorithms will promote discrimination on the basis of race and sex raises an assortment of difficult questions. But the bail research casts new light on them. Above all, it suggests a powerful and simple point: Use of algorithms will reveal, with great clarity, the need to make tradeoffs between the value of racial (or other) equality and other important values, such as public safety.

Importantly, the algorithm is made blind to race. Whether a defendant is black or Hispanic is not one of the factors that it considers in assessing flight risk. But with respect to outcomes, how does the algorithm compare to human judges?

The answer, of course, depends on what the algorithm is asked to do. If the algorithm is directed to match the judges’ overall detention rate, its numbers, with respect to race, look quite close to the corresponding numbers for those judges. Its overall detention rate for blacks and Hispanics is 29 percent, with a 32 percent rate for blacks and a 24 percent rate for Hispanics. At the same time, the crime rate drops, relative to judges, by a whopping 25 percent. It would be fair to say that on any view, the algorithm is not a discriminator, at least not when compared with human judges.
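The kind of comparison being made here can be sketched as an audit on the synthetic example. Below, `group` is a hypothetical protected attribute generated only for the audit; it is never given to the model, mirroring the race-blindness just described. The 29 percent overall detention rate follows the text; everything else is invented.

```python
# Continuing the synthetic sketch: audit detention rates by a protected attribute
# that the model never saw as an input.
import numpy as np

rng_audit = np.random.default_rng(1)
group = rng_audit.integers(0, 2, len(risk_scores))  # 1 = audited group, 0 = everyone else

overall_rate = 0.29                                  # match the judges' detention rate
threshold = np.quantile(risk_scores, 1 - overall_rate)
detained = risk_scores >= threshold

for g in (0, 1):
    print(f"group {g}: detention rate {detained[group == g].mean():.3f}")
```

Because `group` here is random noise, the audited rates will come out close to 29 percent for both groups; in real data they need not, which is exactly why the audit matters.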

The authors show that it is also possible to constrain the algorithm to see what happens if we aim to reduce that 29 percent detention rate for blacks and Hispanics. Suppose that the algorithm is constrained so that the detention rate for blacks and Hispanics has to stay at 28.5 percent. It turns out that the crime reduction is about the same as would be obtained with the 29 percent rate. Moreover, it would be possible to instruct the algorithm in multiple different ways, so as to produce different tradeoffs among social goals. A particularly revealing finding: if the algorithm is instructed to produce the same crime rate that judges currently achieve, it will jail 40.8 percent fewer blacks and 44.6 percent fewer Hispanics. It will do this because it will detain many fewer people, focused as it is on the riskiest defendants; many blacks and Hispanics will benefit from its more accurate judgments.
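A constrained version can be sketched in the same way: choose a group-specific threshold so that the group’s detention rate hits a target (28.5 percent, to mirror the example in the text), leave the rest of the population at roughly the judges’ 29 percent, and then look at the residual crime rate. These are illustrative numbers on synthetic data, not the authors’ procedure.

```python
# Continuing the synthetic sketch: impose a target detention rate for one group
# and observe the consequences for the residual crime rate.
import numpy as np

def threshold_for_rate(scores, target_rate):
    # Detain the top `target_rate` share of this group's risk distribution.
    return np.quantile(scores, 1 - target_rate)

t_group = threshold_for_rate(risk_scores[group == 1], 0.285)  # constrained group
t_rest = threshold_for_rate(risk_scores[group == 0], 0.29)    # everyone else

detained = np.where(group == 1, risk_scores >= t_group, risk_scores >= t_rest)
residual_crime_rate = y_test[~detained].sum() / len(y_test)

print(f"constrained group detention rate: {detained[group == 1].mean():.3f}")
print(f"residual crime rate: {residual_crime_rate:.3f}")
```

Different constraints produce different rows in the same kind of table, which is the sense in which the algorithm makes the tradeoffs explicit.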

The most important point here may not involve the particular numbers, but instead the clarity of the tradeoffs. The algorithm would permit any number of choices with respect to the racial composition of the population of defendants denied bail. It would also make explicit the consequences of those choices for the crime rate.

There is no assurance, of course, that algorithms will avoid cognitive biases. They could be built so as to display them. The only point is that they can also be built so as to improve on human decisions, precisely because they can be made free of those biases.

The problem of discrimination is different and far more complex, and I have only scratched the surface here, with reference to one set of findings. It is important to ensure that past discrimination is not used as a basis for further discrimination. But a primary advantage of algorithms is unprecedented transparency: they will force people to make judgments about tradeoffs among compelling but perhaps competing policy goals. Algorithms are sometimes challenged for being obscure and impenetrable. But they can easily be designed to ensure far more transparency than we often get from human beings.

Cass R. Sunstein is the Robert Walmsley University Professor at Harvard University.

This post is based on the author’s research paper ‘Algorithms, Correcting Biases’, forthcoming in Social Research.

