More than a half century after civil rights activists pioneered America’s first ridesharing network during the Montgomery Bus Boycott, the connections between transportation, innovation, and discrimination are again on full display. Industry leaders such as Uber, Amazon, and Waze have garnered widespread acclaim for successfully combating stubbornly persistent barriers to transportation. But alongside this well-deserved praise has come a new set of concerns. Indeed, a growing number of studies have uncovered troubling racial disparities in wait times, ride cancellation rates, and service availability at companies including Uber, Lyft, TaskRabbit, Grubhub, and Amazon Delivery. The combined weight of the evidence suggests a cautionary tale: the same technologies capable of combating modern discrimination also appear capable of producing it.

Surveying the methodologies employed by these studies reveals a subtle, but vitally important, commonality. All of them measure discrimination at a statistical level, not an individual one. As a structural matter, this is no coincidence. As the world transitions to an increasingly algorithmic society, all signs suggest we are leaving traditional brick-and-mortar establishments behind for a new breed of data-driven ones. In doing so, we are taking discretion out of the hands of individual decision-makers and putting it into the hands of algorithms. This transfer holds genuine promise of alleviating the kinds of overt prejudice that would have been familiar to activists in the Civil Rights Era of the 1960s. But it also means that discrimination itself will go digital. And when it does occur, it will manifest—almost by definition—at a macroscopic scale.

Why does this seemingly trivial distinction between in-person and statistical discrimination matter? Because not all of America’s civil rights laws cognize statistically based discrimination claims. And as it so happens, Title II of the Civil Rights Act of 1964—one of the country’s most canonical statutes—may be among them. Today, a tentative consensus holds that certain major civil rights statutes do not extend to claims of ‘discriminatory effect’ grounded in statistical evidence. But, more than a half century after Title II’s passage, it remains genuinely unclear whether the statute falls within that group.

My article ‘Title 2.0: Discrimination Law in a Data Driven Society’ begins to explore the implications of this doctrinal uncertainty in a world where statistically based claims are likely to be pressed against data-driven establishments with increasing regularity. The goals of the article are twofold. First, it seeks to build upon adjacent scholarship by fleshing out the specific structural features of emerging business models that make Title II’s cognizance of ‘disparate effect’ claims so urgent. In doing so, it argues that it is not the ‘platform economy’ per se that poses an existential threat to the statute but something deeper. The true threat, to borrow Lawrence Lessig’s framing, is architectural in nature. It is the algorithms underlying ‘platform economy businesses’ that are of greatest doctrinal concern—regardless of whether such businesses operate inside the platform economy or outside it. Second, the article joins others in calling for policy reforms focused on modernizing our civil rights canon. It argues that our transition from the ‘Internet Society’ to the ‘Algorithmic Society’ will demand that Title II receive a doctrinal update. If it is to remain relevant in the years and decades ahead, Title II must become Title 2.0.

Bryan Casey is a Lecturer at Stanford Law School and a Fellow at the Center for Automotive Research at Stanford.