The ‘invisible hand’ has become an icon for an economic philosophy which assumes that a laissez-faire governmental policy will produce optimal social and financial outcomes, because market forces drive the market toward a socially efficient equilibrium. Since the New Deal, this approach has been in tension with another philosophy, which emphasizes the role of regulation in correcting market failures that prevent markets from functioning smoothly. In my recent paper, ‘The Invisible Hand, the Regulatory Touch or the Platform’s Iron Grip?’, I discuss a third type of market regulation—that of the multi-sided platform.
Multi-sided platforms are designed specifically to control and carefully craft parties’ underlying transactions. In the modern era, data-savvy platforms collect information on every participant and transaction; control entrance, information flow and competition design; assist in searching and matching; set the contractual terms (and sometimes even price) of a finalized transaction; allocate risk between parties; and are often involved in enforcing the transaction or resolving disputes. In fact, the existence of the platform is often justified precisely because of this regulatory role, and the efficiencies such platforms are able to create.
While ‘benevolent’ regulators aspire to increase social welfare by promoting the (sometimes conflicting) goals of competition, safety, quality, privacy, access, justice, fairness and distribution, ‘platform regulators’ are driven by a profit-maximizing goal. The paper asks when these goals align, so that platforms match, and often even outperform, government regulators, and when platforms’ incentives are misaligned or their powers misused, creating the need for policy intervention. The laissez-faire assumption of refraining from government intervention in markets thus takes on a new meaning: it leaves the market to be governed not by the invisible hand of competition, but by the iron grip of the platform, which closely monitors and designs every aspect of its underlying parties’ interactions. Policy regarding such innovative activity should therefore ask when regulators should cede their role to private actors and technological platforms.
Rather than asking whether Uber or its drivers should be regulated as a taxi company, or whether Airbnb should be regulated as a hotel, policymakers should instead ask whether Uber can replace the Department of Transportation and Airbnb the National Tourism Office. While no one claims such companies are as benevolent as governmental regulators, they are driven by something that at times can be far more powerful: profits. Profits, in turn, are influenced by reputation, consumer trust, efficiency and growth, which are often closely aligned with regulatory goals. Additionally, unlike public regulators, these potential private regulators have significant resources, are data-driven and are not limited in their ability to conduct market research and experimentation, enabling them to adopt and amend policies rapidly.
On the other hand, policies adopted by these platforms may not address risks or quality concerns that are less salient to the average consumer and therefore have less impact on the platform’s reputation. Furthermore, platforms may be indifferent to externalities and to fairness and distributional concerns; they have limited obligations of transparency and due process; and they may have an interest in designing competition between participants, and between themselves and other platforms, in a manner that increases their market share and profit margins.
When platforms’ profit maximization goals fully align with public welfare, such platforms are expected to be extremely efficient private regulators, often easily surpassing government actors. At other times, platforms can use their power to design the market to their benefit and to the detriment of participants or society, especially if markets consolidate due to network effects.
Platforms are greatly influenced by trust and reputation, which can act as significant forces in aligning platforms’ interests with public welfare. Reputation, in turn, is driven by salience and transparency on the platform. This applies both to whether users can perceive and obtain information about the features a platform offers, and to whether the platform’s externalities and its social and distributional outcomes are salient to the public at large, driving platforms to internalize such considerations. When certain aspects are less salient, whether because of consumers’ behavioral biases or because of limited information about the platform’s design and actions, those aspects cannot be expected to drive the platform’s regulatory policy.
But trust and reputation are effective catalysts only when platforms are subject to competitive pressures on all sides of the platform. As network effects lead markets to consolidate, and if platforms adopt policies that lock in users, limiting their use of other platforms or off-platform alternatives, platforms’ alignment with the public interest is likely to wane. Moreover, if competition is strong on only some sides of the platform, policies will cater to the interests of one side over the others, and regulatory arbitrage may lead to a regulatory race to the bottom. Regulators should therefore pay close attention to practices that curb competition between platforms, and to features that are less salient to users, whether because of cognitive biases or because of limited transparency about the platform’s practices.
Aluma Zernik is an S.J.D. Candidate at Harvard Law School and a Terence M. Considine Fellow at the John M. Olin Center for Law, Economics and Business.