Is there any limit to online platforms’ discretionary power to terminate accounts or remove content?

The fierce debate over online platforms’ discretionary power to terminate users’ accounts and remove content has primarily focused on free speech ramifications and the constitutional restraints on top-down legal interventions. Driven by profit, platforms seek to ensure that their digital services are aligned both with users’ expectations and with the interests of advertisers, as the Frances Haugen revelations have underlined. Steps taken in response to mounting pressure to tackle disinformation during the Covid-19 health crisis, and in the wake of the 2020 US presidential election, are recent examples.

While platforms’ suspension and removal decisions often trigger questions situated in public law, they also raise important challenges for private law. By cutting off the livelihood of small businesses, independent creators and political activists, termination and removal decisions may inflict irreparable financial and reputational harm. This was the case in Lewis and Mishiyev, whose complaints were rejected by US courts.

From a private law perspective, content moderation and account suspension disputes are governed by the contract between the stakeholders in social media, which is typically composed of the Terms of Service (ToS) and community guidelines. Users of digital platforms may be small businesses, professional or amateur creators, political activists, or individuals with vested interests in online communication and exchange. An indefinite suspension of a user account disconnects users from their followers, depriving them of the financial rewards or reputational gains for which they have laboured.

Users of social media platforms are important stakeholders in the platform economy. The economic value in social media is generated through the intermingling of interdependent users. Platforms operating in multisided markets harvest data on users and extract revenues from selling users’ profiles for targeted advertising, or other data-driven products and services. Within this economic ecosystem, users play multiple roles. They are both consumers of services supplied by the platforms (under a vertical contract), and also providers of content, supplying added value that shapes the (horizontal) expectations of other users. Users’ content and interactions attract additional users and deepen their engagement, thus broadening the network and lengthening the time spent on social media. Consequently, while social media platforms generate their profits from advertising, it is users who provide the bricks from which the platforms build their business model.

Notwithstanding the interdependency between platforms and users, platforms’ business interests may shift over time, in ways that may not align with users’ interests and expectations, nor with the common goals agreed upon in the contract. This is especially the case when a handful of social media platforms dominate the online conversation, undermining the mitigating power of competitive pressures.

So far, particularly in the US, contractual claims raised by users against platforms over content moderation practices have mostly been rejected. Courts simply focus on the explicit ToS, which most often grant platforms unlimited discretion to remove users' content or terminate their accounts, and dismiss users' allegations.

However, the ToS alone do not reflect the real contractual relationship underlying social media, which also rests on the parties’ mutual expectations. These mutual expectations define additional rights and obligations, beyond those affording platforms unlimited removal power under the ToS. Interpreting the contractual relationship between platforms and users as establishing bilateral/vertical obligations only therefore undermines the true intention of the contracting parties and overlooks the plethora of commitments and obligations that such contracts create for multiple stakeholders.

To overcome this blind spot in current contractual analysis, we offer courts an interpretive framework for addressing contractual claims involving digital platforms. Platforms’ contracts should be interpreted as contractual networks: a complex system of interrelated contracts enabling coordination without vertical integration. Users of social media platforms, we argue, collaborate in creating the shared economic and social value that social media generates. By framing the contractual relationship between platforms and users as a contractual network, courts are called to go beyond bilateral contracts and consider the complexity characterizing the relationship between users and platforms. Specifically, courts should consider whether the exercise of the power to remove content or suspend accounts meets the contractual expectations of the network’s members and advances the network’s common goals.

This approach to contract interpretation may facilitate a bottom-up check on content moderation via private ordering, thus increasing platforms’ accountability. Specifically, if users could effectively raise contractual claims against platforms and hold them accountable for capricious, biased, or unfair removal decisions, they could pressure platforms to align content moderation policies with the shared interests of the community of users. To that end, contract law could empower users by offering a decentralized and diversified check over the platforms’ content moderation practices. Holding platforms accountable for content moderation practices via private ordering could also facilitate more diversity and exploration, enabling the emergence of different models for moderating digital content and promoting a more pluralist public discourse.
Niva Elkin-Koren is a Professor of Law at Tel-Aviv University Faculty of Law and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University.

Giovanni De Gregorio is a Postdoctoral Researcher working with the Programme in Comparative Media Law and Policy at the Centre for Socio-Legal Studies, University of Oxford. 

Maayan Perel is Assistant Professor at the Netanya Academic College School of Law and a research fellow at the Center for Cyber Law and Policy, University of Haifa.