
We are witnessing the growth of online markets and a change in our purchasing patterns. People are opting for the convenience of online shopping. Advances in technology have seemingly increased our choices and opened markets to competition. We get more of what we desire at lower prices and better quality.

While these technological innovations have benefited us, in Virtual Competition we explore several emerging threats: algorithm-driven collusion, behavioural discrimination and abuses by dominant super-platforms.

One interesting characteristic of an online dystopia is its stealth. Granted, in the brick-and-mortar world we seldom knew when manufacturers colluded. But we did know of cruder forms of price discrimination (e.g. adults paying more than children and seniors), and we put up with monopolies’ inferior service and high prices. In the algorithm-driven world, we will often be unaware of camouflaged abuses. We are unlikely to know when the digitised hand displaces the invisible hand of competition. What appears competitive may be nothing more than a controlled and manipulated personalised environment, much like in the movie The Truman Show, where ignorance is bliss.

But as we migrate from brick-and-mortar shops to online commerce, the anticompetitive harm will not be solely economic. It will go beyond our pocketbooks, taking its toll on our privacy, well-being and democracy.

To see why, consider the emerging frontier of digital personal assistants. These AI-driven platforms will connect and control the smart technologies in our home, play the music we want, and respond to information requests. In 2016, Google showed a video of a suburban family going through its morning wakeup routine: 'The dad made French press coffee while telling Google to turn on the lights and start playing music in his kids’ rooms. The mom asked if ‘my package’ had shipped. It did, Google said. The daughter asked for help with her Spanish homework.' As artificial intelligence and communication interfaces advance, digital personal assistants will offer an unparalleled personalised experience. Our time will be too important to worry over life’s little details. As the digital butler seamlessly provides more of what interests us and less of what doesn’t, we will grow to like and trust it. Communicating in our preferred language, our assistant will develop the ability to anticipate and fulfil our needs and requests. It can do so based on our connections, data profile, behaviour, and so forth.

Digital assistants have the potential to displace the current super-platforms, namely Google, Apple, Facebook and Amazon. Not surprisingly, each of these companies is now seeking to become our digital personal assistant. The winner will become our primary interface.

The existence of a handful of leading online gatekeepers provides fertile ground for the manipulation of our economic decisions. Their power is not solely linked to the number of users; even more, it rests on their ubiquity, namely their ability to collect personal data about us. The more data they have about us, the better their algorithms can predict our wants and needs, target us with (and induce us to buy) goods and services, and estimate the maximum amount we are willing to pay.
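For the technically curious reader, the sketch below shows how little machinery such personalised pricing requires. It is purely illustrative: the profile fields, weights and pricing rule are invented for this post, and no actual platform's model is being described.

```python
# A deliberately simplified illustration of behavioural price discrimination.
# Every field, weight and threshold here is hypothetical; real platforms use
# far richer data and machine-learned demand models.

from dataclasses import dataclass

@dataclass
class UserProfile:
    past_purchases: int       # how often the user has bought similar goods
    price_sensitivity: float  # 0.0 (insensitive) to 1.0 (highly sensitive)
    device_premium: bool      # e.g. browsing from a high-end device

BASE_PRICE = 100.0  # the uniform price a brick-and-mortar shop might charge

def estimated_reservation_price(profile: UserProfile) -> float:
    """Estimate the most this user would plausibly pay (a toy heuristic)."""
    estimate = BASE_PRICE
    estimate += 2.0 * profile.past_purchases             # revealed demand
    estimate *= (1.5 - 0.5 * profile.price_sensitivity)  # haggling behaviour
    if profile.device_premium:
        estimate *= 1.1                                  # crude income proxy
    return estimate

def personalised_quote(profile: UserProfile) -> float:
    """Quote just under the estimated reservation price, never below a floor."""
    return max(BASE_PRICE * 0.8,
               round(estimated_reservation_price(profile) * 0.95, 2))

# Two users see different prices for the identical product.
loyal = UserProfile(past_purchases=20, price_sensitivity=0.1, device_premium=True)
bargain_hunter = UserProfile(past_purchases=2, price_sensitivity=0.9, device_premium=False)
print(personalised_quote(loyal))           # higher quote
print(personalised_quote(bargain_hunter))  # lower quote
```

The point is not the particular numbers but the logic: the richer the profile, the closer each quoted price can creep towards that buyer's own reservation price.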

But how can this economic power translate into political power? From social networks, such as Facebook, to leading search engines, these gatekeepers have become an essential, integrated part of our daily routine. We increasingly rely on the super-platforms for news and entertainment. The super-platforms do not themselves report the news, but many people rely on their algorithms to find news of interest. One 2015 study found that 61 percent of Millennials in the United States (those born between 1981 and 1996) were 'getting political news on Facebook in a given week', a much larger percentage than for any other news source. A 2016 study found that Facebook 'sends by far the most mobile readers to news sites of any social media sites': 82 percent of the social traffic to longer news stories and 84 percent of the social traffic to shorter news articles. Amazon sells more books in the U.S. than any other retailer, and its algorithms, by directing our attention to particular books, can affect what we read.

We are thus increasingly reliant on these gatekeepers’ and their algorithms’ view of the world. One article recently asked whether the propagation of fake news before the 2016 U.S. election was an antitrust problem. The fake news problem arose after Facebook implemented product changes that deterred its users from clicking on external news links and encouraged them to rely instead on its Instant Articles. Granted, Facebook did not author the fake news stories. But it can manipulate what its 1.8 billion users can easily see (and not see).

Take another example. In 2012, Facebook conducted a study in which it manipulated some users’ News Feeds to examine how people transmit positive and negative emotions to others. When Facebook surreptitiously reduced positive content in the News Feed, the users’ own status updates became less positive; when it surreptitiously reduced friends’ negative content, the users themselves became less negative.
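Mechanically, such an intervention can be strikingly simple. The sketch below is a toy version only: the word lists and the filtering rule are invented here, standing in for the far more sophisticated sentiment classification the published study relied on.

```python
# Toy illustration of sentiment-based feed filtering, in the spirit of the
# 2012 experiment. The word lists and omission rule are invented for
# illustration and do not reflect Facebook's actual system.

import random

POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def sentiment(post: str) -> str:
    """Classify a post by checking for words from crude sentiment lists."""
    words = set(post.lower().split())
    if words & POSITIVE:
        return "positive"
    if words & NEGATIVE:
        return "negative"
    return "neutral"

def filtered_feed(posts, suppress: str, omit_probability: float = 0.5):
    """Return a feed in which posts of one sentiment are randomly withheld."""
    return [p for p in posts
            if sentiment(p) != suppress or random.random() > omit_probability]

feed = ["What a wonderful day", "I hate waiting", "Lunch was fine",
        "Terrible news today", "Love this song"]
print(filtered_feed(feed, suppress="negative"))  # a systematically sunnier feed
```

A few lines of filtering, applied silently across millions of feeds, is enough to tilt the emotional tone of what users see.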

The composition and order of the News Feed can therefore affect our inclinations. With 61 percent of Millennials relying on the social network for their news, the power of the network becomes clear. Normally, with power comes responsibility. That is indeed the case in EU competition law when a firm dominates markets for goods and services. And yet, when it comes to the marketplace of ideas, the powerful platforms often try to sidestep their responsibility. They present the algorithm as independent and objective, catering to users’ wishes and free from corporate agenda or input. That ‘clean image’ is used to deflect any argument for greater social responsibility.

Similar distortion may be found beyond social networks. Consider, for example, search engines and their ability to manipulate the composition and order of online search results. These services, which operate in two-sided markets, design their interfaces to maximise income from advertisers and sellers. To do so they may change the layout of the results page, the order of results and the visibility of some providers, all to increase profits. They may do the same to affect public opinion, promoting some stories over others. Take, for example, the way search engines provide users with featured results. One study, based on five experiments in two countries, found that a dominant search engine like Google can shift the voting preferences of undecided voters by 20 percent or more through biased search rankings, without the citizens being aware of the manipulation.
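To see how invisible such a tilt can be, consider this minimal sketch. The scoring scheme, result names and 'boost' parameter are all hypothetical; the only point is that a small, hidden re-weighting changes what users see first.

```python
# A toy sketch of how a ranking function could be nudged to favour one side.
# All names and weights are invented for illustration.

def rank_results(results, relevance, favoured=None, boost=0.0):
    """Order results by relevance, silently adding `boost` to favoured items."""
    def score(r):
        return relevance[r] + (boost if favoured and favoured in r else 0.0)
    return sorted(results, key=score, reverse=True)

results = ["candidate_a_profile", "candidate_b_scandal",
           "candidate_b_profile", "candidate_a_scandal"]
relevance = {r: w for r, w in zip(results, [0.90, 0.88, 0.86, 0.84])}

print(rank_results(results, relevance))                                     # neutral order
print(rank_results(results, relevance, favoured="candidate_b", boost=0.1))  # biased order
```

Both rankings look equally 'organic' to the user; only the hidden boost parameter differs.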

Further, search engines may be used to affect public views through other means, such as their predictive search functions. As The Guardian reported, Google’s autosuggest may be used to propagate biased views against minorities. Select groups can manipulate the algorithm to amplify their message. They may also use a more traditional avenue and simply pay the search engine for a preferential listing. In a world where many users view their search results as unbiased, camouflaged manipulation becomes a powerful and dangerous tool.

This vulnerability will likely intensify. As part of our quest for free and fast services, we rely on our mobile devices, social and shopping networks, and search engines to learn about our world and engage with it. Already we trust a few gatekeepers to maintain a non-distorted virtual reality. As we increasingly depend on digital personal assistants such as Siri, Alexa, Echo, M and others for the news and entertainment we receive, and the books we read, the gatekeepers’ power to influence our views will increase. These interfaces, mostly voice-activated, will further distance us from personally searching for information. In a world with a few key gatekeepers, economic power may easily translate into political power, whether through payment by third parties or because the platform itself opts to advance one agenda over another. The marketplace of ideas, just like online markets for goods and services, may be manipulated.

Worryingly, we may lack the ability to detect whether the marketplace has been distorted, and through which means. We may not fully appreciate the gatekeeper’s potential devious side, and may underestimate its ability to affect our world view. Our autonomy and free will may well exist, but they might do so within a wider ‘Truman Show’. We may be convinced of our righteousness and impressed by our own activism, not appreciating how our actions have been orchestrated by invisible powers.

With such prospects, as intellectual and regulatory capture intensifies, our democracy may well become an illusion.

Ariel Ezrachi is the Slaughter and May Professor of Competition Law at the University of Oxford, and Maurice Stucke is a Professor of Law at the University of Tennessee (Knoxville). This post was originally published here.

