The digitalisation of production and distribution processes means that the (‘first-copy’) costs of developing and launching a product dominate the subsequent costs of actually producing and selling it. A classic example of this front-loaded cost structure is software, and media content more generally; but the feature is increasingly common across the economy as digitalisation becomes pervasive. When firms face high (fixed) costs to develop and launch a product, followed by low (marginal) costs thereafter, financial viability rests on the ability to set a price sufficiently above marginal cost to cover the fixed costs incurred.
By and large, the sustainability of this profit margin is undermined when there is a high proportion of ‘shoppers’, that is, consumers with a propensity to shop around, as opposed to passive consumers who fail to identify cheaper alternatives, typically because of relatively high search and switching costs. Indeed, low search and switching costs on the demand side trigger intense price rivalry on the supply side. In this respect, the adoption of digital tools that allow consumers to search for products and compare prices more easily is bound, over time, to increase the share of ‘shoppers’. This is particularly true for products for which consumers exhibit a low preference for variety or, more generally, for retailers competing over similar products.
Under these circumstances, it would seem perfectly justifiable for firms to adopt solutions aimed at averting the risk of price undercutting and the sharp fall in sales it causes. One obvious option would be to rely on simple automated pricing algorithms, which continuously monitor rivals’ prices, promptly detecting and immediately matching any lower ones. Nowadays, this task is arguably eased by the preponderance of e-commerce distribution channels. This is particularly the case when transactions take place in e-marketplaces, where sellers and buyers are matched over a standardised, common platform.
However, when commonly adopted by rival firms, this basic type of pricing algorithm can facilitate joint monopolisation. Since rivals can immediately detect and match lower prices, firms have no incentive to undercut prices in the first place. Not only is the attempt to increase sales at the expense of rivals bound to fail, but it may also backfire as the price cut is applied to existing sales. In addition, each firm can safely set its monopolistic price to begin with, knowing that in the worst-case scenario it might have to be adjusted downwards (ie, to match a lower price set by a rival firm) at no loss in sales.
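To make the mechanism concrete, the following sketch works through a stylised two-firm example (all prices, costs and demand figures are made-up assumptions, not a model of any real marketplace). It shows why undercutting fails once a rival runs a simple matching rule: the cut is matched before any sales are diverted, so both firms end up sharing the market at the lower price, and the undercutter is worse off than at the monopolistic price.

```python
# Stylised two-firm illustration of why immediate price matching removes
# the incentive to undercut. All numbers below are assumptions.

MONOPOLY_PRICE = 10.0
MARGINAL_COST = 2.0
MARKET_DEMAND = 100  # total units sold at any price buyers are willing to pay

def profits(p_a: float, p_b: float) -> tuple[float, float]:
    """Split demand: the cheaper firm takes the whole market; a tie is shared."""
    if p_a < p_b:
        return (p_a - MARGINAL_COST) * MARKET_DEMAND, 0.0
    if p_b < p_a:
        return 0.0, (p_b - MARGINAL_COST) * MARKET_DEMAND
    half = MARKET_DEMAND / 2
    return (p_a - MARGINAL_COST) * half, (p_b - MARGINAL_COST) * half

def match(rival_price: float, own_price: float) -> float:
    """A minimal price-matching rule: never stay above a rival's price."""
    return min(own_price, rival_price)

# Without matching: undercutting the monopoly price captures the whole market.
undercut = MONOPOLY_PRICE - 0.5
print(profits(undercut, MONOPOLY_PRICE))   # (750.0, 0.0) vs 400.0 each at the shared monopoly price

# With matching: the rival's algorithm reacts before any sale is diverted,
# so both firms share the market at the lower price.
matched = match(undercut, MONOPOLY_PRICE)
print(profits(undercut, matched))          # (375.0, 375.0), below 400.0 each
```

Under these assumptions the attempted undercut leaves each firm with 375 rather than the 400 earned by sharing the market at the monopolistic price, which is precisely why no firm has an incentive to cut in the first place.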
This risk of severe anticompetitive effects has obviously drawn antitrust scrutiny. For example, European Competition Commissioner Margrethe Vestager admonished that ‘what businesses can – and must – do is to ensure antitrust compliance by design. That means pricing algorithms need to be built in a way that doesn’t allow them to collude.’
In an article published in the Journal of European Competition Law & Practice, I argue that such an approach would amount to a reversal of the burden of proof under existing case law, in particular on the prohibition of coordinated facilitating practices. Indeed, under the Court’s current position, in order to establish the existence of a collusive agreement solely on the basis of circumstantial evidence (such as the uniform adoption of an alleged facilitating practice), and without proof of reciprocal contact through direct or indirect communication, the existence of any plausible and legitimate alternative explanation must be ruled out. However, Commissioner Vestager’s line of argument is that firms should refrain from adopting pricing algorithms to the extent that this may facilitate tacit collusion, that is, regardless of whether there exists an alternative plausible and legitimate justification.
What makes this more intriguing is that, in theory, unless the adoption of price-matching algorithms is unanimous across rival firms, joint monopolisation is unsustainable. This is because even a single non-adopting firm would end up setting a competitive price, for fear of being exposed to price undercutting were it to set a monopolistic price (that is, trusting rival firms to do the same). Hence, the uniform adoption of pricing algorithms is necessary to sustain both joint monopolisation and the price-cost mark-up needed to maintain financial viability. Ultimately, enforcement would have to be based on a finding that the price levels prevailing in the market are excessive, in other words, that they are above the threshold necessary to secure a sustainable business model.
Arguably, this enforcement approach would make it more difficult to establish the liability of the platform operating the e-marketplace, where the pricing algorithms are deployed. Indeed, the platform operator would hardly be able to tell whether prices are excessive in the first place, since it does not have access to sellers’ underlying costs. However, for the sake of argument, it is interesting to consider whether the platform operator would have, first, the incentive and, second, the ability to prevent the use of pricing algorithms by hosted sellers.
Regarding the former, it is rather intuitive that the interests of a monopolistic platform operator would be aligned with those of hosted sellers. Indeed, by allowing sellers to collude, the platform operator might be able to extract higher revenue from listing fees. However, things might differ in the case of competition between platforms. Indeed, a rival platform may succeed in cornering the buyer side by outspokenly preventing the use of pricing algorithms, thus enticing buyers with the promise of resulting lower prices.
Therefore, in both cases, the lack of corrective intervention by the platform operator may be construed as exploitative conduct in breach of competition law. Of course, this would depend on whether the platform operator has the ability to prevent the use of pricing algorithms. For example, the platform operator could require sellers to set prices only once within a predefined period (say, every two days), by submitting their prices privately to the platform operator in advance. A less restrictive alternative would be to impose a set delay, or latency (say, 12 hours), before a price change goes live. The idea is to introduce some inefficiency, in order to restore the incentive to undercut rivals’ prices. Nevertheless, it is important to stress that this enforcement strategy would be conditional on the platform operator knowing that prevailing price levels were persistently set well above the relevant cost benchmark.
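The effect of such a latency requirement can be illustrated with another stylised two-firm sketch (again, all figures are assumptions for illustration only). With a 12-hour delay before a rival’s matching response goes live, an undercutting firm serves the whole market in the interim, which can make cutting the price profitable again relative to holding a shared monopolistic price.

```python
# Stylised illustration of the latency remedy. All numbers are assumptions:
# a price change (including a rival's matching response) only goes live
# after LATENCY_HOURS, during which the undercutter serves the whole market.

LATENCY_HOURS = 12.0
MARGINAL_COST = 2.0
DEMAND_PER_HOUR = 10  # units bought per hour across the market

def profit_hold(price: float, horizon: float) -> float:
    """Profit from holding the (shared) monopolistic price over the horizon."""
    return (price - MARGINAL_COST) * DEMAND_PER_HOUR / 2 * horizon

def profit_undercut(cut_price: float, horizon: float) -> float:
    """Profit from undercutting: the whole market during the latency window,
    then a shared market at the lower price once the rival's match goes live."""
    solo = (cut_price - MARGINAL_COST) * DEMAND_PER_HOUR * LATENCY_HOURS
    shared = (cut_price - MARGINAL_COST) * DEMAND_PER_HOUR / 2 * (horizon - LATENCY_HOURS)
    return solo + shared

# Over a two-day horizon, undercutting now pays off despite the lower price.
print(profit_hold(10.0, 48.0))      # 1920.0
print(profit_undercut(9.5, 48.0))   # 2250.0
```

Under these assumptions the delay restores the incentive to undercut (2250 versus 1920), which is exactly the friction needed to destabilise the algorithm-sustained monopolistic price; with zero latency the undercut would earn only 1800 and the incentive would vanish.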
Paolo Siciliani is a senior technical specialist at the Bank of England’s Prudential Regulation Authority, an independent expert in competition and economics at the Bar Standards Board for England and Wales, and a visiting lecturer at UCL Faculty of Laws.