
The Impact of Algorithms on Competition and Competition Law


Pricing algorithms have shown the potential to both help and harm competition. Our new series, Computational Antitrust, explores how algorithms can abet tacit collusion, allowing competitors to set prices at anticompetitive levels without active human assistance. This series will introduce the legal and economic theory behind algorithmic collusion, explore its theoretical and empirical risks for competition, and discuss what regulators can and should do to protect competition and consumers from this rapidly advancing technology.

Antonio Capobianco, the deputy head of the OECD Competition Division and one of the authors of the 2023 OECD report on algorithmic competition and collusion, begins our series with an explanation of the risks that algorithms and artificial intelligence pose to competition and how regulators can approach the changing competition paradigm. Return next Tuesday for the next installment.


No one would question that a price fixed by competitors in a smoke-filled room is against the law. But what about prices fixed by algorithms? Are those to be condemned as well? Are servers and clouds indeed replacing smoke-filled rooms? Can algorithms exclude competitors or exploit consumers? Are competition authorities and regulators well-equipped to deal with the possible competition concerns raised by algorithms and artificial intelligence (AI)? 

This article explores some of the questions that the increasing dissemination and use of algorithms are generating in the antitrust world, including the challenges for antitrust enforcement and whether the current antitrust laws are fit to address them.

Algorithms: the dawn of a new golden era?

There is no doubt that the advent of algorithms and AI has changed our lives in ways that were unimaginable only a few years ago, and we have yet to see the full impact these technologies will have. Algorithms have revolutionized the way we make decisions, from travelling to purchasing to collecting and processing information. Predictive analytics is at the core of weather forecasting, traffic and financial management, advances in medical care, and much more.

Like us, businesses are integrating algorithms and AI into the way they make business decisions. These tools help firms streamline business processes, improve production and distribution, make existing products better, or develop new ones. And they do all of this in a matter of seconds, delivering the kind of optimal business decisions that once demanded far more human time and effort.

Algorithms: a double-edged sword? 

Algorithms have undoubtedly benefited society, but many have started to worry about their negative effects, including for consumers. Critics ask if algorithms and AI are tailoring how we access and internalize information to influence our subsequent choices, not only as consumers but as citizens and as voters. 

Competition authorities around the world have naturally focused on how firms, especially competing ones, can use AI and pricing algorithms to undermine competition. To date, antitrust authorities have taken little action against firms using algorithms to facilitate anticompetitive practices. However, many studies have explored algorithmic theories of harm, showing how competing firms can use algorithms to achieve collusive outcomes (coordinated effects) and how dominant firms can use them to engage in exclusionary or exploitative behaviors (unilateral effects).

Algorithms: how common are they for pricing decisions?

The first relevant question is whether we are looking at a fringe phenomenon or a widespread concern that requires government attention. The short answer is that there is no comprehensive data on how many firms use algorithms and AI for pricing purposes. Competition authorities, especially in Europe, have surveyed firms with a strong online footprint in their respective jurisdictions, but these studies are too few and based on relatively small samples, and thus provide inconclusive evidence on the prevalence of pricing algorithms. That said, the studies do suggest that: (i) a minority of firms use algorithms to monitor competitors’ prices (though among firms with an online presence, a majority do); (ii) of these, most adjust their prices manually or use a dynamic pricing algorithm for price recommendations, while only a small proportion use an algorithm to update their prices automatically; and (iii) there does not seem to be much evidence of personalized pricing (i.e., tailored pricing based on the personal characteristics of the consumer). Separately, there is evidence to suggest that the use of pricing algorithms is rapidly increasing.

Algorithms: risks for collusion and beyond

Concerns with pricing algorithms rest on their ability to facilitate anticompetitive price coordination (collusion). One of the shortcomings of traditional, explicit collusion is that a colluding firm can cut its price below the agreed-upon level, thus gaining market share over its competitors. By drawing on available pricing data, algorithms can rapidly detect and respond to such deviations, reducing the incentive for firms to cut their prices and making explicit collusion between firms more stable. Algorithms can also reinforce the effectiveness of “hub and spoke” arrangements, in which competitors use the same third-party pricing software to align their prices. Finally, the most sophisticated algorithms can use public information to self-learn and autonomously achieve tacitly collusive outcomes with little to no human instruction.
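
To make the deviation-detection mechanism concrete, here is a deliberately simplified Python sketch. It is purely illustrative: the prices, the demand share, and the matching rule are hypothetical assumptions for exposition, not figures or methods drawn from the OECD report or any real case.

```python
# Hypothetical toy model: why instant algorithmic monitoring can make
# cheating on a collusive price unattractive. All numbers are made up.

COLLUSIVE_PRICE = 10.0   # price the firms have (tacitly or explicitly) settled on
COMPETITIVE_PRICE = 6.0  # price that would prevail under effective competition


def rival_response(observed_price: float) -> float:
    """A simple automated rule: immediately match any rival price cut."""
    return min(observed_price, COLLUSIVE_PRICE)


def deviation_gain(deviation_price: float, periods_before_detection: int) -> float:
    """Rough payoff from undercutting: extra margin earned only in the
    periods before rivals detect the cut and match it."""
    extra_share = 0.5  # hypothetical share of demand won while rivals lag behind
    margin = deviation_price - COMPETITIVE_PRICE
    return extra_share * margin * periods_before_detection


# Manual monitoring might take several periods to spot a deviation;
# algorithmic monitoring can make detection effectively immediate.
print(deviation_gain(9.0, periods_before_detection=5))   # cheating still pays
print(deviation_gain(9.0, periods_before_detection=0))   # cheating yields nothing
print(rival_response(9.0))                                # automated reply: match 9.0
```

With detection effectively instantaneous, the one-off profit from undercutting shrinks toward zero, which is precisely why faster monitoring makes a collusive price easier to sustain.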

The debate thus far on the implications of algorithms has mostly focused on algorithmic collusion. However, dominant firms can also use algorithms to engage more effectively in abusive conduct. These potential risks concern algorithmic exclusionary conduct (such as self-preferencing, predatory pricing, rebates, and tying and bundling) and algorithmic exploitative conduct (such as excessive pricing). Competition authorities have rarely investigated either algorithmic exclusionary or exploitative conduct, with the exception of self-preferencing: the European Commission’s Google Shopping case provides an example of how a dominant firm can use self-preferencing as a leveraging strategy to impair third-party competitors’ access to the market.

Thinking of solutions within and outside the competition box

Some antitrust thinkers believe that algorithms and AI require us to revisit the assumptions about how firms and consumers behave that underpin most of competition law. Generally, antitrust agencies believe that most algorithmic theories of harm can be captured by competition law. The one possible exception is autonomous algorithmic collusion, where some argue that the law’s current focus on explicit collusion and communication between competitors may not be sufficient in cases where humans are not directly involved, pointing to a potential enforcement gap.

Governments have been looking beyond traditional antitrust enforcement for effective responses to the threats that AI and pricing algorithms pose to competition. These include ex-ante regulatory interventions aimed at designing algorithms in ways that avoid tacitly collusive outcomes, and regulatory measures to bring prices back to competitive levels if they rise to collusive levels after pricing algorithms are introduced. Another regulatory proposal would have the government ask a supplier to adopt a disruptive algorithm that effectively acts as a maverick, charging a lower, potentially competitive price, creating noise on the supply side, and disrupting any algorithmic coordination. Finally, a regulator could enforce a time lag in pricing algorithms’ responses to market conditions, or freeze the price of one supplier in each period (i.e., the lowest-priced supplier), which would give other suppliers an incentive to price below that firm and capture additional sales. Some also suggest innovative market-based measures, such as promoting the use of algorithms by consumers to disrupt algorithmic collusive schemes, e.g., by aggregating demand and increasing the incentives to deviate from a cartel.
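
As a rough illustration of the time-lag idea mentioned above, the sketch below wraps a reactive pricing rule so that it may only respond to competitor prices observed with a delay. The class name, the lag length, and the prices are hypothetical assumptions, not a description of any actual or proposed regulatory scheme.

```python
from collections import deque


class LaggedPricingRule:
    """Hypothetical pricing rule that is only allowed to react to rival
    prices observed `lag` periods ago, blunting instant retaliation."""

    def __init__(self, lag: int, default_price: float):
        self.lag = lag
        self.default_price = default_price
        self.history = deque()  # rival prices observed so far, oldest first

    def price(self, rival_price_now: float) -> float:
        self.history.append(rival_price_now)
        if len(self.history) <= self.lag:
            return self.default_price          # nothing old enough to react to yet
        stale_observation = self.history[-self.lag - 1]
        return min(stale_observation, self.default_price)  # react to stale data only


rule = LaggedPricingRule(lag=2, default_price=10.0)
for rival in [10.0, 10.0, 9.0, 9.0, 9.0]:
    print(rule.price(rival))  # the reply to the cut to 9.0 arrives two periods late
```

Because retaliation arrives late, a deviating firm enjoys a window of higher sales at the lower price, restoring some of the incentive to undercut that instantaneous algorithmic responses would otherwise remove.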

The next frontier: algorithmic auditing and sandboxes

Investigating how an algorithm functions in order to assess its competitive risks is complex and challenging for regulators, given how sophisticated algorithms can be. But it is not impossible, especially if regulators are granted access to the algorithm and/or the underlying data set. Applying investigative techniques to algorithms can be highly technical, requiring regulators to be equipped with specialized skills and expertise. A number of competition authorities have already started setting up data units, hiring data scientists and technologists to assist investigative units in digital enforcement cases.

While some competition authorities are already using these skills to reverse-engineer and understand companies’ algorithms, it is through algorithmic auditing that regulators can really unveil the objectives, workings, and outcomes of potentially harmful algorithms. Algorithmic auditing can take different forms, such as reviewing governance documentation, testing an algorithm’s outputs, or inspecting its inner workings. There are several possible approaches to algorithm audits, with some promising developments coming from the explainable AI literature. While not a silver bullet, particularly given the complex nature of some algorithms, there have been considerable advances in techniques that enable regulators to understand why and how algorithms make decisions.
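
One of those approaches, testing an algorithm’s outputs, can be illustrated with a minimal black-box probe. In the hypothetical Python sketch below, the pricing function merely stands in for a firm’s proprietary system, and the pass-through measure is only one simple signal an auditor might compute; it is not an established or definitive test of collusion.

```python
def firm_pricing_algorithm(competitor_price: float, cost: float = 5.0) -> float:
    """Hypothetical stand-in for a firm's proprietary pricing algorithm:
    shadows the competitor's price with a small discount, never below cost."""
    return max(cost, competitor_price - 0.05)


def probe_pass_through(pricing_fn, test_prices):
    """Feed the algorithm a range of competitor prices and measure how much
    of each price change it passes through to its own price."""
    results = []
    for low, high in zip(test_prices, test_prices[1:]):
        pass_through = (pricing_fn(high) - pricing_fn(low)) / (high - low)
        results.append((low, high, round(pass_through, 2)))
    return results


if __name__ == "__main__":
    probes = [6.0, 7.0, 8.0, 9.0, 10.0]
    for low, high, pt in probe_pass_through(firm_pricing_algorithm, probes):
        print(f"competitor price {low} -> {high}: pass-through {pt}")
```

A consistently high pass-through of rivals’ price increases would not prove coordination on its own, but it is the kind of output-level pattern that could prompt a regulator to ask for access to the algorithm or its documentation.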

If access to the algorithms is not possible for auditing purposes, regulators can also turn to regulatory sandboxes, enabling firms to test new algorithms and address any potential harms in a safe regulatory environment. These sandboxes often include mechanisms intended to safeguard overarching regulatory objectives, including competition or consumer protection, and are typically organized and administered on a case-by-case basis by the relevant regulatory authorities. Regulatory sandboxes have emerged in a range of sectors, notably in finance but also in health, transport, legal services, aviation, and energy.

Are algorithms shifting well-established competition paradigms?

Algorithms and AI are increasingly permeating our lives, and they can have considerable procompetitive and efficiency-enhancing effects that improve living standards. However, algorithms can also be used by firms to harm competition. This includes coordinated conduct, through algorithmic collusion, but also unilateral conduct to exclude competitors and exploit consumers directly. Competition authorities are increasingly confronted with cases that involve algorithms, and this trend will only increase in the future. Rather than treat these complex algorithms as impenetrable boxes, authorities should continue investing in knowledge and skills to understand how they work. While competition law frameworks are already mostly prepared to address the challenges raised by algorithms and AI, new investigative tools (such as algorithmic auditing) and new detection tools (such as algorithmic sandboxes) will help competition authorities and regulators remain ahead of the curve in a world where algorithms and AI will continue to proliferate.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.
