Simonetta Vezzoso weighs in on the policy debate on algorithmic competition.

In the current competition policy discourse, algorithms are seen as tools that companies can use for various purposes, such as enhancing products and services, creating new offerings, tailoring offerings to individual consumer interests, optimizing internal processes, reducing transaction costs, acquiring market insights, and implementing real-time repricing. While the benefits that algorithms bring are recognized, concerns loom large over algorithmic collusion, exclusionary conduct, and, to a lesser extent, exploitative behaviours.

To address these concerns, competition authorities have emphasized the importance of ensuring that companies deploying algorithms “teach” them how to comply with competition rules. In a 2017 public intervention, the EU Commissioner in charge of competition policy stressed that “[w]hat businesses can – and must – do is to ensure antitrust compliance by design”. Six years after that speech, competition authorities have stepped in on several occasions, requiring individual companies to amend their algorithmic practices to align with competition law, particularly in cases of exclusionary abuse.[1] The EU Google Shopping decision is a prominent example of algorithms being used for self-preferencing abuses of a dominant market position.[2] Further instances of self-preferential algorithms violating competition rules have been uncovered, not solely within the EU[3] but also internationally.[4] The already enforceable EU Digital Markets Act (DMA),[5] which imposes specific duties on designated gatekeepers, includes a prohibition on self-preferencing in the ranking, as well as the associated indexing and crawling, of services and products provided by the gatekeeper itself vis-à-vis services or products of a third party. Gatekeepers must instead apply transparent, fair, and non-discriminatory conditions to such rankings.

However, the ongoing discussion on algorithmic competition does not typically encompass reflections on the conditions under which these various algorithmic tools are produced, nor on the extent to which these tools can be considered inputs necessary to compete on other markets. Thus, for instance, attention is not centred on encouraging innovation competition between pricing algorithms,[6] but rather on the effect that the use of these algorithms exerts on the prices of a variety of products and services, as with a dynamic pricing algorithm used in the provision of revenue management services. In that example, the focus is on the impact of the algorithm on consumer prices rather than on the competitive landscape for the development of such algorithms.

Evolving Perspectives

Nevertheless, there is an emerging shift in competition authorities’ approach to algorithmic competition as they start paying attention to the underlying technology itself, spearheaded by the UK competition authority. In an initial review launched in May 2023, the Competition and Markets Authority (CMA) treats AI foundation models as a nascent and swiftly expanding technology, which the agency endeavours to help develop in a manner that ensures open, competitive markets along with robust consumer protection. Foundation models are a category of AI technology, trained on extensive volumes of data, that can be adapted to a broad array of tasks and operations.

The UK competition agency’s interest in this emergent technology is, to a certain extent, unsurprising. Primarily, this interest stems from the CMA’s commitment to protecting innovation through open, competitive markets. The UK agency has consistently and unflinchingly demonstrated readiness to intervene in both mature and emerging markets, ensuring that innovative businesses, irrespective of their size, can secure effective market access, which in turn fuels the competitive process.[7] Moreover, the CMA can lean upon the extensive, forward-looking reflections on the benefits and harms of digital technologies conducted collaboratively with fellow regulators within the Digital Regulation Co-operation Forum. Finally, in its 2023-2024 Annual Plan, the agency outlined its goal to prioritise involvement in sectors that offer the biggest potential for improvement in innovation and productivity. These may include mature markets that seem to exhibit considerable productivity gaps and where competition appears muted, as well as markets in their initial stages of development with substantial growth potential. Early intervention during market development can, at times, help ensure that the market evolves in a competitive way. This sort of intervention might take various forms, including advocacy and market studies.

CMA Chief Executive Sarah Cardell noted in a recent speech that generative AI is undoubtedly one such priority area, as “[i]t is increasingly clear that AI has the potential to transform the way businesses compete and act as a driver for substantive economic growth. And we know it has huge potential to offer enormous benefits to consumers”. The initial review of this emerging, rapidly scaling technology seeks to understand the trajectory of the markets for foundation models and their use, along with the ensuing opportunities and risks for competition and consumer protection. It will also identify principles to guide the evolution of these markets, aimed at ensuring that the vibrant innovation characterizing the current emerging phase remains robust and that its benefits continue to flow to people, businesses, and the economy.

With respect to understanding how competition in the development of foundation models currently functions, and to considering potential improvements as the sector continues to grow, the UK authority intends to investigate several factors. These include (1) potential entry barriers, such as access to necessary data, computational resources, talent, and funding; (2) how foundation models could disrupt or consolidate the position of the largest firms; and (3) the distribution of value within these systems. The CMA is particularly keen to understand whether there are intrinsic characteristics of foundation model development and deployment, such as economies of scale, that would tend towards centralisation, consolidation and integration.

The second significant theme of this initial review is the potential impact of foundation models on competition in other markets. To a certain degree, this appears to be in line with the already established assessment framework of algorithmic competition policy, which concentrates on the effects of particular AI technologies on other markets. For instance, when it comes to repricing algorithms, competition authorities are eager to evaluate the effects of firms’ use of this technology on (price) competition in those markets where the repriced products or services are offered. The crucial distinction, however, is that, according to the CMA, foundation models are expected to become an input necessary for competition on other markets, such as search and productivity software. One potential scenario is that products and services leveraging the capabilities of foundation models evolve into ecosystems that could be more or less open. From the perspective of competition policy, a legitimate concern is already the possibility that access to these models and capabilities becomes unduly restricted or controlled by a few large private companies facing insufficient competitive constraint. If this situation materialises, competition and innovation that would benefit people, businesses and the economy could be thwarted.

Accompanying the Technological Developments

The findings from the CMA’s initial review, anticipated to be released in early September 2023, will certainly attract close scrutiny. What is readily apparent, however, is that a significant transition is underway in how competition authorities are addressing AI technologies. As a joint blog post by the FTC’s Bureau of Competition and Office of Technology makes clear, the focus is shifting towards “ensuring open and fair competition, including at key inflection points as technologies develop”. This shift will require sufficient time to be thoroughly examined and understood in all its potential implications, including the required expertise, collaboration with other regulators, effective advocacy, and meaningful ex-ante and ex-post enforcement. While more analysis is required, it is important to acknowledge and endorse the heightened attention that competition authorities are placing on innovation competition in the markets for AI technologies and in the sectors where these technologies are used by “downstream” companies.

However, at a time when AI has garnered substantial global political attention due to the significant risks these technologies might pose, the focus on understanding the competitive dynamics involved in developing foundation models could be perceived as excessively reductionist, or as diverting the limited attention of regulatory bodies away from more critical concerns. This criticism would undoubtedly be justified if “unsupervised” innovation competition in AI development were to encourage race-to-the-bottom dynamics, resulting in foundation models and uses of them that are detrimental to people, businesses and the economy. Yet it can be contended that the CMA is well aware of this risk, which to some extent mirrors dynamics observed in the online advertising sector, where competition has ended up promoting “innovation” in surveillance techniques. Regarding foundation models, the CMA clearly stresses the importance of early intervention and of adopting a forward-looking approach to potentially transformative digital technologies, including the formulation of “principles that will best guide the ongoing development of foundation models for the public good.” Indeed, the issue of toxic innovation in markets for digital technologies is a serious matter that no regulator can afford to disregard.

In conclusion, with the ongoing development and widespread adoption of AI across diverse sectors, a shift in the approach of competition authorities towards AI-driven competition is not only on the horizon but also essential. This shift in perspective has already commenced, as exemplified by the CMA’s planned review of AI foundation models. This expanded viewpoint will complement the existing policy discourse on algorithmic competition and serve as a vital factor in shaping future antitrust and regulatory policies in this space.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.

[1] Despite the substantial attention garnered by the topic of algorithmic collusion, actual cases where such behaviours have been assessed have thus far been few.

[2] The decision has been upheld by the General Court of the European Union (Case T-612/17 Google Shopping).

[3] Case AT.40703 Amazon – Buy Box, Decision of 20 December 2022 (preferential treatment of Amazon’s retail business or of the sellers that used Amazon’s logistics and delivery services); Case AT.40670 Google – Adtech and Data-related practices, Commission sends Statement of Objections to Google over abusive practices in online advertising technology, 14 June 2023 (preferential treatment of Google’s online display advertising technology services).

[4] See for instance the Korean Naver Shopping case by the Korea Fair Trade Commission, Decision No. 2021-027, confirmed by the Seoul High Court in December 2022; see also the antitrust suit filed in January 2023 by the US Department of Justice against Google for monopolizing multiple digital advertising technology products.

[5] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), OJ L 265, 12.10.2022, p. 1–66.

[6] Pricing algorithms are often based on reinforcement learning, a form of machine learning in which the algorithm follows a trial-and-error approach: input values are adjusted in response to the observed outcome of a reward function, with the aim of maximising the reward.
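The trial-and-error logic described in this footnote can be illustrated with a minimal sketch. The following Python example is purely hypothetical: the candidate prices, the toy demand function, and all parameters are invented for illustration, and the learning rule is a simple epsilon-greedy bandit, one elementary form of reinforcement learning.

```python
import random

# Hypothetical sketch of a trial-and-error repricing agent: the algorithm
# tries candidate prices, observes the resulting reward (simulated profit),
# and gradually favours the price that maximises it. All numbers are
# illustrative, not drawn from any real pricing system.

PRICES = [8.0, 9.0, 10.0, 11.0, 12.0]  # discrete candidate prices
UNIT_COST = 5.0

def simulated_profit(price: float) -> float:
    """Toy demand model: higher prices sell fewer units (plus noise)."""
    demand = max(0.0, 20.0 - 1.5 * price) + random.gauss(0, 0.5)
    return (price - UNIT_COST) * demand

def learn_price(episodes: int = 5000, epsilon: float = 0.1,
                alpha: float = 0.05) -> float:
    """Epsilon-greedy bandit: estimate the average reward of each price."""
    q = {p: 0.0 for p in PRICES}  # running reward estimate per price
    for _ in range(episodes):
        # Occasionally explore a random price; otherwise exploit the
        # price with the highest estimated reward so far.
        if random.random() < epsilon:
            p = random.choice(PRICES)
        else:
            p = max(q, key=q.get)
        reward = simulated_profit(p)
        q[p] += alpha * (reward - q[p])  # nudge estimate toward reward
    return max(q, key=q.get)

if __name__ == "__main__":
    random.seed(0)
    print("learned price:", learn_price())
```

Under this toy demand curve, the agent converges on the candidate price that maximises expected profit without ever being told the demand function, which is precisely why such algorithms raise the competition questions discussed above.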

[7] See CMA, Illumina/PacBio abandon merger, 3 January 2020.