Michal Gal discusses the regulatory hurdles to addressing the impacts of algorithmic price collusion. In the meantime, she says, market fixes such as algorithmic consumers and platform nudges can help mitigate price coordination.
Axel Gautier, Ashwin Ittoo, and Pieter van Cleynenbreugel write that tacit collusion among pricing algorithms remains theoretical for now, and that technological obstacles make it very unlikely in the short term. However, regulators must still prepare for a future in which artificial intelligence achieves the sophistication necessary to collude.
Oliver Budzinski and Victoriia Noskova discuss why merger simulations are not more widely used by competition authorities and courts to predict the future effects of mergers, despite advances in the availability of data, AI, and computational power. The institutional setting, they argue, is an essential factor in whether computational antitrust tools are accepted and applied by competition authorities.
Maurice Stucke explains three policy approaches to algorithmic collusion and discrimination, and makes the case for a broader ecosystem approach that not only addresses the shortcomings of current antitrust law and merger review but also extends beyond them to offer a comprehensive policy response to the many risks associated with artificial intelligence.
Daryl Lim explains that while there is some evidence that pricing algorithms facilitate collusion, there are reasons to be skeptical of their effectiveness. Lim advocates for compliance by design: firms should create algorithms that do not collude on price, report their algorithms transparently, and understand that they will be held responsible for the actions their algorithms take.
Companies increasingly use sophisticated computational tools to compete, particularly in digital markets. Giovanna Massarotto outlines how antitrust agencies must similarly modernize and adopt advanced technologies to address complex antitrust enforcement challenges effectively and remain relevant.
Cary Coglianese lays out the potential for antitrust regulators to use machine learning and artificial intelligence algorithms, along with the considerations they must weigh in doing so.
While the development of artificial intelligence has enabled efficient business strategies such as dynamic pricing, the technology can facilitate collusion and consumer harm when companies share the same software through a central platform. Gabriele Bortolotti highlights the importance of antitrust enforcement in this domain in the second article of our series, using the RealPage class action lawsuit in the Seattle housing market as a case study.
The draft Merger Guidelines largely replace the consumer welfare standard of the Chicago School with the lessening of competition principle found in the 1914 Clayton Act. This shift would enable the Federal Trade Commission and Department of Justice Antitrust Division to utilize the full extent of modern economics to respond to rising concentration and its harmful effects, writes John Kwoka.
Joshua Gray and Cristian Santesteban argue that the Federal Trade Commission's focus in Meta-Within and Microsoft-Activision on narrow markets like VR fitness apps and consoles missed the boat on the real competition issue: the threat to future competition in nascent markets like VR platforms and cloud gaming.
Antitrust debates have largely ignored questions about the relationship between market power and productivity, and scholars have provided little guidance on the issue due to data limitations. However, the hospital industry offers plentiful data on both market power and operating costs and productivity, and researchers should take advantage of it, writes David Ennis.
Meta has blocked news organizations’ social media accounts in response to Canada’s Online News Act, a law not yet in effect. Josh Braun describes the reasoning behind such legislation, its potential flaws, and how Meta, particularly via Facebook, has turned the Canadian wildfire crisis into a regulatory pressure campaign.