Maurice Stucke explains three policy approaches to algorithmic collusion and discrimination, and makes the case for a broader ecosystem approach that not only addresses the shortcomings of current antitrust law and merger review but also extends beyond them to offer a comprehensive policy response to the many risks associated with artificial intelligence.


After we raised the risks of algorithmic collusion and discrimination in our earlier article and book, policymakers began taking notice. While policymakers have acknowledged these risks, their understanding varies, leading to potential gaps in policy responses. Here we’ll examine three policy approaches to tackle these risks. As we’ll see, the overarching concern lies not only in higher prices but also in the fear of an AI-dominated bureaucracy that hinders human autonomy and well-being.

I. THE NARROW APPROACH

Here policymakers and antitrust agencies evaluate whether the conduct facilitated by algorithms violates existing antitrust laws. If not, does the jurisdiction have any legal basis to intervene? If the conduct is potentially illegal, the next issue is whether enforcers can readily detect a violation, prove it in court, and effectively remedy the challenged behavior.

For example, for the hub-and-spoke scenario, the challenge lies in determining whether the agreement is purely vertical or also horizontal. Consider the ridesharing app Uber, which sets fares for drivers. The question is whether this agreement is limited to the hub (Uber) and each driver individually or extends horizontally among the drivers themselves (acting through the hub). This distinction is critical in jurisdictions like the United States, as it determines the legal standard under which the agreement is evaluated. In the United States, purely vertical agreements are reviewed under the rule of reason standard, which, as the Supreme Court recognized, entails an “elaborate inquiry” that “produces notoriously high litigation costs and unpredictable results.”

Tacit algorithmic collusion, on the other hand, does not involve any agreement. Instead, pricing algorithms continuously monitor and respond to competitors’ online pricing, thereby inhibiting rivals from gaining sales through discounts. Detecting and challenging such collusion is difficult, as conscious parallelism alone does not violate the Sherman Act; there must be proof of an agreement. The FTC could perhaps challenge it as an unfair method of competition under its new guidelines. Whether the courts would agree (without evidence of anticompetitive intent) is another matter.
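
To make the mechanism concrete, here is a minimal, hypothetical sketch (in Python) of the kind of pricing rule at issue: it simply monitors rivals’ posted prices and matches any discount at the next refresh. The function name, prices, and cost floor are illustrative assumptions, not any firm’s actual system; the point is that no communication or agreement is needed for a rival’s discount to be neutralized almost immediately.

```python
# Illustrative only: a toy pricing rule that monitors rivals' posted prices
# and matches any discount at the next refresh. Each firm independently runs
# its own copy of this logic; no communication or agreement is involved.

def next_price(own_price: float, rival_prices: list[float], cost_floor: float) -> float:
    """Match the lowest rival price, but never price below cost."""
    lowest_rival = min(rival_prices)
    if lowest_rival < own_price:
        return max(lowest_rival, cost_floor)  # neutralize the rival's discount
    return own_price                          # otherwise, hold the current price

# Example: a rival discounts from 10.00 to 9.00; this firm's algorithm
# matches on the next refresh, so the discount wins few extra sales.
print(next_price(own_price=10.00, rival_prices=[9.00, 10.00], cost_floor=6.00))  # 9.0
```

When every firm in a market runs a rule of this kind, discounting yields little additional volume, which is precisely the dynamic that the conscious parallelism doctrine struggles to reach.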

Besides collusion, algorithms can also facilitate behavioral discrimination: in essence, getting consumers to buy things they might not have wanted, at the highest price each is willing to pay. This goes beyond price discrimination, as the aim here is to manipulate behavior.

Although policymakers have expressed concern over behavioral discrimination, combating it is challenging. One needs to target the surveillance economy and behavioral advertising, something no jurisdiction has succeeded in doing to date. Because behavioral advertising skews incentives (data is collected about us, but not for us), increasing competition is not the answer. As TikTok’s growth reflects, competition becomes a race to the bottom to addict and manipulate us. Traditional competition-enhancing measures will therefore often be insufficient; policymakers must also incorporate privacy and consumer protection measures to realign incentives and combat manipulative practices.

II. SLIGHTLY BROADER APPROACH

This approach acknowledges the challenges in detecting and prosecuting algorithmic collusion. To mitigate these risks, policymakers consider other available tools, such as the merger review process. By employing data scientists and technologists, the agency can identify markets susceptible to algorithmic collusion and evaluate whether proposed mergers would facilitate such behavior.

In assessing the likelihood of tacit algorithmic collusion post-merger, agencies would consider, among other things, whether:

  • the relevant market is concentrated and involves homogenous products;
  • the pricing algorithms can monitor, to a sufficient degree, the competitors’ pricing, other key terms of sale, and any deviations from the current equilibrium;
  • conscious parallelism could be facilitated and stabilized to the extent (i) the rivals’ reactions are predictable, or (ii) through repeated interactions, the rivals’ pricing algorithms “could come to ‘decode’ each other, thus allowing each one to better anticipate the other’s reaction”;
  • once deviation (e.g., discounting) is detected, a credible deterrent mechanism exists (such as the rival firms’ algorithms promptly discounting as well, as the toy simulation following this list illustrates); and
  • the collusion would be quickly disrupted, whether by current or future maverick competitors or by customers.
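
A toy simulation can illustrate why the predictability and deterrence conditions above matter. Under assumed numbers (two firms, identical products, a fixed market size, and a crude rule that the cheaper firm captures all sales), a discount yields a one-period gain that is erased once the rival’s algorithm matches it, so a forward-looking algorithm has no durable incentive to deviate from the higher price. Everything in the sketch is a stylized assumption for illustration, not an empirical claim.

```python
# Stylized simulation (illustrative assumptions only): two firms selling an
# identical product, each running a simple "match the rival's last observed
# price" rule. The demand-splitting rule and all numbers are made up.

COST = 6.0            # marginal cost per unit
HIGH = 10.0           # supra-competitive price both algorithms currently charge
MARKET_DEMAND = 100   # units sold per period, split between the two firms

def demand_split(p_a: float, p_b: float) -> tuple[float, float]:
    """Cheaper firm takes the whole market; equal prices split it evenly."""
    if p_a < p_b:
        return MARKET_DEMAND, 0.0
    if p_b < p_a:
        return 0.0, MARKET_DEMAND
    return MARKET_DEMAND / 2, MARKET_DEMAND / 2

def profit(price: float, quantity: float) -> float:
    return (price - COST) * quantity

# Period 0 (baseline): both algorithms charge the high price -> 200 each.
q_a, q_b = demand_split(HIGH, HIGH)
print("Baseline profits:", profit(HIGH, q_a), profit(HIGH, q_b))

# Period 1: Firm A's algorithm tries a discount while Firm B still charges HIGH.
p_a, p_b = 9.0, HIGH
q_a, q_b = demand_split(p_a, p_b)
print("Deviation period:", profit(p_a, q_a), profit(p_b, q_b))   # 300 vs. 0

# Period 2 onward: Firm B's algorithm detects the cut and matches it.
p_b = p_a
q_a, q_b = demand_split(p_a, p_b)
print("After matching:", profit(p_a, q_a), profit(p_b, q_b))     # 150 each

# A one-period gain (300 vs. 200) followed by 150 per period thereafter means
# a forward-looking algorithm has no durable incentive to discount.
```

Under these assumed numbers, the deviator gains for a single period and then earns less than the baseline in every period that follows, so the supra-competitive price tends to be self-enforcing without any agreement.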

Based on its findings, the agency may not only block the merger but also lower its threshold for reviewing mergers in other markets susceptible to tacit algorithmic collusion. Thus, in more markets, the agency might start looking at 4-to-3, 5-to-4, or 6-to-5 mergers, and reconsider its approach to conglomerate mergers when tacit collusion can be facilitated by multimarket contacts.

Similarly, to prevent hub-and-spoke algorithmic collusion, agencies would scrutinize mergers involving:

  • leading vendors of price optimization algorithms (here the concern is not solely the potential increase in prices charged to the companies using the vendors’ services, but also the softening of competition in downstream markets);
  • acquisitions of mavericks by any of the leading firms that all use the same vendor’s price optimization algorithm; or
  • mergers between competing hubs (such as between Uber and Lyft).

In assessing the risk of hub-and-spoke algorithmic collusion post-merger, agencies would consider, among other things, whether:

  • many competitors in that market outsource their pricing to a third-party vendor;
  • the leading pricing algorithm will likely improve post-merger, as the algorithm will have more data and more opportunities to experiment with higher prices and refine its pricing strategies for each client; and
  • the algorithm, programmed to increase the profits of the vendor’s clients, will likely tamper with market prices (a stylized sketch after this list illustrates this concern).
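
The last point can be made concrete with a stylized sketch. It uses a made-up linear demand model for two client firms; every parameter below is invented for illustration. When each firm prices independently, repeated best responses settle at a lower, competitive level; when a single vendor’s algorithm selects both prices to maximize the clients’ combined profit, it internalizes the rivalry between them and selects substantially higher prices, even though the clients never agree with one another.

```python
# Stylized illustration (all numbers are assumptions): two competing firms
# with linear demand. Compare prices when each firm chooses its own price
# with prices when one vendor's algorithm chooses both prices to maximize
# the clients' combined profit.

import itertools

A, B, D, COST = 100.0, 2.0, 1.0, 10.0     # demand intercept, own/cross slopes, marginal cost
GRID = [p / 10 for p in range(100, 801)]  # candidate prices from 10.0 to 80.0

def quantity(own: float, rival: float) -> float:
    return max(A - B * own + D * rival, 0.0)

def profit(own: float, rival: float) -> float:
    return (own - COST) * quantity(own, rival)

def best_response(rival: float) -> float:
    """The price a firm would pick on its own, given the rival's price."""
    return max(GRID, key=lambda p: profit(p, rival))

# Independent pricing: iterate best responses until prices settle (~40 each).
p1 = p2 = 30.0
for _ in range(50):
    p1, p2 = best_response(p2), best_response(p1)
print("Independent pricing:", p1, p2)

# Common-vendor pricing: one algorithm picks both prices to maximize the
# clients' combined profit (~55 each), internalizing the rivalry between them.
joint = max(itertools.product(GRID, GRID),
            key=lambda pair: profit(pair[0], pair[1]) + profit(pair[1], pair[0]))
print("Common-vendor pricing:", joint)
```

Under these assumed numbers, independent pricing converges to roughly 40 per unit while the common algorithm selects roughly 55, capturing in toy form why agencies worry about softened competition in downstream markets.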

By leveraging merger law, agencies can prevent potential harm. However, their success depends on judicial interpretation of the legal standard. Section 7 of the Clayton Act, for example, reaches “incipient” harms. What this means, as the Supreme Court “ha[s] observed many times,” is that the merger law is “a prophylactic measure, intended primarily to arrest apprehended consequences of intercorporate relationships before those relationships could work their evil.” Given the Act’s incipiency standard, the Supreme Court stated that the merger law “can deal only with probabilities, not with certainties” and that a more onerous legal burden would contradict “the congressional policy of thwarting [anticompetitive mergers] in their incipiency.” Consequently, the agency needs to show only “an appreciable danger” of algorithmic collusion post-merger, not that prices will in fact increase.

Courts must recognize that the purpose of merger law is to address potential harms, rather than certainties, and agencies must present evidence within this framework. The problem arises when courts disagree. Some instead view themselves as “fortune tellers” charged with predicting what will happen post-merger. This soothsaying invites the “battalions of the most skilled and highest-paid attorneys in the nation” to “enlist the services of other professionals — engineers, economists, business executives, academics” to “render expert opinions regarding the potential procompetitive or anticompetitive effects of the transaction.” Judges are not innately gifted fortune tellers. Nor are they charged with this function under the law. But one thing is predictable: the agencies will likely lose these merger challenges if the court requires them to prove, contrary to congressional intent, that prices will increase post-merger.

III. A BROADER ECOSYSTEM APPROACH

The concerns underlying algorithmic collusion and discrimination go beyond economic implications. Policymakers must recognize the broader risks associated with AI and algorithmic decision-making. These risks extend to the potential dehumanization and loss of human autonomy as AI assumes critical functions in society.

Consider, for example, the Center for AI Safety’s statement signed by numerous academics and AI executives: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Why the concern? The Biden administration’s Blueprint for an AI Bill of Rights identifies many risks involving AI, including examples where it has already harmed individuals. Many of the Blueprint’s examples reflect the heightened risks of an AI-dominated bureaucracy. It can be hard to deal with a bureaucracy today, whether seeking a satisfying explanation or remedy. This applies to corporate bureaucracies (such as understanding why the dominant platform delisted an app or product) or government bureaucracies (such as why the agency terminated or never approved a benefit). But the hope is that someone deep in that organization can explain the action and remedy any error.

AI-dominated bureaucracies pose greater challenges when the decision-making becomes more opaque and less accountable. As AI assumes more critical functions, and as the factors considered in reaching that decision increase in complexity, it will be harder to decipher why the algorithm did what it did and obtain relief within the bureaucracy. As the AI-dominated bureaucracy becomes harder to avoid, it becomes more dehumanizing.

Consider one example from the Blueprint: an insurer collects data from a person’s social media presence in deciding what life insurance rates to apply. Now consider the sensitive personal data that AI can also consider in many domains where individuals cannot effectively opt out, such as “health, employment, education, criminal justice, and personal finance.” Suppose the insurer’s algorithm considers and weighs hundreds, if not thousands, of factors. Suppose the company cannot explain exactly how its algorithm arrived at the quoted rate, but it is less concerned given that the AI-generated rates are increasing profits. Now imagine that other insurers, to increase margins, also use AI. What option does the consumer have? Except in the more egregious cases, consumers cannot decipher exactly how the algorithm arrived at the price, identify any spurious correlations, or show how the algorithm improperly used their sensitive personal data.

The implications of AI are, of course, broader. We might not even know if an algorithm was responsible for passing over our resume, denying our client bail, or cutting off a disabled person from Medicaid-funded home healthcare assistance.

Thus, AI, for our purposes, raises at least three overarching concerns.

The first concern is that few people are actually considering the potential broad-scale risks of AI. As one 2023 report observed, “Despite nearly USD 100 billion of private investment in AI in 2022 alone, it is estimated that only about 100 full-time researchers worldwide are specifically working to ensure AI is safe and properly aligned with human values and intentions.”

The second is the legal void over the design, use, and deployment of AI to prevent harm. The Biden administration’s Blueprint, for example, offers five principles to guide this process:

  • safe and effective systems;
  • algorithmic discrimination protections;
  • data privacy;
  • notice and explanation – so that we know when an automated system is being used and understand how and why it contributes to outcomes that impact us; and
  • human alternatives, consideration, and fallback – so that we can opt out, where appropriate, and have access to a person who can quickly consider and remedy problems that we may encounter.

But the Blueprint’s principles, while a good start, are non-binding and do not constitute U.S. governmental policy. So, policymakers must agree upon the principles to address potential harms from AI (including antitrust, privacy, and consumer protection concerns) and upon how to implement these principles into law.

The third concern involves the broader ecosystem underlying AI. A few data-opolies currently dominate many key sectors of the digital economy. Given the volume and variety of data and the tremendous computational power needed to train AI, the fear is that “training the most powerful AI systems will likely remain prohibitively expensive to all but the best-resourced players.” So, while there may be many applications that use ChatGPT or PaLM, only a few powerful companies will control the infrastructure underlying these generative AI models. Here, policymakers must consider the broader societal harms from such concentration, including the impact on innovation paths, and the tools they will need – including updated antitrust, consumer protection, and privacy tools – to mitigate these harms.

Consequently, addressing algorithmic collusion and discrimination requires a multi-faceted approach. The narrow approach informs policymakers of the strengths and weaknesses of their current antitrust tools. The slightly broader approach can point the agency to other tools, such as merger review (or, for behavioral discrimination, privacy law and consumer protection measures), to minimize these risks. Finally, any comprehensive policy response must address not only the algorithmic collusion and discrimination concerns but also the myriad other risks associated with AI. Thus the broader ecosystem approach seeks to promote not only competition but also human autonomy and well-being in an increasingly algorithm-driven digital age.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.