In new research, Giovanna Massarotto explains how collusion manifests differently in the digital economy. She argues that antitrust regulators, scholars, and courts need to incorporate lessons from computer science to update how they monitor markets and identify algorithmic collusion.


Enforcing competition principles in computer-run digital markets became critical in 2023. This past year saw the widespread use of algorithms, including artificial intelligence, in many facets of our lives. In business, algorithms have become a necessary tool to remain competitive. Their deployment raises concerns about potentially anticompetitive conduct by firms, such as algorithmic collusion to fix prices or allocate markets.

Many legal scholars and legislators are proposing to use ex ante regulation to deal with digital markets (regulation that imposes precise prohibitions and obligations by default, such as the Digital Markets Act) instead of ex post antitrust intervention (which implies an investigation and a response to an infraction, such as a fine or a consent decree). However, ex ante regulation has been unsuccessful in the past, even in less dynamic physical markets, such as electricity and transportation. For instance, George J. Stigler and Claire Friedland’s 1962 study revealed “[t]he ineffectiveness of regulation” in electricity markets, mainly because the “regulatory body [was] incapable of forcing the utility to operate at a specified combination of output, price, and cost.” Regulating ex ante has become even more challenging for governments in fast-moving technological markets run by increasingly sophisticated algorithms. Thus, I propose something different: incorporating lessons from computer science into our current set of law-and-economics tools for the ex post analysis of antitrust violations. Otherwise, our understanding of how computers can engage in anticompetitive behavior will remain incomplete, if not impossible.

To validate my proposal, I connect the collusion problems we deal with in law to the problem of agreeing on a shared value in computer networks. In my working paper, “Using Computer Science to Detect Cheat Tolerant Cartels,” I use the Byzantine Generals Problem (BGP) from computer science as an analogy for the cartel-cheating problem in antitrust law. As its original authors describe it, the BGP poses the following scenario:

We imagine that several divisions of the Byzantine army are camped outside an enemy city, each division commanded by its own general. The generals can communicate with one another only by messenger. After observing the enemy, they must decide upon a common plan of action. However, some of the generals may be traitors, trying to prevent the loyal generals from reaching agreement.

Competitors in a cartel typically need to agree on a price, knowing that some members may cheat by offering an even lower price to capture a larger share of the market. Analogously, the BGP refers to the situation in which computers in a network need to agree on a bit of information (e.g., a price) while knowing that some computers might be unreliable. The intuition is that both the BGP and cartels deal with a “trust” problem in reaching agreement on a piece of information. However, whereas cartels are considered fragile, as they often collapse if one participant cheats by undercutting the agreed-upon price, the protocols governing a computer network operate not by excising undependable participants (the cheaters), but by tolerating a degree of cheating. Present cartel law does not fully capture this dynamic.
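
To make the contrast concrete, the toy sketch below (in Python) compares the two stability rules. The prices, the conformity threshold, and the function names are illustrative assumptions on my part, not anything drawn from the paper.

```python
# A toy contrast between the "fragile" cartel of the classic account and a
# BGP-style cheat-tolerant one. Prices, names, and thresholds are illustrative.

def fragile_cartel(posted_prices, agreed_price):
    """Classic account: a single undercutter triggers collapse."""
    if all(p == agreed_price for p in posted_prices):
        return agreed_price
    return "collapse: competitive pricing"

def cheat_tolerant_cartel(posted_prices, agreed_price):
    """BGP-style account: the scheme holds while fewer than a third of
    members cheat (a bound explained later in this piece)."""
    conforming = sum(p == agreed_price for p in posted_prices)
    if conforming > 2 * len(posted_prices) / 3:
        return agreed_price
    return "collapse: competitive pricing"

prices = [100, 100, 100, 95]  # four members; one "traitor" undercuts at 95
print(fragile_cartel(prices, 100))         # -> collapse: competitive pricing
print(cheat_tolerant_cartel(prices, 100))  # -> 100 (3 of 4, i.e. 75%, conform)
```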

Since the 1970s, computer scientists have designed computer protocols that enable computers to agree on a value by tolerating “cheaters”—known as Byzantine fault-tolerant protocols. I propose that certain mechanisms from these protocols—what I call “cheat tolerance mechanisms”—should be included among the “plus factors” (indicators of collusion) that United States courts presently use to detect cartel agreements. In this way, I construct a framework to detect cartels more effectively by considering the characteristics that increase the likelihood of cheat-tolerant cartels.     

Using the case of cheat-tolerant cartels, I show how a computer science perspective adds something novel that deserves to be considered in the present law-and-economics antitrust analysis.

Using Plus Factors To Identify Cartels

Although cartels are one of the most harmful ways in which firms can violate antitrust laws, cartel agreements are increasingly difficult to detect and prosecute because the economic landscape has changed, in part through the increasingly sophisticated computational tools available to companies.

Section 1 of the Sherman Act prohibits cartel agreements in the U.S., and the Supreme Court has held that proving a Section 1 violation requires evidence of a tacit or explicit agreement. To facilitate proof of a tacit “secret” agreement, U.S. courts have recognized certain plus factors that demonstrate the plausibility of collusion. These factors are based on both legal and economic evaluations, including price parallelism—the movement of competitors’ prices to mirror each other—and industry characteristics that facilitate coordination, such as high barriers to entry, significant market concentration, and regular communication. Although no definitive list or hierarchy of plus factors exists, courts seem to agree on some, such as the four just described. Additionally, courts require proof of mechanisms designed by the cartel to deter cheating, since cartels cannot “long endure” without such a mechanism. For instance, in Kleen Products, the Seventh Circuit affirmed summary judgment in favor of the defendants, reasoning that if there was a cartel “it would have tried to impose disciplinary measures on the ‘cheaters’ who did not go along with the price increase.”
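
As a rough illustration of how one traditional plus factor can be screened for computationally, the sketch below flags price parallelism by correlating competitors’ price changes. The quarterly data, the 0.9 cutoff, and the function names are my own illustrative assumptions; real econometric screens are far more sophisticated.

```python
# A crude screen for price parallelism: pairwise Pearson correlation of
# competitors' price changes (statistics.correlation requires Python 3.10+).
# The price data and the 0.9 threshold are illustrative assumptions.
import itertools
import statistics

def changes(prices):
    """Period-over-period price changes."""
    return [b - a for a, b in zip(prices, prices[1:])]

def parallelism_flags(series_by_firm, threshold=0.9):
    """Flag firm pairs whose price changes are highly correlated."""
    flags = []
    for (f1, p1), (f2, p2) in itertools.combinations(series_by_firm.items(), 2):
        r = statistics.correlation(changes(p1), changes(p2))
        if r >= threshold:
            flags.append((f1, f2, round(r, 2)))
    return flags

quarterly = {
    "FirmA": [100, 104, 104, 109],
    "FirmB": [98, 102, 102, 107],
    "FirmC": [101, 105, 106, 110],
}
print(parallelism_flags(quarterly))
# -> [('FirmA', 'FirmB', 1.0), ('FirmA', 'FirmC', 0.98), ('FirmB', 'FirmC', 0.98)]
```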

Through the analogy between the cartel-cheating problem and the BGP, I show how computers can potentially enable strong cartels capable of tolerating some cheating.

From Byzantine Generals to Cartel Cheaters

In 1982, three researchers at SRI International—Leslie Lamport, Robert Shostak, and Marshall Pease (LSP)—developed solutions to the problem of computers reaching agreement when some computers send unreliable signals, which they called the “Byzantine Generals Problem.” Among LSP’s solutions was the calculation that at least four generals are needed to tolerate one cheater if there are no mechanisms, such as cryptography, to detect cheaters. They found that the problem is impossible to solve with three generals absent signed messages (e.g., cryptography), because the loyal generals have no way to detect who is lying and isolate the traitor so that they can reach agreement. With four participants and multiple rounds of communication, LSP showed that agreement can be reached as long as fewer than one third of the generals are traitors, because each loyal general adopts the majority value, which is the correct one. In the context of markets, this implies that at least 75% of cartel members need to follow the cartel scheme if there are no mechanisms to detect cheating. In contrast to most economic studies on cartels, the LSP theorem reveals that, in theory, a cartel with many members is no more unstable than a cartel with a few members and can be resistant to cheating.
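
To see the bound at work, here is a minimal sketch of LSP’s “oral messages” algorithm, OM(m), run for m = 1. The traitor here simply flips whatever it relays, which is only one of many possible faulty behaviors, and the function names are mine rather than the paper’s.

```python
# A minimal sketch of the Lamport-Shostak-Pease oral-messages algorithm OM(m).
# Illustrative only: the traitor's lying strategy (flipping values) is assumed.
from collections import Counter

def send(sender, value, traitors):
    """A traitorous sender flips the value; loyal senders relay faithfully."""
    if sender in traitors:
        return "retreat" if value == "attack" else "attack"
    return value

def om(m, commander, lieutenants, value, traitors):
    """Return the value each lieutenant decides on after OM(m)."""
    # Step 1: the commander sends its value to every lieutenant.
    received = {lt: send(commander, value, traitors) for lt in lieutenants}
    if m == 0:
        return received
    # Step 2: each lieutenant relays what it received to the others via OM(m-1).
    relayed = {
        o: om(m - 1, o, [lt for lt in lieutenants if lt != o], received[o], traitors)
        for o in lieutenants
    }
    # Step 3: each lieutenant takes the majority of its own value plus the relays.
    return {
        lt: Counter(
            [received[lt]] + [relayed[o][lt] for o in lieutenants if o != lt]
        ).most_common(1)[0][0]
        for lt in lieutenants
    }

# Four generals (commander C plus three lieutenants), one traitor:
print(om(1, "C", ["L1", "L2", "L3"], "attack", traitors={"L3"}))
# Loyal L1 and L2 both decide "attack" despite L3's lies. With only three
# generals (two lieutenants), a loyal lieutenant sees a 1-1 split and cannot
# break the tie -- the impossibility LSP proved for unsigned messages.
```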

Through the analysis of the BGP and its solutions, my study identifies five mechanisms that, if present alongside the legal and economic plus factors used by courts, suggest that a cheat-tolerant cartel exists.

The cheat tolerance mechanisms I identify are:

  1. cryptography;
  2. four participants;
  3. broadcasting;
  4. leader election;
  5. private channels. 

I assessed whether these mechanisms are already captured by the plus factors that courts currently use to identify a cartel agreement. Broadcasting means that a computer sends a message to all other computers connected to the network rather than communicating point-to-point. Broadcasting thus resembles public announcements, which are easily captured by the existing plus factor of “regular communication.” Similarly, courts already consider the presence of a leader who facilitates coordination among participants, which is quite common in cartels, although there is no formal “leader election” plus factor. In computer science, a leader is elected to introduce a single point of coordination, the leader node, which typically decides upon the agreed value. Conversely, the fact that a cartel needs a minimum of four participants to tolerate a cheater is not part of the current plus factors. The use of cryptography and private channels to secure communication through encryption and to deal with cheating would also be novel to cartel-agreement analysis. Thus, I suggest considering these three cheat tolerance mechanisms as new plus factors in the identification of cartel agreements.
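
As a sketch of how the combined framework might operate as a screening checklist, consider the following. The factor names, the scoring rule, and the flagging condition are illustrative assumptions on my part, not doctrine from case law or a method prescribed in the paper.

```python
# A hypothetical screen combining the traditional plus factors with the three
# proposed cheat tolerance factors. Everything here (names, scoring, flagging
# rule) is an illustrative assumption, not an established legal test.

TRADITIONAL_PLUS_FACTORS = {
    "price parallelism", "high entry barriers",
    "market concentration", "regular communication",
}
CHEAT_TOLERANCE_FACTORS = {
    "at least four participants", "cryptography", "private channels",
}

def screen(evidence: set[str]) -> dict:
    """Count which factors of each kind appear in the evidence set."""
    traditional = evidence & TRADITIONAL_PLUS_FACTORS
    cheat_tolerance = evidence & CHEAT_TOLERANCE_FACTORS
    return {
        "traditional factors found": sorted(traditional),
        "cheat tolerance factors found": sorted(cheat_tolerance),
        # Flag for closer review when both kinds of indicators co-occur.
        "flag cheat-tolerant cartel": len(traditional) >= 2 and bool(cheat_tolerance),
    }

# An evidence pattern loosely resembling the one described below in Kleen Products:
print(screen({"price parallelism", "regular communication", "high entry barriers",
              "market concentration", "at least four participants"}))
```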

Not only courts but also antitrust agencies and law enforcers more generally can benefit from the BGP framework to detect this type of collusion more effectively. For instance, in the previously discussed Kleen Products case, the court did not find a Section 1 violation. Under the framework I suggest, however, a cartel agreement resilient to cheating appears to have been present. The minimum requirement of four participants was satisfied, and the companies made public announcements reminiscent of broadcasting. For instance, it was noted that “[a]fter one company announced that it would raise its prices for containerboard, the rest followed suit with identical or comparable increases.” There was also evidence of a possible leader. The vice president of one of the defendants “made remarks in an email that could easily be construed as an undertaking to follow-the-leader,” the court observed. Additionally, “the defendants were in regular communication,” and the traditional structural characteristics conducive to collusion were also present: high barriers to entry and market concentration.

Unlike most computer science studies on collusion, which focus mainly on the potential of standalone AI algorithms to collude tacitly (currently legal in the U.S.), my paper investigates the agreement protocols that ensure consistency among computers. The premise is that one cannot conspire with oneself; mechanisms of coordination and consensus are therefore necessary to violate Section 1. These mechanisms of coordination in the digital economy seem more sophisticated than those of the analog past, and antitrust regulators and courts need to update their tools for oversight accordingly.

Articles represent the opinions of their writers, not necessarily those of ProMarket, the University of Chicago, the Booth School of Business, or its faculty.