Academic writings on the optimal design of antitrust rules fail to pay sufficient attention to enforcement predictability as a relevant factor. In new research, Jan Broulík analyzes the various ways in which predictability is disregarded and their possible underlying reasons.

Society benefits from antitrust most when it discourages all anticompetitive (harmful) conduct by companies without discouraging any procompetitive (beneficial) conduct. Failing to do the former is known as under-deterrence and the latter as over-deterrence. Antitrust enforcement needs to be both accurate and predictable to avoid these two pitfalls.

Accuracy refers to the degree to which conduct found to be lawful/unlawful actually corresponds to conduct that is beneficial/harmful to society. Accuracy tends to be greater when rules mandate that enforcers carry out an extensive factual analysis in every case. Such extensive factual analysis has become the default standard in the United States’ legal system for adjudicating possible violations of the Sherman Act. Accurate enforcement prevents over-/under-deterrence by associating sanctions only with harmful conduct and not with beneficial conduct. If companies expect to be punished for actions that are actually beneficial to their customers, workers and suppliers, they will not take those actions. Conversely, if they expect not to be punished for conduct that actually inflicts harm, such as price gouging, they will engage in that conduct.

As for predictability, the notion refers to how well businesses can anticipate whether a specific instance of conduct will be found unlawful or lawful by regulators and the courts. Businesses are more likely to make a correct prediction when the rules applicable to their conduct do not call for an extensive factual analysis but, instead, are simple and rely only on a limited set of easily discernible facts. This is how per se rules are supposed to work. When enforcement is unpredictable, companies don’t know in advance for which conduct they will eventually be sanctioned and for which they will not. It is then of little value that adjudicators can, ex post, perfectly distinguish between conduct that is beneficial and conduct that is harmful. Companies will be discouraged from behaving procompetitively and/or encouraged to behave anticompetitively if they cannot tell, before engaging in the conduct, what is legal and what is not. In other words, to completely avoid over-/under-deterrence, antitrust enforcement needs to be both accurate and predictable.

In reality, we have to settle for a second-best antitrust in which we must choose between the accuracy of extensive case-by-case assessments or the predictability of simplified rules. The eternal quest for antitrust policymakers—and scholars studying the issue—is thus to identify an attainable combination of accuracy and predictability that best achieves antitrust’s goal of deterring anticompetitive conduct without deterring procompetitive conduct. It is clearly undesirable if this trade-off is overlooked, be it due to misapprehension or intentional deceit.

Not taking predictability seriously

Antitrust scholars tend to disregard the contribution of predictability in deterring anticompetitive conduct, instead focusing excessively on the problem of error (i.e. inaccuracy). They often trade off the higher accuracy of case-by-case assessments only against the resources spent by public and private actors to implement such assessments (administrative costs). As Paul L. Joskow points out, however, administrative costs tend to be relatively small in comparison to the costs of over-/under-deterrence. So, if one takes into account only accuracy as a determinant of deterrence (while assuming no problems with predictability), the counterweight exerted by administrative costs will be only marginal. In other words, case-by-case assessments will appear much more valuable as a method of deterrence than they would if predictability were taken into consideration. If it were, it would become clear that case-by-case assessments actually lead to significant over-/under-deterrence due to their limited predictability.

Another mistake the literature makes is to bundle the societal costs of enforcement unpredictability with the administrative costs. The literature adds up these two types of cost and then pits the total against the costs of inaccuracy, suggesting that only the latter has to do with deterrence as the primary concern of antitrust. The former costs are, in contrast, portrayed as only an inevitable secondary nuisance. This may be true of administrative costs, but the effects of lacking predictability are substantively the same as those of error. As explained above, both take the shape of business conduct not being steered as antitrust desires.

Similarly, a habit in the scholarship of referring to any deterred beneficial conduct or non-deterred harmful conduct as an “error” is another factor undermining the role of predictability in antitrust regulation. The habit conflates actual errors with over-/under-deterrence. The former notion should be reserved only for enforcement decisions that find beneficial conduct unlawful (type-1 error) or harmful conduct lawful (type-2 error). Such decisions are clearly distinct from actions taken by businesses to refrain from beneficial conduct or to engage in harmful conduct based on their information about enforcement. As explained above, enforcement that is accurate but unpredictable will still lead to over-/under-deterrence.

Take, for instance, Keith Hylton and Michael Salinger, who include in the definition of type-1 errors not only benign instances of business conduct that an adjudicator finds to be unlawful but also “benign occurrences that do not occur because of the belief that they could be challenged in court” even if they actually “would not be found in violation of the law if they went to trial.” They thus use the language of error to describe an instance of over-deterrence and, what is more, one that has actually been caused by unpredictability. This illustrates that over-/under-deterrence is the consequence while error and unpredictability are the causes. Failing to appreciate this difference obfuscates the respective contributions of accuracy and predictability to antitrust’s mission.

Reasons for the disregard for predictability

There seem to be two main possible reasons why the antitrust literature does not give predictability its due. First, antitrust economists mostly come from the economics subfield of industrial organization (IO). IO focuses on firm behavior and has much more to say about how business conduct produces competitive effects than about how laws (and the predictability thereof) influence business conduct. IO findings and arguments moreover significantly influence legal antitrust scholarship.

Second, some antitrust scholars may downplay predictability because it is to their benefit. Unpredictable rules require extensive case-by-case assessments, which in turn increases the demand for the services of antitrust lawyers and economists, who often publish as scholars. As observed by Louis Kaplow, one must then keep in mind that their advice on the design of the rules “may be tinged by self-interest.”

In addition, extensive case-by-case assessments in practice rarely lead to a finding of harmful conduct for the potential violators. For example, Michael A. Carrier shows that the application of the rule-of-reason standard, whereby the courts calculate the benefits and costs of potentially anticompetitive behavior, virtually never leads to the finding of a violation. Given that antitrust practitioners use publications as advertising vehicles, they are likely to defend positions suiting defendants as their likeliest clients. Andrew I. Gavil, John E. Lopatka, and William H. Page add that the litigants may also directly sponsor scholarship aligned with their cause.

Antitrust enforcement predictability does not necessarily serve violators. To be sure, one could advocate antitrust rules that are predictable but limit market interventions. As Ryan Stones argues, the Chicago School aimed to achieve this by emphasizing the harmfulness of type-1 errors. However, there is no need to bundle predictability with aversion to type-1 errors. It is absolutely justifiable to adopt rules that are simple to administer and do sometimes designate as unlawful conduct that is actually not harmful, as long as the deterrence effects are overall societally beneficial. The entire antitrust community needs to become less obsessed with the idea of “getting it right” in every case and start thinking seriously about second-order considerations such as predictability.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.