Oliver Budzinski and Victoriia Noskova discuss in their publication why merger simulations are not more widely used by competition authorities and before courts to predict the future effects of mergers, despite advances in data availability, AI, and computational power. The institutional setting is an essential factor in whether computational antitrust tools are accepted and applied by competition authorities.


Merger simulation is a computational antitrust tool that applies advanced economic and econometric methods to quantitatively predict the effects of a merger between companies, based on assumptions about the pre-merger situation.

In virtually all merger control jurisdictions of the global economy, decisions to allow, block, or modify a notified merger are made based on an assessment of expected post-merger effects. The main traditional tools to assess these effects are two types of expert opinion: industry witnesses and experts from outside the industry (e.g., academics). These opinions are more or less influenced by individuals’ subjective views and interpretations and may be driven by an expert’s incentive structure, especially in the case of industry witnesses. Thus, the decision-making body (be it a court, as in the U.S., or an administrative body, as in the E.U.) is often confronted with opposing assessments.

Part of the interest in merger simulation models rests on the anticipation that a data-based, quantitative approach offers a more neutral and objective prediction of the merger’s effects, reducing the ambiguity of merger control controversies and leading to better decisions for competition and welfare.

Regardless of the variety of types, most merger simulation models share common underlying assumptions and can be described by the following procedure: (1) building a model of the market by selecting a demand shape and a form of competition; (2) calibrating the model with past (pre-merger) data in order to assess how well the assumed competition model and demand function fit; (3) running the full simulation based on the previous steps. As an outcome, a new post-merger situation is predicted. Simulations usually aim to predict effects on market prices and quantities as well as on consumers’ and producers’ rents, but they can also include predictions of product variety and other relevant factors of competition.
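To make these three steps concrete, the following is a minimal, hypothetical sketch of such a procedure, assuming a logit demand system, Bertrand-Nash price competition among single-product firms, and an invented price-sensitivity parameter; all prices, shares, and the identity of the merging parties are illustrative assumptions rather than data from any actual case.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical pre-merger data (invented for illustration only)
ALPHA = 2.0                            # assumed price-sensitivity parameter
prices = np.array([1.0, 1.1, 0.9])     # observed pre-merger prices
shares = np.array([0.30, 0.25, 0.20])  # observed pre-merger market shares
outside = 1.0 - shares.sum()           # share of the outside option

# Steps 1-2: choose logit demand plus Bertrand competition and calibrate the
# primitives (mean utilities, marginal costs) so that the model reproduces the
# observed pre-merger prices and shares exactly.
quality = np.log(shares / outside) + ALPHA * prices
costs = prices - 1.0 / (ALPHA * (1.0 - shares))

def logit_shares(p):
    """Market shares predicted by the calibrated logit model at prices p."""
    util = np.exp(quality - ALPHA * p)
    return util / (1.0 + util.sum())

def bertrand_foc(p, ownership):
    """Stacked first-order conditions of Bertrand price setting."""
    s = logit_shares(p)
    jac = ALPHA * (np.outer(s, s) - np.diag(s))  # d shares / d prices (logit)
    return s + (ownership * jac) @ (p - costs)

# Step 3: simulate the post-merger equilibrium. Here products 0 and 1 are
# assumed to come under joint ownership after the merger.
ownership_post = np.eye(3)
ownership_post[0, 1] = ownership_post[1, 0] = 1.0

prices_post = fsolve(bertrand_foc, prices, args=(ownership_post,))
print("pre-merger prices: ", prices)
print("post-merger prices:", np.round(prices_post, 3))
```

In this stylized setup, the calibration step guarantees a perfect fit to the pre-merger data, and the final step predicts how far the merged firm would raise prices under the assumed demand and competition model.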

Do Merger Simulation Models Provide Better Predictions?

We consider how advancements in computational power and methods could influence the assessment of the relevance of merger simulation models in comparison to about a decade ago. We address four areas of reflection that should be considered when pondering the role of merger simulations in antitrust practice, starting with the ability to provide better predictions of merger effects.

Do merger simulation models offer more accurate predictions than witnesses and experts? Indeed, elaborate simulation models may provide a good fit to pre-merger market developments (due to calibration with real-world market data). Although their performance depends on modeling and data-analysis competences as well as on data availability, merger simulation models do deliver on the promise to reduce the scope for possible biases toward personal beliefs and opinions.

At the same time, a number of considerations limit the power of merger simulation models. Despite advances in modeling techniques and computational power, the complexity of real-world markets often still cannot be fully captured; an example is the modeling of more complex market environments such as digital ecosystems. In addition, predictions of post-merger effects always rest on the assumption that the fundamental structure of competition remains the same after the merger. Only if this is the case can one expect an impressively high-quality prediction.

However, some cases, such as mega-mergers and mergers that significantly reshape the competitive landscape, are likely to introduce structural breaks, altering the dynamics of competition. While these cases may be few in total, they play an especially crucial role in merger control and in protecting the competitive process, because they are usually the critical cases in which authorities consider intervention (whereas the majority of mergers remains unchallenged in practice). Thus, the accuracy of merger simulation predictions decreases significantly when structural breaks occur. Since the likelihood of structural breaks, as well as of deviations from historical patterns, increases over time, merger simulation models are more effective at predicting short-run effects than mid- to long-run effects.

Furthermore, the unpredictable nature of innovations and external market shocks contributes to this pattern. In general, this limitation is not unique to merger simulation models and applies to all types of predictive, forward-looking evidence. Since the future is not predetermined and is only created as events occur, even the most advanced computational prediction methods cannot offer fully accurate numerical predictions. Recognizing this limitation becomes important in certain institutional settings.

Advances in computational power, including artificial intelligence (AI), could lessen some of the abovementioned limitations (see the discussion in the publication), although not fully eliminate them, and thereby increase the hypothetical effectiveness of merger simulations.

Institutional Settings of Antitrust Policy

A review of crucial and controversial merger cases shows that merger simulation models are used infrequently, meaning that institutions are forgoing the potential improvements this tool offers. Thus, a better understanding is needed of the institutional conditions for capturing the benefits of merger simulation models.

From the perspective of institutional-economic theory, hypothetical effectiveness is not enough – the tool should fit into the rules and practices that shape the actual process of merger control, i.e. its institutional framework. The main issue for the actual effectiveness of merger simulation models is how computationally-produced evidence is perceived and accepted by authorities and courts, and how it intersects with the institutions of the administrative and legal procedure.

A lack of fit with institutions is the main barrier to the successful use of merger simulations and of computational-quantitative evidence in general. This under-use of the tool has already contributed to the enforcement gap in merger control, in which potentially anticompetitive mergers were allowed. On the bright side, this situation has gained recognition and has led to calls for stronger merger control across all types of mergers, and even to discussions of company breakups and of reversing previous decisions, such as the Facebook-Instagram-WhatsApp mergers.

The reason for the lack of fit is that the existing design of procedural rules and practices is not suited to the inherent characteristics of evidence produced by sophisticated quantitative tools, including merger simulation models. Among the reasons is the complexity of computational instruments, which is incongruent with the typical knowledge and incentives of judges and courts, making it necessary to include expert testimony on the quality of merger simulation models as a tool for merger review.

This frequently leads to a battle of experts from competition authorities and from companies, the consequences of which are driven by the allocation of the burden of proof: if judges cannot identify the more accurate among competing merger simulation models presented by the parties, they are incentivized to rule against the side that carries the burden of proof, as that side can be deemed to have failed to provide sufficient evidence. This de facto raises the standard of proof applied to computational predictive tools, although unintentionally and as a side effect rather than a deliberate decision by competition policymakers.

Moreover, the predictive nature of merger simulation models can create a false sense of precision and certainty, since they produce a specific numerical outcome. Such an outcome can easily be challenged by a counterparty and create doubts about the trustworthiness of the results, e.g., by showing how minor changes of parameters lead to different outcomes. This may undermine the trust of judges and juries in merger simulation models even when the direction of the effects remains robust and unchallenged.
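To illustrate this point with a toy example, the sketch below reruns the same kind of hypothetical logit/Bertrand simulation under slightly different assumed price-sensitivity values: the predicted size of the price increases shifts with the assumption, while their direction does not. All numbers are invented for illustration only.

```python
import numpy as np
from scipy.optimize import fsolve

# Invented pre-merger data; products 0 and 1 are assumed to merge.
prices = np.array([1.0, 1.1, 0.9])
shares = np.array([0.30, 0.25, 0.20])
outside = 1.0 - shares.sum()
ownership_post = np.eye(3)
ownership_post[0, 1] = ownership_post[1, 0] = 1.0

def simulate(alpha):
    """Calibrate a logit/Bertrand model for a given price sensitivity
    and return the predicted post-merger equilibrium prices."""
    quality = np.log(shares / outside) + alpha * prices
    costs = prices - 1.0 / (alpha * (1.0 - shares))

    def foc(p):
        util = np.exp(quality - alpha * p)
        s = util / (1.0 + util.sum())
        jac = alpha * (np.outer(s, s) - np.diag(s))
        return s + (ownership_post * jac) @ (p - costs)

    return fsolve(foc, prices)

# Small perturbations of the assumed parameter change the predicted
# magnitudes, but the sign of the price effect stays the same.
for alpha in (1.8, 2.0, 2.2):
    delta_p = simulate(alpha) - prices
    print(f"alpha = {alpha}: predicted price changes {np.round(delta_p, 3)}")
```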

Counter-intuitively, the advancement of computational power (including AI) could have the opposite effect on the actual effectiveness of merger simulation models and deepen the lack of institutional take-up by multiplying the concerns raised above (as suggested by actual experiences with merger simulation models in several high-profile cases such as Volvo/Scania, Nuon/Reliant, Oracle/PeopleSoft and, most recently, AT&T/Time Warner).

Costs and Benefits to Antitrust Authorities

As a general rule, an effective tool should be characterized by its benefits exceeding its costs of application. However, there is not yet clear evidence regarding the costs and benefits of merger simulation models. Performing merger simulations involves specific types of costs that can be significant for competition authorities, e.g., costs of obtaining and processing the necessary data, costs of employing staff with expertise in advanced theoretical economics and econometrics, etc. Some of these costs could fall if the promise holds true that computational power becomes more available, provides more support in the administration of merger law, and reduces its costs. Still, for the near future such tools will remain suitable only for large and well-equipped antitrust authorities.

Merger Simulations for Previous Decisions

Another potential use of merger simulation models is in ex-post evaluation, where they can serve as a policy-learning tool. Here, post-merger data can be used to understand whether the actual effects met the expectations underlying the past merger control decision. The knowledge created this way can be used to shape better merger control rules and to draft better decisions in the future. Data availability remains the limiting factor for ex-post tools, but more sophisticated computational methods may contribute to overcoming it.

Conclusion

In our paper, we identify and address four areas that contribute to understanding current trends in the application of merger simulation models. From an industrial economics perspective, an increase in the application of merger simulation models is clearly favorable, given the promise this tool holds for advancing merger control on the basis of more neutral, objective, and precise data-driven predictions.

However, the empirical record of merger simulation models shows that their use increased only until the mid-2010s and stagnated or even decreased afterward. An institutional economics perspective provides a reason why: the lack of institutional fit. The economic policy perspective that considers incentives within competition authorities and courts is rather underdeveloped in this regard.

Still, the typical incentives of authorities and courts do not favor an extension of the application of merger simulation models, and computational improvements yield ambivalent results because they come with additional costs for those who apply them. The least controversial application of merger simulation models so far is ex-post analysis, which is usually overlooked but is relevant for policy learning.

In summary, our main conclusion is: institutions matter! The success of a computational instrument in competition policy proceedings, both administrative and court-based, depends more on the institutional design of the proceedings than on the (still desirable) advancement of the methods themselves.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.