Drawing on her working paper, Giovanna Massarotto discusses three algorithmic approaches that could help Google share its data with rivals fairly and efficiently, as required by the court-mandated remedy for its illegal monopolization of the online search market.
We constantly use online services like Google Search, which we access in exchange for our data. This data is essential for improving the quality of these services: generally, the more data a digital company has collected, the better the product it can offer.
In the last year, two federal courts have found that Google monopolized the online search and digital advertising technology markets. These markets rely on data, and over the years, Google has built an unparalleled data infrastructure through its services, in part by engaging in exclusionary conduct that garnered it the lion’s share of the data users produce in these markets. Competitors like Microsoft and DuckDuckGo want access to such data facilities to compete more effectively in the online search market.
As such, in early September, the court presiding over the Google Search case required Google to share some of its data with competitors, for two reasons. First, requiring Google to share its data would “deny Google the fruits of its exclusionary acts”: the exclusivity agreements that set its search engine as the default choice on a series of web browsers and mobile phones and thereby protected its dominance in the online search market. Second, giving Google’s competitors access to its data would “promote competition” by giving them the chance to build their own rival search indexes, which determine the quality of their search results.
Qualified competitors meeting specific security standards will “receive a one-time snapshot of the relevant data containing Google Search Index” and syndication services, including Google’s search results and search advertising inventory, on commercial terms. This will enable rivals to offer data-based services, including online search, on par with the quality of Google’s services. By next year, another judge is set to impose remedies on Google to address anticompetitive practices in the ad tech industry.
In addition to addressing anticompetitive conduct in the search and ad tech markets, forcing Google to share its data will have far-reaching effects on the tech industry as a whole. The future of the tech industry lies in artificial intelligence, whose underlying large language models require vast amounts of data upon which to train. Access to Google’s data will allow its competitors to develop competitive products in markets beyond online search and ad tech.
How to enforce data-sharing effectively
Google holds an unmatched amount of data, and any data-sharing obligation should include the sharing of data facilities, such as data centers, to be effective. In the Google Search case, by requiring a one-time snapshot of the relevant data, the judge enabled rivals to copy Google’s datasets once. But this implies that rivals would need to duplicate infrastructure similar to Google’s in order to store the same data. Such infrastructure costs billions of dollars, and its duplication is not only inefficient but also raises privacy and environmental concerns. It is no coincidence that companies like Apple and Spotify find it more convenient to store their data in Google data centers than to build their own data facilities.
Beyond the infrastructure costs, sharing a complex database connected to vast data servers poses a classic resource allocation problem in antitrust law: who should get access, and how can fairness, understood as non-discrimination, and efficiency be ensured when establishing a priority order for access? In the past, sharing obligations concerned physical assets like a bridge or an electric grid. In Otter Tail (1973), the Supreme Court imposed on the electric company the duty to sell power to municipalities upon request, which raised an important issue: “[h]ow was Otter Tail to establish priorities among the various competing demands for the use of its grid? The majority’s opinion gave no clue,” some antitrust scholars have observed. A complex problem has become even more so in the digital age. Novel solutions are needed if the prescribed Google remedy is to succeed.
To address this question, my new working paper, “Algorithmic Remedies for Google’s Data Monopoly,” proposes a framework of three algorithmic approaches for non-discriminatory and efficient resource sharing. The paper applies these approaches to Google’s monopolization cases to guide data-sharing remedies and to promote competition in AI and other data-driven markets.
These approaches stem from algorithms solving the mutual exclusion problem in computer science, also known in antitrust as the resource allocation problem, which computer scientists began studying about sixty years ago. They figured out how to share indivisible resources that cannot be used by multiple people at the exact same time without creating a conflict. Consider a printer shared by several people in an office, who cannot all print simultaneously. The printer needs a program, made up of one or more algorithms, that determines who can print first by setting a priority order.
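To make this concrete, here is a minimal sketch of the printer example, with hypothetical user names: a lock enforces mutual exclusion, and a first-come, first-served queue supplies the priority order.

```python
import threading
from collections import deque

# A minimal sketch of the shared-printer example (illustrative names):
# a lock makes printing mutually exclusive, and a FIFO queue fixes the
# priority order, so competing jobs run one at a time, in request order.

print_lock = threading.Lock()              # at most one job prints at a time
job_queue: deque[tuple[str, str]] = deque()

def request_print(user: str, document: str) -> None:
    job_queue.append((user, document))     # queue position = priority order

def run_printer() -> None:
    while job_queue:
        user, document = job_queue.popleft()
        with print_lock:                   # the critical section
            print(f"printing {document!r} for {user}")

for user, doc in [("alice", "report"), ("bob", "memo"), ("carol", "slides")]:
    request_print(user, doc)
run_printer()
```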
Mutual exclusion also concerns databases, where simultaneous access by multiple processes could corrupt data or create inconsistencies, leaving the database in an unreliable state and potentially causing it to stop working. Mutual exclusion algorithms address this problem by providing a baseline coordination guarantee: they prevent the shared resource from entering an invalid state. A similar coordination challenge appears in law when deciding who gets access to a shared resource, such as a shared property (e.g., a house). In antitrust, the more specific question is how a monopolist ought to share its key facilities with rivals efficiently and in a non-discriminatory way to maintain a competitive market.
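A toy sketch of the database case, under the simplifying assumption of a single in-memory record: the lock guarantees that each read-modify-write completes before another begins, so the record never reaches an invalid state.

```python
import threading

# Two writers update a shared record; the lock ensures their
# read-modify-write steps never interleave. This is a toy stand-in
# for a database, not a real engine.

record = {"clicks": 0}
lock = threading.Lock()

def log_clicks(n: int) -> None:
    for _ in range(n):
        with lock:                          # critical section
            current = record["clicks"]      # read
            record["clicks"] = current + 1  # modify and write, atomically

writers = [threading.Thread(target=log_clicks, args=(10_000,)) for _ in range(2)]
for w in writers:
    w.start()
for w in writers:
    w.join()
print(record["clicks"])  # always 20000 with the lock; unreliable without it
```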
The three algorithmic approaches to regulating Google’s data monopoly are token-based, permission-based, and quorum-based. In the first approach, a token is a unique message, which can be seen as a digital right, used to identify who can safely access the shared resource while excluding others. This is analogous to a patent in law, which grants an inventor exclusive rights to a technology. And like patents, tokens require rules governing their transfer. In computer science, the token is transferred, for example, through token-asking mechanisms: anyone seeking access to the shared resource sends a request, and the sequence of requests determines which computer obtains the token next. Every request includes a unique sequence number, which enables the system to maintain an ordered queue of access requests, thereby promoting fairness. Applied to the Google Search case, a token-based approach implies that Google’s rivals would need to ask for a token before accessing Google’s data facilities. Each request would carry a unique sequence number to distinguish old requests from new ones.
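Here is a minimal sketch of such a token-asking scheme, in the spirit of classic token-based mutual exclusion algorithms such as Suzuki–Kasami (my choice of reference; the class and method names, and the simplified single-process setting, are illustrative assumptions, not details from the paper or the cases):

```python
import itertools
import heapq

# A token-asking sketch: each request gets a unique, increasing sequence
# number, requests wait in an ordered queue, and the token is granted to
# the oldest outstanding request first.

class TokenManager:
    def __init__(self):
        self._seq = itertools.count(1)  # unique, increasing sequence numbers
        self._queue = []                # min-heap ordered by sequence number
        self._holder = None             # who currently holds the token

    def request_token(self, rival: str) -> int:
        seq = next(self._seq)
        heapq.heappush(self._queue, (seq, rival))
        return seq

    def grant_next(self) -> str | None:
        if not self._queue:
            return None
        seq, rival = heapq.heappop(self._queue)
        self._holder = rival            # rival may now access the shared data
        return rival

mgr = TokenManager()
for rival in ["DuckDuckGo", "Microsoft"]:
    mgr.request_token(rival)
while (nxt := mgr.grant_next()):
    print(f"token granted to {nxt}")    # oldest request is served first
```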
In a permission-based system, instead of a token, access to the shared resource relies on permission from all participants in the system. In the Google cases, for instance, this could include all the parties with a stake in the case: Google, the Department of Justice, the state attorneys general, and Google’s rivals. Rivals interested in accessing Google’s data facility would ask for everybody’s permission because unanimous consent is required. The permission process would be automated, and if rivals refused to cooperate, none of them would be able to access the shared resource, since each would still depend on the others’ permission. Timestamp mechanisms are typically used to set a priority order among competing requests to ensure non-discriminatory and efficient access. This is comparable to a public land registry, which prioritizes deeds and mortgages according to which is recorded first.
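A minimal sketch of the deferral rule at the heart of such a scheme, in the spirit of the Ricart–Agrawala permission-based algorithm (again my choice of reference; the party names and single-process setting are illustrative assumptions):

```python
import itertools
from dataclasses import dataclass, field

# Permission-based sketch: a requester enters only after every other
# participant grants permission, and competing requests are ordered by
# logical timestamp, earlier first, like deeds in a land registry.

clock = itertools.count(1)  # stand-in for a Lamport-style logical clock

@dataclass(order=True)
class Request:
    timestamp: int                       # earlier timestamp = higher priority
    requester: str = field(compare=False)

def grants(own_pending: Request | None, incoming: Request) -> bool:
    # A participant grants unless its own pending request has priority.
    return own_pending is None or incoming < own_pending

# Two rivals request access; the earlier timestamp wins unanimous consent.
microsoft = Request(next(clock), "Microsoft")
duckduckgo = Request(next(clock), "DuckDuckGo")

print(grants(duckduckgo, microsoft))  # True: DuckDuckGo defers to the earlier request
print(grants(microsoft, duckduckgo))  # False: Microsoft's own request came first
```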
The third, quorum-based approach relies on a decentralized decision-making process in which access is decided by a subset of computers rather than by all computers in the system. By reducing the number of messages and permissions needed to access a shared resource, this method is typically more efficient. Fairness, in turn, is guaranteed through the decentralization of the decision-making process. Quorum-based algorithms also use timestamps to establish a priority order among competing requests. Quorum mechanisms are common in law as well. In corporate law, for instance, a board of directors typically acts only when a quorum is present, and decisions are taken by a majority of the directors present.
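A sketch of the quorum idea, in the spirit of Maekawa’s algorithm (my choice of reference; timestamps are omitted for brevity, and the quorum membership and names are illustrative assumptions):

```python
# Quorum-based sketch: each requester needs grants only from its own
# quorum, and any two quorums overlap, so two requesters can never hold
# grants at the same time.

QUORUMS = {
    "DuckDuckGo": {"p1", "p2", "p3"},  # three of five participants
    "Microsoft":  {"p3", "p4", "p5"},  # overlaps the other quorum at p3
}

granted_to: dict[str, str] = {}  # participant -> requester holding its grant

def try_enter(requester: str) -> bool:
    quorum = QUORUMS[requester]
    if any(p in granted_to for p in quorum):  # a quorum member is taken
        return False
    for p in quorum:
        granted_to[p] = requester
    return True

def release(requester: str) -> None:
    for p in QUORUMS[requester]:
        granted_to.pop(p, None)

print(try_enter("DuckDuckGo"))  # True: p1, p2, p3 all grant
print(try_enter("Microsoft"))   # False: p3 is still granted to DuckDuckGo
release("DuckDuckGo")
print(try_enter("Microsoft"))   # True once the grants are released
```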
We have reached a decisive moment in the United States’ Google antitrust cases. Data sharing represents one of the primary antitrust remedies available in these cases, and today’s digital economy requires new solutions to decentralize the data that runs it. The algorithmic framework developed in my paper rests on mathematical principles that apply whenever resource sharing is required in the context of antitrust regulation. It can be employed when a natural monopoly in a public utility regime must share its resources with the public, or when a common carrier must provide nondiscriminatory access to an essential resource or service. It can also help guide regulation in a rapidly changing technological environment orbiting around the growth of AI, helping maintain fair and efficient markets.
Author disclosure: In the last year, the author has received financial support from the Wharton Initiative on Financial Policy and Regulation (WIFPR) and the Innovators Network Foundation. No funding source influenced the arguments expressed in this article or stands to benefit from them. You can read our disclosure policy here.
Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.





