In new research, Niuniu Zhang discusses how regulators can add “noise” to market data to preclude tacit collusion through algorithmic pricing software without hampering legitimate market practices.


Algorithmic pricing systems already decide how much we pay for rent, groceries, flights, and nearly everything sold online. Behind every “smart” price tag sits an algorithm digesting torrents of market data to recommend the “optimal” price. What began as a tool for efficiency has quietly become the indispensable fabric of the digital economy.

But when competing firms turn to the same or similar pricing algorithms, something eerie happens: prices start moving in lockstep. No phone calls, no secret meetings: just parallel code learning from the same data. The result can look indistinguishable from an old-fashioned pricing cartel, except there’s nothing for antitrust enforcers to subpoena. This tacit collusion (as opposed to explicit collusion, which leaves paper trails) has been particularly prevalent in rental markets: the Department of Justice has sued RealPage, a property management software provider, for enabling landlords in several geographic markets to use its pricing software to coordinate and raise rents above competitive levels. Instead of waiting to catch this elusive form of collusion after the fact, policymakers can blunt it at its source. In recent research, I explore the idea that by regulating the fidelity of the market data these algorithms consume—by making it just noisy enough—we can slow or even prevent collusion before it begins.

What is algorithmic collusion?

Imagine you run a gas station in Westwood, Los Angeles. You want to raise prices, so you pick up the phone and call the competitor across the street: “Let’s both bump the price up by ten cents.” That’s classic price-fixing: explicit coordination and a clear violation of antitrust law. What makes you liable isn’t the higher price itself, but the act of communicating to coordinate.

Now fast-forward to today’s algorithmic economy. Suppose you’re a landlord in Santa Monica, California. Like many others, you adopt an automated pricing tool that scans local rental listings, estimates demand, and recommends daily rents to maximize revenue. You never talk to another landlord. Yet from the outside, rents across the city begin to rise in tandem, week after week, building after building. No collusive meetings, no emails, no handshakes. Still, the outcome closely resembles a housing cartel in which rental prices are illegally fixed. That’s algorithmic collusion: coordinated market outcomes without coordination in the traditional sense. The software doesn’t need intent; it simply learns from the same data, reacting to rivals’ moves until prices converge. What used to require a smoke-filled room can now emerge from a shared spreadsheet.

Policing this behavior has proven nearly impossible. Earlier this year, Santa Monica banned algorithmic rental-pricing software outright, citing evidence that it was pushing rents higher during a housing crisis triggered by wildfires. The ban is less a triumph of regulation than an act of desperation, as pricing tools are now integral to modern commerce. The deeper problem lies in outdated laws that cannot recognize algorithmic coordination without a smoking-gun email.

The 2023 lawsuit and subsequent investigation into RealPage for coordinating illegal price-fixing among landlords illustrate this gap. Landlords weren’t punished for using algorithmic pricing software; charges were pressed only after investigators found that 14 of them had agreed to share competitively sensitive data and collectively adopt RealPage’s price recommendations. Enforcement caught the exchange, not the code. In other words, the problem wasn’t the algorithm itself: it was the human fingerprints surrounding it. And the gap is growing fast. Algorithms update in milliseconds; courts move in years. As pricing tools advance, relying on elusive proof of communication is like enforcing traffic laws with a horse-drawn patrol.

Why does regulating data fidelity work? 

No matter how sophisticated a pricing algorithm is—reinforcement learner, large language model, or traditional optimizer—it shares a common “weakness”: it needs market data to function. Control the input, and you can meaningfully shape the output.

This is not a radical idea. Computer scientists have been managing information fidelity for years through differential privacy, a technique that adds carefully calibrated noise to data so that individual records stay private while overall statistics remain accurate. Even Google’s latest VaultGemma language model uses it to prevent malicious parties from recovering sensitive details, such as credit card numbers or personal identifiers, from the training data.
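For readers who want to see the mechanics, here is a minimal Python sketch of the Laplace mechanism at the heart of differential privacy. The price, sensitivity, and epsilon values are illustrative assumptions for this article, not parameters from my research or from any deployed system.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy version of a statistic with differential privacy.

    Smaller epsilon -> more noise -> stronger privacy, lower fidelity.
    Larger epsilon  -> less noise -> weaker privacy, higher fidelity.
    """
    scale = sensitivity / epsilon  # Laplace scale b = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative: publish an average market price of $4.00 when any single
# record can move that average by at most $0.05 (the sensitivity).
noisy_price = laplace_mechanism(true_value=4.00, sensitivity=0.05, epsilon=0.5)
print(f"Released average price: ${noisy_price:.2f}")
```

The single dial, epsilon, is what makes the technique attractive for regulation: it turns “how much fidelity should the market see?” into a number a policymaker can set.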

Here’s how it helps. Imagine a future where every pricing algorithm has perfect information: real-time access to every rival’s demand and inventory. In that world, prices would synchronize instantly; collusion would become the default. Now imagine the opposite extreme: the market data is so noisy that algorithms can barely infer anything useful. Prices would swing randomly, undermining both firms and consumers. Revisiting the Westwood gas-station scenario, imagine checking your pricing algorithm’s data feed. Instead of a clean feed where every nearby station is exactly $4 per gallon, it now sees a slightly noisy signal: $3.94 here, $4.02 there, or even an absurd $10 when the noise is excessive. With moderate noise, the algorithm still works, but it no longer tracks competitors perfectly; it optimizes against a fuzzier picture of the market rather than a mirror image.
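To make that trade-off concrete, here is a hedged illustration in Python; the prices and noise levels are invented for the example and do not come from any real feed or from my model.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical clean feed: ten nearby stations all charging $4.00 per gallon.
clean_feed = np.full(10, 4.00)

for sigma in (0.05, 3.00):  # moderate noise vs. far too much noise
    noisy_feed = clean_feed + rng.normal(0.0, sigma, size=clean_feed.shape)
    print(f"sigma={sigma:.2f}: first readings {np.round(noisy_feed[:3], 2)}, "
          f"estimated market price ${noisy_feed.mean():.2f}")
```

At sigma = 0.05, individual readings blur by a few cents and the estimated market price stays close to the truth, so legitimate pricing still works. At sigma = 3.00, readings swing by several dollars and can even go negative: exactly the erratic regime a regulator must avoid.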

Think of it like driving through a light fog: you still see the road and the car ahead, but not well enough to tailgate. That modest blur keeps drivers safe by deterring reckless bumper-to-bumper behavior without blinding everyone. Injecting modest noise into pooled market data serves as that fog; it preserves visibility for legitimate pricing but prevents algorithms from driving bumper-to-bumper.

The power of this approach lies in being preventative. It doesn’t wait for collusion to happen or for regulators to prove intent. It acts directly on the raw material of algorithmic coordination, the data itself: instead of reading a competitor’s exact price of $5.00, the algorithm might see $5.20 or $4.80, a carefully measured variation that keeps coordination in check. And because every pricing algorithm, from rental pricing to supply-chain optimization, relies on data inputs, the principle generalizes broadly. With proper calibration, data-fidelity regulation can blunt coordination before it starts. It sidesteps the impossible task of finding a digital smoking gun and instead makes the gun misfire.

Every drug has side effects

No medicine is completely risk-free, and no policy is either. Arbitrarily adding noise to market data isn’t a cure-all for algorithmic collusion. As with fog on a highway, the right amount can make driving safer, but too much, and everyone loses sight of the road. If regulators go too far, even honest businesses could struggle: your neighborhood bodega might wake up charging $50 for coffee one morning and fifty cents the next. The goal isn’t to scramble prices, but to prevent them from moving in perfect unison. There is a middle ground: enough blur to break collusion, but not so much that everyday pricing becomes erratic.

But how do regulators know where the balance lies? They start by defining what “too far” means: setting clear guardrails for how much legitimate business profit and consumer welfare can reasonably fluctuate. These limits can be drawn from historical data or policy goals. Once those thresholds are set, my findings can be used to identify the highest safe level of noise that keeps market outcomes within range. The higher the noise, the harder it is for algorithms to coordinate; and because the ceiling is mathematically defined, regulators don’t risk crossing the line. It’s not guesswork or trial and error; it’s a calculated balance that limits harm while preserving competition.
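The ceiling in my research is derived analytically, but the logic can be sketched with a toy search. In the hypothetical Python snippet below, `simulate_welfare_loss` is a made-up placeholder for whatever model or historical evidence a regulator trusts; nothing here reproduces the paper’s formulas.

```python
import numpy as np

def simulate_welfare_loss(sigma: float) -> float:
    """Hypothetical placeholder: expected distortion (in percent) of profit
    and consumer welfare when market data carries noise of scale sigma."""
    return 0.8 * sigma ** 2  # toy convex relationship, purely illustrative

def highest_safe_noise(tolerance_pct: float, grid: np.ndarray) -> float:
    """Largest noise level whose simulated distortion stays within the guardrail."""
    safe = [s for s in grid if simulate_welfare_loss(s) <= tolerance_pct]
    return max(safe) if safe else 0.0

# Suppose regulators tolerate at most a 2% distortion in market outcomes.
grid = np.linspace(0.0, 5.0, 501)
print(f"Highest safe noise level: sigma = {highest_safe_noise(2.0, grid):.2f}")
```

Because distortion grows with the noise level, the search has a well-defined maximum: everything below it is safe for ordinary commerce, and the maximum itself makes coordination as hard as the guardrails allow.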

The beauty of this approach is that it’s tunable. As markets evolve, regulators can revisit the acceptable tolerances for profit and welfare distortion, and the noise automatically adjusts to stay within those limits. In practical terms, the policy works like a thermostat: precise enough to preserve fair competition, steady enough to keep both businesses and consumers comfortable. The principle is simple: moderation guided by evidence, not extremes.

Policy takeaway

The bigger picture is about timing. Traditional enforcement waits for proof, such as emails or calls, long after prices have already aligned. Algorithms, meanwhile, can iterate millions of times before a single case begins. By the time enforcers act, the damage is done. In this sense, prevention beats punishment. We don’t need to outsmart or outlitigate the algorithms. We just need to level the playing field by removing their market clairvoyance. A small, well-calibrated amount of noise today can preserve fairness for both consumers and businesses tomorrow. In an economy increasingly run by code, the right kind of blur might be our clearest defense of fair competition.

Author Disclosure: The author reports no conflicts of interest. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.
