Two recent articles by Michal Gal and Ramsi Woodcock look at pricing algorithms and consumer harm. Both identify potential nightmare scenarios and question the ability of existing antitrust laws to prevent them from happening.
Over the past several years, there has been significant interest in the question of how advertising-supported, “free” online services such as search and social networking should be viewed through an antitrust lens. But somewhat less attention has been paid to the rise (and some of the consequences) of another feature of the digital economy: the growing use of algorithms to set and adjust prices.
Two recent articles look at pricing algorithms and consumer harm. The first, “Personalized Pricing as Monopolization” by Ramsi Woodcock, focuses on the ability of firms to use data, analytics, and artificial intelligence to show each consumer his or her own individually tailored price. In this scenario, which Woodcock predicts will grow increasingly prevalent in the economy, a firm is able to collect and analyze enough information about an individual’s willingness to pay to set a price at the individual’s “reservation price,” that is, the highest price the individual is willing to pay without walking away. In effect, companies will use our own data against us. The goal of personalized pricing, as Woodcock sees it, is for the seller to capture all the gains from trade (the “consumer surplus”), leaving the buyer effectively no better off than if he or she had not transacted at all. Personalized pricing approximates what economists call “perfect price discrimination.” In the past, perfect price discrimination was largely confined to theory, but no longer.
The second article, “Limiting Algorithmic Cartels” by Michal Gal, builds on experimental evidence suggesting that certain pricing algorithms are able to learn on their own to coordinate and to push prices above a competitive level. Like effective cartels, these algorithms are able to return to high prices rapidly by “punishing” a “cheater” that cuts its price. Significantly, there does not have to be an agreement or “meeting of the minds,” a required element for an antitrust violation involving collusion. Companies do not need to use identical pricing algorithms. Nor does anyone need to intentionally develop a pricing algorithm with the object of price coordination in mind. Rather, having been programmed to maximize profits, an algorithm may figure out this pricing strategy on its own through trial and error. In other words, computerized price setting may give rise to a new and perfectly legal form of price-fixing, even though price-fixing is the “supreme evil” of antitrust. To date, these “algorithmic cartels” are largely confined to the laboratory setting, but there is some evidence that this behavior is beginning to show up in the real world.
From a consumer’s standpoint, both of these scenarios are troubling. In both, automated price setting via algorithms and artificial intelligence can make us pay significantly higher prices. Both result in what the authors term “negative welfare effects.” As Gal puts it, “the threat to consumers is no longer science fiction.”
What Can Current Antitrust Law Do?
Woodcock and Gal have identified two speeding trucks barreling down the information superhighway. The question is whether anything can be done to stop them.
Woodcock dismisses the Robinson-Patman Act, which is explicitly aimed at certain forms of price discrimination and would seem like the most obvious place to start. He notes that the Act does not apply to the pricing of goods sold to consumers, and is rarely enforced even within its limited ambit, making it useless as a tool for banning personalized pricing. Gal notes that Section 1 of the Sherman Act requires some sort of agreement, and firms are also free to look at what their competitors are charging and build that information into their own price setting. So she is skeptical that current law can reach algorithmic cartels.
Several years ago, the Wall Street Journal ran a story about how Staples, Discover Financial Services, Rosetta Stone, and Home Depot were adjusting online prices and displaying different product offers based on characteristics that could be discovered about the user. Staples, for example, was looking at the computer’s IP address to check what brick-and-mortar retailers were nearby and adjusting prices accordingly. (If there was a competing store nearby, the user would be shown a lower price.) In another story, the Journal reported that Orbitz was showing higher-priced hotels to people using Macs than it was showing to people who used Windows computers. Presumably people who buy Macs like to spend money, or at least Orbitz thought so. But that may be just the tip of the iceberg.
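The mechanics behind the Staples example are simple enough to sketch. The following toy code (not Staples’ actual system; the IP prefixes, discount, and distance threshold are all hypothetical) shows how a displayed price might be adjusted based on a location inferred from the shopper’s IP address:

```python
# Illustrative sketch: show a lower price when a competing brick-and-mortar
# store is near the shopper, as inferred from the shopper's IP address.

BASE_PRICE = 19.99
DISCOUNT_IF_COMPETITOR_NEARBY = 0.10  # hypothetical 10 percent discount

# Hypothetical lookup: IP prefix -> distance (miles) to nearest rival store.
NEAREST_RIVAL_MILES = {
    "203.0.113": 2.5,    # rival close by -> show the lower price
    "198.51.100": 45.0,  # no nearby rival -> show the full price
}

def displayed_price(ip_address: str) -> float:
    prefix = ".".join(ip_address.split(".")[:3])
    distance = NEAREST_RIVAL_MILES.get(prefix, float("inf"))
    if distance <= 20.0:  # hypothetical "nearby" threshold
        return round(BASE_PRICE * (1 - DISCOUNT_IF_COMPETITOR_NEARBY), 2)
    return BASE_PRICE

print(displayed_price("203.0.113.7"))   # shopper near a rival store: 17.99
print(displayed_price("198.51.100.9"))  # shopper far from any rival: 19.99
```

Two shoppers looking at the same product at the same moment see different prices, based solely on where the seller believes they are.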
When a firm changes prices over time, Woodcock notes, it may be looking at new information about demand (dynamic pricing) or old information about consumer characteristics (price discrimination). Both dynamic pricing and price discrimination have existed for a long time, but are becoming more widespread thanks to the advances in computing power.
All price discrimination, according to Woodcock, has “a single ultimate end”—“to charge each individual consumer a price personalized to match that consumer’s maximum willingness to pay for the product, because charging the highest possible prices that consumers are willing to pay maximizes profits.” But achieving that goal is difficult because it requires “hyper-accurate information” about consumer willingness to pay. Charge a price too high, and no profit is earned at all because the consumer will not buy. Charge a price too low, and money is left on the table.
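Woodcock’s trade-off can be made concrete with a toy calculation (the willingness-to-pay figures here are hypothetical, and this is only a sketch of the logic, not any firm’s method). With coarse information, the profit-maximizing price splits the difference; with perfect information, it lands exactly on the consumer’s reservation price:

```python
# Illustrative sketch: price too high and the sale is lost; price too low
# and money is left on the table. Expected profit picks the balance.

def expected_profit(price: float, wtp_guesses: list[float]) -> float:
    """Average profit over equally likely willingness-to-pay (WTP) guesses;
    a sale occurs only when the price does not exceed the consumer's WTP."""
    sales = [price for wtp in wtp_guesses if price <= wtp]
    return sum(sales) / len(wtp_guesses)

# Coarse information: WTP could be 50, 100, or 150 with equal probability.
coarse = [50.0, 100.0, 150.0]
best_coarse = max(coarse, key=lambda p: expected_profit(p, coarse))

# "Hyper-accurate" information: the seller knows this consumer's WTP is 150.
exact = [150.0]
best_exact = max(coarse, key=lambda p: expected_profit(p, exact))

print(best_coarse, expected_profit(best_coarse, coarse))  # 100.0  66.67
print(best_exact, expected_profit(best_exact, exact))     # 150.0  150.0
```

With uncertainty, the seller settles for a middle price and the high-valuation consumer keeps some surplus; once the seller knows the reservation price exactly, the price rises to meet it and the surplus is gone.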
This problem is in the process of being solved by technology. As firms learn more about their customers, and artificial intelligence and machine learning make it easier for them to understand the data, firms will improve their accuracy in predicting how much each individual consumer is willing to pay. The airlines will no longer rely only on the time when a consumer purchases in trying to infer whether a consumer is willing to pay more. “Purchase histories, income data, the movement of the consumer’s mouse on the airline’s webpage, and much more, will give the airline a rich portrait of who the consumer is, and through that picture, how much the consumer is willing to pay. As this learning process spreads across the economy, consumers will enter a world in which the consumer will pay a price personalized with increasing accuracy to equal the maximum the consumer is willing to pay for every single purchase that the consumer makes.”
Facing this potential nightmare, the question is whether antitrust law is able to address it. Woodcock dismisses Robinson-Patman, the law most directly aimed at price discrimination. But he argues that perfect price discrimination may be a form of monopolization that is reachable under the Sherman Act. His argument is both somewhat novel and complex. My guess is that most generalist judges would not follow his lead in interpreting the law on refusals to deal, market definition, and monopoly power. Still, Woodcock takes on what could soon become a pernicious problem, and makes a creative stab at a solution.
Gal’s paper on algorithmic cartels focuses on a different nightmare: Some pricing algorithms are able to use data and machine learning to raise prices above a competitive level by coordinating with other pricing algorithms. As EU Commissioner Margrethe Vestager quipped: “not all algorithms will have been to law school. So maybe there [are] a few out there who may get the idea that they should collude with another algorithm.”
The gist of the problem is that pricing algorithms do not need to be told what to do; they are able to figure out this strategy on their own. Commonly used reinforcement learning algorithms are able to come up with the strategy and successfully execute it, as demonstrated in a laboratory study by Calvano et al. published in Science magazine. The coordination arose with no human intervention, leading to prices that were almost always substantially above the competitive level. Moreover, prices quickly returned to a supra-competitive level even when they were externally forced to be competitive (via a “shock” simulating “cheating” on the cartel price by one of the algorithms). And importantly, as Gal notes, the algorithms did not condition their strategy on rivals’ commitment to stick to the supra-competitive equilibrium, and did not communicate directly (although, of course, they monitored the prices of competitors).
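To see why no “meeting of the minds” is required, it helps to look at the bare mechanics of a reinforcement learning pricer. The sketch below is not Calvano et al.’s actual code; the price grid, demand rule, and learning parameters are all hypothetical. The point is structural: nothing in the code mentions coordination. Each algorithm simply observes last period’s prices, picks a price, receives its profit as a reward, and updates a table of value estimates. Any coordination that emerges does so purely from trial and error against the profit signal:

```python
import random

# Minimal Q-learning sketch of two competing pricing algorithms.
PRICES = [1, 2, 3]            # hypothetical price grid (1 = competitive)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def profit(my_price, rival_price):
    # Hypothetical demand: undercutting wins the market, a tie splits it.
    if my_price < rival_price:
        return float(my_price)
    if my_price == rival_price:
        return my_price * 0.5
    return 0.0

# One Q-table per firm: state = (own last price, rival's last price).
Q = [{(a, b): {p: 0.0 for p in PRICES} for a in PRICES for b in PRICES}
     for _ in range(2)]

random.seed(0)
state = (1, 1)  # (firm 0's price, firm 1's price)
for _ in range(50_000):
    acts = []
    for i in range(2):  # each firm sees the state from its own perspective
        s = state if i == 0 else (state[1], state[0])
        if random.random() < EPS:           # explore a random price...
            acts.append(random.choice(PRICES))
        else:                               # ...or exploit the best so far
            acts.append(max(Q[i][s], key=Q[i][s].get))
    new_state = (acts[0], acts[1])
    for i in range(2):  # standard Q-update against the realized profit
        s = state if i == 0 else (state[1], state[0])
        s2 = new_state if i == 0 else (new_state[1], new_state[0])
        r = profit(acts[i], acts[1 - i])
        Q[i][s][acts[i]] += ALPHA * (r + GAMMA * max(Q[i][s2].values())
                                     - Q[i][s][acts[i]])
    state = new_state
```

Whether such learners end up at competitive or supra-competitive prices depends on the environment and parameters; Calvano et al.’s finding is that in their richer setting, supra-competitive prices sustained by punishment strategies emerged reliably.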
How concerned should we be? Gal notes that pricing algorithms, including those that are able to reprice in response to competitors’ pricing, are already used by a significant percentage of companies in the United States and Europe. So the technology is out there. The risk seems to be greatest in consumer-facing businesses that involve many small transactions. Indeed, Gal points to a recent study that suggests algorithmic coordination may have occurred in the German gasoline market. The study estimated that prices increased substantially (9-28 percent) after both firms in a duopoly market switched from manual to algorithmic pricing.
Moreover, follow-up research by Calvano et al. suggests that pricing algorithms can cope with more complex economic environments with imperfect information and imperfect monitoring. As Gal puts it, “Once we introduce algorithms, not only does oligopolistic coordination become more durable, but it may also actually be facilitated in non-oligopolistic markets, in which more competitors operate.”
All of this leads, again, to the question of whether current antitrust law is able to address this problem. Gal suggests the answer is “no.” She points out that the law requires there be an agreement (in the language of the Sherman Act, a “contract, combination . . . or conspiracy”) for there to be liability. This element is missing. That has led Calvano et al. to conclude that “the increasing delegation of price-setting to algorithms has the potential for opening a back door through which firms could collude lawfully.”
So where does that leave us? One possibility is to consider algorithmic pricing as a “plus factor.” In antitrust doctrine, “plus factors” are regarded as evidence that apparently parallel behavior should be treated as satisfying the agreement requirement. Gal considered this as an option in an earlier paper and mentions it in the current paper. It has the advantage of putting the problem and solution in terms that a judge can understand and that have a grounding in legal precedent.
However, the problem with treating automated pricing in itself as a plus factor is that this would sweep in the innocent along with the guilty. Firms may have good reasons to switch to automated pricing, which may be both cheaper and better than the traditional way of pricing. Conscious of this concern, Gal suggests that the plus factor be limited to situations where, for instance, a company consciously uses a similar algorithm to a competitor even when better ones are available, takes steps that make it easier for a competitor to observe its algorithms or data, or locks an algorithm so that it is difficult to change and more predictable.
Gal suggests several other partial solutions as ways to “think outside the box.” I want to focus on one of these solutions: empowering consumers through technology. If sellers can use AI-driven algorithms to raise prices, a market-based response would be for consumers to band together and use algorithms of their own. Gal calls this “algorithmic consumers.” It is fighting fire with fire.
A simple example is a “shopping bot” that takes numerous sources of data into account and then executes a purchase and arranges for payment and delivery. Gal suggests that consumers take this a step further by banding together, especially when dealing with powerful sellers.
In countering algorithmic cartels, one of the advantages of algorithmic consumers, according to Gal, is that they can increase the size and decrease the frequency of purchases. The selling algorithms, like others, do not perform as well if they have less data. Also, if algorithmic consumers can include a large enough number of consumers, they could go offline for their purchases. Potentially that would trick pricing algorithms into believing that demand had dropped; the offline purchases would be invisible to the sellers’ algorithms.
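The pooling idea is straightforward to sketch. The toy buying agent below (the interface and numbers are hypothetical; Gal’s paper proposes the concept, not this design) accumulates many small individual orders and releases them as fewer, larger bulk purchases, starving the sellers’ algorithms of fine-grained demand data:

```python
from collections import defaultdict

def pool_orders(orders, batch_threshold):
    """Accumulate individual orders per product; release one bulk order
    whenever the pooled quantity reaches the threshold."""
    pending = defaultdict(int)
    bulk_orders = []
    for product, qty in orders:
        pending[product] += qty
        if pending[product] >= batch_threshold:
            bulk_orders.append((product, pending[product]))
            pending[product] = 0
    return bulk_orders, dict(pending)

# Ten consumers each wanting one unit become a single bulk purchase:
orders = [("widget", 1)] * 10
bulk, leftover = pool_orders(orders, batch_threshold=10)
print(bulk)      # [('widget', 10)]
print(leftover)  # {'widget': 0}
```

From the seller’s side, ten small, trackable transactions have collapsed into one large, anonymous one, with correspondingly less information about who wants what and when.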
Would there be circumstances where this wouldn’t work? How would it work in the purchase of gasoline, for instance? Also, this remedy depends on viable alternative sellers with enough capacity in the market. And that could turn out to be a problem.
One risk, mentioned by Gal, is that established online platforms could become the gateway for consumers. Platforms are developing their own digital assistants, and this could blunt a more effective consumer response. As Gal puts it, “[t]he stronger the algorithm provider’s market power, the smaller the benefit that will be passed on to the consumer.” The large online platforms, once the disruptors, are now the incumbents and their interests may not line up with the interests of consumers.
Still, in advocating for “algorithmic consumers,” Gal concludes: “It is high time that we not rely only on human regulation in order to deal with algorithmic coordination.”
Author’s note: the author would like to thank Spencer Weber Waller and the Loyola Institute for Consumer Antitrust Studies for their support.