As a student, Booth School Professor Anthony Lee Zhang was puzzled that Paul Milgrom chose to spend so much of his time on spectrum auctions. Milgrom, co-recipient of the 2020 Nobel Prize in Economics, helped design the world's first spectrum auction in 1994 and has been closely involved with the design process ever since. His work engages with the real world, gory details and all, and Stanford estimates the total amount of government revenue generated by spectrum auctions worldwide at over $100 billion.

When I joined Stanford GSB as a PhD student in 2014, I didn't have a great sense of what I wanted to do. My sense was that there were two groups of economists. The policy-oriented folks spent a lot of their time looking for regulation changes and running a lot of regressions. The theorists were much more tech-ed up, solving abstract and beautiful mathematical models, but the connection between their work and reality sometimes felt a bit tenuous. There seemed to be a bit of a tradeoff, and I wasn't sure of my own marginal rate of substitution between theoretical elegance and policy relevance.

I started talking to Paul Milgrom towards the end of my first year. Paul, even before the Nobel Prize, was a bit of a living legend among theorists. Paul's ten most cited works span five or six different subfields of theory.((Paul's extensive Wikipedia page describes his contributions to various subfields.)) It was difficult to find any topic to work on to which Paul hadn't already made a seminal contribution.

At the time, I was actually somewhat puzzled that Paul chose to spend so much of his time on spectrum auctions in particular. Spectrum auctions are a very specific problem. There is only one auction theory paper among Paul's top 10 publications in Google Scholar, and nothing that directly concerns spectrum auctions shows up on the first page. Why spend so much time on spectrum auctions, instead of something more general and foundational?

In my third year, I had the chance to work with Paul on a short policy proposal for the 3.5GHz spectrum band, from which I learned a couple of things. Dealing with actual policy was about as messy as I imagined. The FCC's report and order on the 3.5GHz band in 2015 runs for 187 pages. It made a great impression on me that Paul, legendary theorist though he is, knew the institutional details of this and previous auctions inside and out.

It struck me that Paul is not the kind of academic who pontificates from above, leaving the gory implementation details for others to figure out. Paul helped design the world's first spectrum auction in 1994 and has been closely involved with the design process ever since. Two and a half decades after the field began, he's still leading from the trenches.

Paul recently led the design of the FCC Broadcast Incentive Auction, which concluded in 2017. What set this apart from previous auctions is that it was a *double auction*, aiming to buy spectrum from around 1,000 TV broadcasters, repackage it, and sell it for broadband use.((More precisely, the FCC website states that there were 1,030 qualified stations in the reverse auction.)) This is described in detail in Paul's recent book, *Discovering Prices*, and I'll describe it briefly here.

The basic idea is the following: Suppose stations A and B are broadcasting on channel 7, and C is broadcasting on channel 8. If the FCC buys spectrum from station B, we can free up all of channel 8 by requiring station C to move to channel 7. This "repacking" process would then allow the FCC to construct connected pieces of spectrum to then sell for broadband use. The goal of the FCC in the reverse auction, then, is basically to find the lowest total cost at which it can buy licenses from enough TV broadcasters that the remaining stations can be repacked to free up the required amount of spectrum.

This process is difficult because, if two TV broadcasters are too close to each other, they can't use the same spectrum channel. In the example, stations A and C can only be repackaged into the same channel if they're sufficiently far away that their broadcasts would not interfere with each other.
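These interference constraints make repacking a graph-coloring problem: stations are nodes, interfering pairs are edges, and channels are colors. A minimal brute-force sketch in Python is below; the station names and interference data are hypothetical toys mirroring the example above, and the real incentive auction used specialized SAT-based feasibility checkers rather than anything this naive.

```python
from itertools import product

def can_repack(stations, channels, interferes):
    """Brute-force repacking check: is there some assignment of channels to
    stations in which no two interfering stations share a channel?
    (A toy model, not the FCC's actual feasibility checker.)"""
    for assignment in product(channels, repeat=len(stations)):
        clash = any(
            assignment[i] == assignment[j] and interferes(stations[i], stations[j])
            for i in range(len(stations))
            for j in range(i + 1, len(stations))
        )
        if not clash:
            return True
    return False

# Hypothetical interference data: A and B are close enough to interfere;
# A and C are far apart and are not.
pairs = {frozenset({"A", "B"})}
interferes = lambda x, y: frozenset({x, y}) in pairs

print(can_repack(["A", "B", "C"], [7], interferes))  # False: A and B clash
print(can_repack(["A", "C"], [7], interferes))       # True: buying out B frees channel 8
```

With B bought out, A and C fit together on channel 7, so channel 8 can be sold for broadband; with all three stations on air, no single channel works.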

Finding the optimal solution to the FCC's problem is kind of like putting together a thousand-piece jigsaw puzzle, except that the pieces are all differently sized, the edges don't fit together neatly, and there's no picture on top to tell you whether you're anywhere close to the right answer. You might decide that the best solution probably involves putting C in the same channel as D, and then piece together the rest of the puzzle under this assumption. But you have no way to check whether your initial guess was correct, and if it isn't, it ruins the whole block of other pieces you built around it. It's like what happens when you get halfway through a Sudoku puzzle and realize you made a mistake near the start.

How hard can this be? If there are only 1,000 stations, canâ€™t we just have a computer search through all the possible solutions? It turns out that these problems are incredibly hard, even for computers. If we have 1,000 stations, the number of different sets of stations we have to consider is 2^{1,000}, which is around 10^{300}.
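The arithmetic behind that claim is easy to check, since each station is either bought out or not, giving 2^1,000 possible sets:

```python
import math

# Each of the 1,000 stations is either bought out or not: 2**1000 subsets.
n_subsets = 2 ** 1000

print(len(str(n_subsets)))           # 302: the number has 302 digits
print(round(1000 * math.log10(2)))   # 301: i.e. 2**1000 is roughly 10**301
```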

To give a sense of how big this number is, there are around 10^{80} atoms in the universe. Imagine every atom in the universe contains another universe within it. And then every atom in each of those universes contains another universe, and so on. You would have to go down to the *fourth* subuniverse, to get enough atoms to simply *count* the number of possible combinations of stations the FCC needs to consider.

There is a kind of Hayekian problem here. Optimal resource allocation decisions are just big optimization problems. But they are *really* big problems, and solving them through brute computational force is almost unimaginably complex. Even the relatively small problem of buying 100MHz of spectrum from 1,000 US TV broadcasters would require vastly more than all the computing power in the known universe.

The magic, of course, is that there is a shortcut: the price system. Under some circumstances, we can simply run an *auction*. The FCC presents a price to each station, and each station announces whether it is willing to sell at that price. The auctioneer starts each price high enough that there are more than enough stations willing to sell, and gradually decreases the prices. Stations drop out one by one, until we find prices at which the market supply of spectrum equals the amount we want to buy.
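A stylized version of this descending-clock procedure is easy to sketch. Everything below is a hypothetical toy (made-up station values, a single uniform clock price, a fixed decrement), not the FCC's actual auction rules:

```python
def reverse_clock_auction(values, demand, start_price, step):
    """Toy descending-clock reverse auction: each station is willing to sell
    while the clock price is at least its private value. Lower the clock
    until lowering it again would leave fewer sellers than we need."""
    price = start_price
    while price - step >= 0:
        willing_next = [s for s, v in values.items() if v <= price - step]
        if len(willing_next) < demand:
            break  # one more decrement would leave too few willing sellers
        price -= step
    # The lowest-value stations are the ones still willing at the final price.
    sellers = [s for s, v in sorted(values.items(), key=lambda kv: kv[1])
               if v <= price]
    return price, sellers[:demand]

# Hypothetical private values for relinquishing a broadcast license.
values = {"A": 30, "B": 10, "C": 55, "D": 20}
price, sellers = reverse_clock_auction(values, demand=2, start_price=100, step=5)
print(price, sellers)  # 20 ['B', 'D']
```

The clock falls until exactly two sellers remain (B and D, the two cheapest stations), and the final clock price of 20 is what the FCC would pay. Crucially, the procedure only ever asks each station a yes/no question at the current price; it never has to search the exponential space of station subsets directly.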

This does not always work. Without going into details, it requires that the FCC's preferences over licenses are *substitutes*. As long as the substitutes condition holds, though, the auction is guaranteed to quickly navigate through the vast space of possible station combinations, to find the single best one. From a mathematical perspective, the substitutes condition turns a basically intractable problem into one that is almost trivial. The definition of substitutes in Paul's book takes three lines, and the proof that the auction finds the optimal allocation takes around six.

The real world is complicated, though, and the substitutes condition isn't likely to hold exactly in practice: what happens when it doesn't? Paul shows the result isn't too dependent on the assumption holding exactly: when preferences are *almost* substitutes, in a sense Paul defines rigorously, the auction algorithm is guaranteed to find an outcome which is close to the optimal one.((This is Proposition 4.6 of *Discovering Prices*.)) In a paper co-authored with Kevin Leyton-Brown, Neil Newman, and Ilya Segal, Paul finds that, in small-scale simulations, the auction does quite well.

A common theme in mathematics is that subtlety and complexity can arise from the simplest of assumptions. Fermat's Last Theorem fits in the margin of a book, and took 358 years and hundreds of pages to prove. Paul's work, in a sense, does the opposite. Volumes upon volumes of policy briefs and institutional constraints about complex allocation problems are distilled and refined into simple theorems in which the classic ideas of supply, demand, and equilibrium shine through, and into market-like mechanisms for solving these problems.

I learned two things from Paul. First, in order to make a real difference in the world, academics must seriously engage with it, gory details and all. This is not always academically rewarding. Paul's work on spectrum auctions is not his most cited, and not all of it is well known even among economists.

But the economic impact of Paul's work on spectrum is immense. Stanford estimates the total amount of government revenue generated by spectrum auctions worldwide at over $100 billion. A 2012 PCAST report on wireless spectrum estimates the welfare effects of improving spectrum use to be over $1 trillion. I think it is entirely fitting that the Nobel committee chose to highlight auctions, and spectrum auctions in particular, among Paul's myriad contributions to economic theory.

Second, I learned that, as a 20-something going into graduate school, I was mistaken about the nature of the constraint set facing an economic theorist. Working on policy-relevant problems doesn't always require giving up on the search for theoretical beauty. Paul's work showed me that, hidden within the mundane details of policy problems, there is often a surprising amount of elegance, if you know where to look.