The Stigler Center for the Study of the Economy and the State hosted its annual antitrust and competition conference in late April. The following is a transcript of Susan Athey’s keynote in conversation with Tommaso Valletti.


Filippo Lancieri

… the keynote conversation with DOJ Antitrust Division Chief Economist and Stanford University professor, Susan Athey, moderated by Professor Tommaso Valletti of Imperial College London. We are truly delighted to host you both. Before we begin, please note that we are on the record and live streaming, of course, and will post this on the Stigler Center YouTube channel later. To those in the room, if you have questions, we’ll address them towards the end. And as usual, views expressed by the guests are their own and not those of the University of Chicago or the Stigler Center. It’s always a good disclosure. And we hope that you’ll join us too for more upcoming events. So please check the Stigler Center website for more details, as well as our publication ProMarket.org and the Capitalisn’t podcast.

Susan is the Chief Economist of the Antitrust Division at the US Department of Justice. She is on leave from Stanford University Graduate School of Business, where she is the Economics of Technology Professor. She also serves as the 2023 president of the American Economic Association. She’s a recipient of the John Bates Clark Medal. She has served as a consulting chief economist and on the boards of multiple technology firms.

Tommaso Valletti is a professor of economics and the head of the Department of Economics and Public Policy at Imperial College Business School. Professor Valletti is a non-executive director on the board of the Financial Conduct Authority, and a director of the Centre for Economic Policy Research, among other roles. And previously, he held the European counterpart of Susan’s job.

Now, without further ado, I pass it on to Tommaso.

Tommaso Valletti

Thank you very much. Thank you, thank you, Filippo. Hello, everybody, good afternoon. Enjoy your lunch, and we will entertain you as we speak even more. There will be food for thought, not just for the stomach.

So Filippo has already introduced Susan, I was about to say that she has one of the coolest titles. She’s the Economics of Technology Professor at Stanford University, which I think is a very cool title to have. And she’s also the recipient of a lot of accolades. I will just add that she’s also the author of incredible scholarship over the past many years. I will mention, in no particular order, monitoring, comparative statics, empirical IO, auctions and market design, more recently, health, and obviously economics and econometrics and machine learning. Now she’s Chief Economist with the DOJ Antitrust Division. And that’s, I suppose, why you’re here today in front of this particular audience. So we’re looking forward to hearing from you. In terms of how we’re going to organize it, Susan will have initially her keynote for about 15 to 20 minutes, then we will have, say, 15 to 20 minutes max of chat between the two of us. And then there will be 15 to 20 minutes open for the general discussion with the audience. So there will be time, enough time for everybody to ask questions to Susan. Without further ado, Susan, the floor is yours.

Susan Athey

Great. Well, thank you so much for having me here. And also, again, today I’m giving my own opinions.

I thought I would start out by trying to redeem myself from the fact that, I guess collectively, we ruined a bit of this conference by not having the merger guidelines out in time for the panel tomorrow to discuss them. And of course, I can’t say anything specific about the guidelines or our guideline development process, or when they’ll be released. But I can share some of the Division’s thinking about the guidelines, why it’s so important that we undertake the process, and some of the themes and priorities that I’ve been thinking about in attacking them.

One place to start, and a really important theme, is just how important good economics is from an antitrust perspective, and in particular, in modern antitrust. And, you know, modern economic thinking has just been central to our success in trying to understand what’s going on and protect competition in the most effective ways. So on the last panel, Leemore talked about much more complicated patterns of acquisitions that have more complicated theories of harm. And it’s not that modern economics can’t handle that. I mean, it’s not that you cannot describe and write these things down. Sometimes the patterns are so complicated that you wouldn’t have just, out of the blue and in the abstract, written down an economic theory model of exactly that thing. Because in some sense, 15 years ago, somebody might have told you you were a little bit exotic to imagine that businesses would be so creative. But it turns out that, when there’s money at stake, businesses figure out the economics and respond to their economic incentives. And when they figure them out, so can we. And we have been.

Really, the last several decades of economic thinking have been exactly about the kind of imperfect competition, informational issues, bargaining, and these more complex arrangements, platforms. We’ve been really expanding our thinking. Now in the end, that economics, it’s not like fundamentally new economics. I mean, we did have some breakthroughs in terms of information economics and so on. But ultimately, it’s about analyzing and studying incentives and competition. It’s just that there’s often now many steps in that logic. And we’ve needed to do the work to have the economic theory meet the fact patterns that we encounter. But ultimately, they can be understood. And the theme really then, in a lot of the work that we do at the Antitrust Division, is by starting and saying, “How is competition presenting itself here?” Not saying we have these three boxes, you know, which box is it in? But rather, what is the issue? What is the economics? How do the economics present here? And then we can come back to how it fits into our frameworks. But ultimately, starting with the economics is a really useful way to go. And, of course, that goes throughout the process. Even in court, you’re needing to explain why a transaction is occurring, what was motivating it, and then it all kind of makes sense. And that’s a very natural way to do things.

So when we think about the guidelines, we’re starting from, “Does a merger lessen competition?” And then what does that mean? So, maybe things were never really that simple, but from the perspective of now, you can imagine a nostalgic past where mostly, you know, two suppliers of hammers, two manufacturers of hammers might have merged or, you know, somebody making the metal to put in the hammer might have merged. And that was kind of most of what went on in the world, and that was most of what we needed to worry about, and life was easy back in the good old days. Probably it wasn’t so easy, but it might have seemed a little simpler.

And a lot of what we’re confronting now, these big complicated cases that are affecting huge sectors of our economy, whether it’s technology or health, if you try to just use those simplistic boxes, you’re just going to miss what’s going on. You’re not going to understand why firms are doing what they’re doing, and you’re not going to be able to understand the impact. So then, if we’re worried that a challenge can be trying to put square pegs in round holes, one approach to that is to take a few more steps to lay out the logic. Of course, in the middle of a court case or something else, there’s lots of opportunities for people to get confused, just by accident. And there’s lots of opportunities for confusion to arise through the process. But the clearer we are about the basic foundations of the logic, the more time we can spend thinking about the facts and specifics of a particular case. So, just avoiding getting confused in logic that’s really clear if you think about it, but that might take a couple of steps to see how everything applies.

And so then, what are some examples of that? What are motivations for acquisitions that are, again, easy to understand with modern economics, but take a little bit of thinking in terms of how to apply concepts? Mergers can harm competition by creating barriers to entry, by getting control of a product that can be used to influence outcomes and the competitive process in another market. It could be trying to get exclusive use of some asset like data. And we might also see a merger that’s trying to block a nascent competitor. We’ve already had a lot of discussion about that earlier today. And so, why is that challenging? Well, the thing that looks small and different today might grow into something much bigger.

And in a bunch of markets, if you go and teach your MBA students, you know, “How could you start a business? What is a possible successful path?” There might be ten successful paths that are just not going to work in a scale-driven market or in a platform market. Like if an MBA walked in the door with that idea, you’d just say that’s a dumb idea. And a venture capitalist isn’t going to fund it. And it’s just not going to work. You have no plan to overcome the scale economies and network effects of the incumbents. But there are other pathways that can work. There are moments in time, where you have the opportunity to differentiate in some way, take advantage of a technology shift, be the first firm to get there with something that meets individuals’ needs. You can start in a niche. There are a variety of pathways that do work. But there’s a fairly limited playbook in some cases. And if an incumbent figures that out, they may be able to just systematically block any threats by buying up firms. Once they figure out what the categories of threats are, they can systematically buy up these firms when they’re small and, in principle, maintain their position indefinitely.

And so, once you think strategically, you think how do firms enter, how would a firm defend against entry, if they just did X, Y, and Z, and got away with it, you know, they can defend successfully unless some accident happens. So once you understand those economics, it becomes much more clear what kinds of mergers might be problematic, or what kinds of sequences of mergers might be problematic. Mergers also might involve firms that are competing head to head in innovation, or a firm that hasn’t brought a product to market yet, but has very strong incentives and might be one of the leading firms that has the ability to create that product. And again, trying to make sure nothing goes wrong, it’s a lot easier just to buy whoever else might innovate in that space, and let’s just make sure that everything goes my way. So these types of mergers can be incredibly important.

I think one thing that was maybe slightly missing in some of the earlier discussion about productivity is just the dynamics of it, and how important those dynamics are. You know, it’s great to get market share to the most productive firm today. But why the heck would you increase your productivity tomorrow, if there’s no threat of anybody displacing you? So these types of incentives for dynamic innovation can be very important and even more important than static considerations. So all of these different complex situations might arise. And it all really comes back to this question of how does competition present itself here.

So, what does that mean in practice? Well, one of the areas that I’ve been interested in for a long time is platform markets. And so let me talk a little bit about that. In some sense, probably by now most people are familiar with the logic. But on the other hand, when you go to explain the logic, or even teach the logic, there still are multiple steps, multiple forces and reinforcing effects. So just making sure that we’re clear about what that logic is can be really helpful.

So like, in platforms, one issue that can come up is that outcomes can be really affected by the extent of switching or multi-homing. And that logic is clear from just the basics of platforms. There are network effects. And so, if there are some important platform participants who can be primarily reached on one platform, that’s really going to affect competition. If you want to enter as a platform, and there’s some platform participants you can’t access as an entering platform, that’s a barrier to entry. It’s going to be hard for you to overcome these direct and indirect network effects if you don’t have access to important participants, if those are somehow locked up with one platform. That then can affect the incentives to acquire platform participants or create long term contracts with them or whatever else, because that can be a barrier to entry. And even if you can get them to stop using competing platforms, that can often also soften competition between the two platforms, because one of the platforms doesn’t have access to key participants. And in principle, if there’s some segmentation between those participant groups, then the platform can exert market power over the other side of the platform, which is an incentive to not make those participants available.

So this isn’t like rocket science. We all know this. We’ve seen this by now. And across lots of industries, there are multiple replications of these fact patterns. But it’s still a few steps of logic that can be, you know, sort of complicated to play out. So you know, platforms can reshape competition in the sides of the market they serve. And that in turn feeds back into competition among platforms. And the competition among platforms feeds back into the ability to change the market structure in the sides of the market that depend on the platform. And this can be particularly important if there are scale economies or network effects or barriers to entry in the products themselves that interoperate with the platform. So if there’s some category of software applications, for example, and there are scale economies and network effects in those software applications, then the control of an operating system can really matter. They can pick winners, and short term behavior can have long term effects.

So everything I’ve said, every word, every sentence is like classic antitrust. But it’s all piecing together in a more complicated way with feedback effects. And more complicated incentives. But at the same time, we’re seeing this out there. We’re seeing these fact patterns across lots of different sectors, lots of different types of platforms. And we want to just, you know, start with the basics, make sure we understand that logic. And then be able to dig into the facts of a particular case, understanding that there are very common fact patterns and incentives that are not just a one-off but that are systematic across these.

Another place that we thought about making sure we spell out clearly and make sure that the economic logic is clear is in monopsony and monopsony power. And we’ve had already discussions, there’ll be more discussions tomorrow about this. We have leadership by our Principal DAG, Doha Mekki, and our Principal Economist, Ioana Marinescu. I won’t go into all the details there. But maybe just one thing to remind any sort of skeptical economists in the room is, of course, economists generally want markets to work. And if they’re not working, if there’s lack of competition, that has effects not just in the short run but in the long run. So if you suppress the demand for nurses, you know, that affects how many nurses go to nursing school. And then, when we all get old, there’s nobody to take care of us. And we’re gonna whine about it. Where are all the nurses? Why aren’t there any nurses? Well, if you’ve suppressed the demand for nurses today, we’re interfering with the functioning of markets, with the core functioning of markets. And we want markets to work well, we want competition to work well, throughout our economy. And the logic doesn’t change whether they’re buyers or sellers here. The consequences are equally impactful.

Finally, I just want to come back to market realities. Market definition is a challenge. And again, that’s already come up in a couple of the conversations today about, you know, how do you think about things? Leemore mentioned this a little bit. There can be a little bit of a tension between saying, on the one hand for price competition, like hospital competition might be very local. But she raised the fact that there might be another type of competitive issue that is not geographically local, and there can be a lot of risk of confusing people if you start raising these ideas. I mean, the economics are completely clear. You’ve got one type of problem, and one type of theory of harm. And of course, if you engage in this behavior, you can raise prices. And that’s consistent with economics. It’s just that it’s a little bit, it’s not fitting into the nice little round hole of, like, one merger, one market, you know? We’re looking at price changes in the short run by competing for the customer just driving to this hospital or that hospital. But it doesn’t make it wrong from an economics perspective. And so we need to just make sure that we don’t get in our own way of actually mapping the economics into enforcement.

So, you know, the 2010 guidelines, we’ve got Carl here who helped draft them, they were really quite clear. It’s sort of like, how are people so confused? The 2010 guidelines were clear. They said that antitrust markets don’t need to have precise metes and bounds, and that has shown up in court cases. The idea that there should just be one true market, that’s not an implication of how the whole process works. It’s not an implication of the hypothetical monopolist test or anything. But somehow, we can still find ourselves getting stuck in this idea that, “Well, it’s simple, it’s easy just to have one true antitrust market.” It just sounds like a very appealing idea. When in fact, what economics can help you do is figure out, “Is the market too narrow or too broad?” It can help you make sure you shine the light on where the competition problem is. And of course, there could be different mergers even in the same industry that could lead to different markets. And the hypothetical monopolist test is absolutely consistent with that. There can be many markets that satisfy the hypothetical monopolist test. So, given that people do seem to keep getting confused, there’s maybe some benefit to making sure that the logic is clear.

For example, suppose you’re going to define a market using some threshold. There are some continuous characteristics of the products or their location, and you’re going to pick a threshold, a cutoff. Well, we already know the courts have acknowledged that markets don’t have to have precise metes and bounds. But, wait, why is it, again, that it makes sense that the cutoff could be fuzzy? Well, probably the things right at the cutoff are not creating as much of a competitive constraint on the merging parties as things that are in some way closer. So there’s a clear logic to why that boundary may not make that much of a difference. So the point is making clear not just, as sophisticated people understand, that there can be more than one relevant antitrust market, but making sure that people can understand why, so that we’re not afraid to use the tools and logic that we have, but that we do use the right tool for the setting.

Those are just some of the things that I’ve been learning about and thinking about in approaching this problem. It’s a complicated scenario of trying to make sure that we get the economics right in these complicated settings. But I believe that that’s something that we can do, and we shouldn’t be afraid of doing. And I’m hopeful that we can do an even better job going forward in these more complex scenarios. Thank you.

Tommaso Valletti

Thank you, Susan, thank you so much. Great, great keynote, a lot of questions I’m sure everybody already has.

I liked the message where you said that ultimately, economics and business strategy need to work hand in hand with the ultimate goal of explaining facts. That’s very simple, difficult to disagree with, but also very powerful as a direction. I may ask you later on, how do we do that? And that’s a different question.

But I will start with a question that may sound personal. It is actually personal. But it’s always so interesting. So you are an accomplished scholar. I mean, that’s an understatement. You’re a stellar scholar, stellar by all standards and definitions. So you could choose to do whatever you want, basically, at this stage of your life. You were working with industry, you had a long relationship with Microsoft. You could have continued that. And instead you decided, at some stage, to join the DOJ Antitrust Division. I’m sure, that’s my guess, but I’m sure you had a substantial pay cut, to the horror of the economists in the room! So I’m really asking you about the motivation for doing this, and doing this now.


Susan Athey

Thanks for that question. And maybe I’ll back up a few years to what I was starting to do before I came to the Antitrust Division, and maybe it’ll seem like a very natural continuation.

So, in some sense I always felt like I had been so lucky. Because I’d gotten all these experiences that taught me things. I got to see a search engine when nobody knew how it worked. I learned about what machine learning and AI were when most of my economics colleagues were telling me I was wasting my time and, like, data mining wasn’t interesting, and this wasn’t important. There was really quite a bit of negative feedback. And actually a lot of the platform antitrust ideas were also initially not getting a strong reception, at least with really broad audiences. But over time, I just was very fortunate to learn on the ground, to be one of the first economists to really understand how a lot of this technology worked, which then got me interested in many, many problems. It got me interested in news, the news industry. It got me interested in manipulation of information. All sorts of topics that are very natural consequences once you understand what’s happening behind the scenes.

But I was still trying to figure out how to actually translate that into both research and impact. And so I tried my hand at a couple of things. I tried to help build up the infrastructure at Stanford Business School so that my colleagues, I knew it was coming, they were going to be working with big firm datasets. They needed infrastructure, they needed support, different kinds of research assistance, just to get their work done. Firms had that problem. Our researchers were going to have that problem, too. I could forecast that people were going to work in bigger teams and interdisciplinary teams.

Then I worked at Stanford to help found our Institute for Human-Centered Artificial Intelligence, which was also an interdisciplinary group. And that was also trying to get ahead of all of these changes, and think about hopefully having some thought leadership ahead of time. And that was partly responsive to my experience in the past when a lot of this technology and tech firms came in and, frankly, we were not prepared as academics. We didn’t have the right framework. We were not ready to embrace the full complexity of what was happening to us. We had the tools, but we hadn’t put the tools together to really find the conclusions. And then I also started a lab more recently that was trying to use what I call the tech toolkit to work with social impact organizations to build things.

And so if you think about, like, the theory of change of all of this, like with the tech toolkit, working with an edtech company, or helping a training company that’s helping low income people. This is like, really in the weeds, on the ground, building stuff and trying to make it work. And that’s wonderful. And there’s a theory of change that other people learn from it. And you can kind of help this whole technology diffuse into general purpose technology ideas, but it also feels like it’s really one little thing at a time.

And on the other side, of course, economics can do regulation, it can shape incentives, it can create incentives for innovation, which has been another part of my research. And so how does all of that fit in?

When this DOJ job came up, it felt kind of like the natural bringing together of all of these ideas and these different theories of change. Because on the one hand, each merger comes in and like, what happens is going to change next year how much you pay for healthcare, it’s going to change next year whether you have flights to your cities and how much the tickets are going to cost. There’s like this really direct, immediate impact going from really, really, really deep understanding of the facts of one scenario. But on the other hand, the fact that we have gotten more proactive is changing incentives economy-wide. There have been a lot of reports of people saying that some mergers are not getting brought forward, especially mergers that raise competition concerns: you can choose to acquire company A or company B, and company A is going to have a long risky process and company B is less of a competitive concern. And if you’re steering people to make deals that raise fewer competitive concerns, that’s really an outsized impact, well beyond your cases.

Another thing that we have going on right now is actually a once-in-a-generation opportunity: Biden has this executive order for a whole-of-government approach to competition. And this administration has been really serious about that. Some of it’s visible, some of it’s behind the scenes. You know, all throughout government there are regulations and policies that are focused on one purpose, like safety of something, and the competition implications are not always as beneficial. Figuring out how to fix some of those problems was just really inspirational to me, and that’s kind of a long-lasting shaping of the economy. So that’s been really exciting to be a part of.

So all of that together. It’s like, we’re bottom up, we’re in the facts, we’re in the weeds, our hands are dirty. This isn’t just like saying, “Please change this policy,” and maybe in ten years Congress will pass a law, but maybe not. We’re doing stuff right now, but we are also shaping things more broadly in a way that I think could be beneficial.

And then a final thing is that actually, just like I’ve done in a couple of previous jobs, in my lab and the GSB and the Institute for Human-Centered Artificial Intelligence, we’re actually building human capability. And that also can have a lasting impact.

Tommaso Valletti

So, on this, on building capabilities across fields which involve data, technology, and economics: can you elaborate a bit more? What does it involve at the DOJ? What are you doing in that respect?

Susan Athey

Yeah, so I mean, I think one analogy is, think about it. We’re studying businesses. Later in my career, I went to a business school. And one advantage of being in a business school is you’re surrounded by people who are still using the same tools and frameworks, but are often more specialized. They understand corporate finance. They understand venture capital. They understand, in much more detail, marketing and how it works and what the impediments are. Customer acquisition, how that’s a challenge for businesses. So this type of whole-of-business approach is actually, understanding business strategy, it’s really important for predicting and understanding what firms are doing and why. So of course, a core discipline in a business school is economics. But there’s also behavioral science. There’s operations, statistics, now a lot of machine learning, sociology. All of these disciplines are present, and they inform the business disciplines. And so as we’re seeing today, both academics and firms are also putting together multidisciplinary teams. Especially, like, the technology firms. They’re hiring economists, but they’re hiring psychologists, they’re running behavioral experiments, they’re hiring ethicists to try to understand the implications of what they’re doing. So, that type of interdisciplinary approach. It makes sense in academics. It makes sense in business. And it makes sense for those studying businesses.

We’ve been trying to expand our capability through a variety of tactics. One is just visiting professors, visiting experts who have expertise in, maybe, operations research or AI. And another is, we have a Chief Technologist who is a postdoc in computer science. Following on that huge success, and on other agencies around the world bringing in chief technologists, we’re going to be building out a team on technology policy of people who have backgrounds in technology. And it’s just amazingly useful, like, if you have a merger that involves AI or software or something, to have somebody who really knows deeply how that all works, and has actually done that work themselves at some point. You just get to the right answers. You get to ask the right questions so much more quickly.

So, there’s the different business disciplines and experts. There’s the technology policy. A third pillar is just the data support. My colleagues, you know, they used to run regressions in Stata on a PC with a nice rectangular dataset. That’s not the world of research anymore. And we’ve had to get better, you know? Cloud computing. Scalable resources. You might hire a master’s student in computer science to help you scale up your algorithms, even as an economist. Well, we’ve got the same issue in enforcement now. We’re getting very big datasets, we’re studying firms that are using that data in complex ways. And we’re ingesting large datasets. And so that requires the computers. Like, if your computer can’t handle more than, you know, 100 gigabytes, well, you’re outsourcing that work, and you’re gonna have more distance from the work. So you need the technology. You need the people, the data engineering, to get the data into the form where it’s ready to use. And then you also have additional issues and new opportunities. Machine learning can be used in different ways with larger datasets. And the old traditional things that we did can also be done more richly with larger datasets. And of course, a great side effect of building that capability internally is that, as we encounter businesses using data in the same way, we have in-house knowledge and experience about what’s possible and what’s not, and how it works, and what the barriers are in terms of people, capabilities, and so on.

So this is something that all of government is having to cope with. It’s really difficult in government to hire. It’s difficult to procure technology. There are lots of things that are challenging. But if we build a good foundation, I think it can really impact the work we do in the future. So we’re in process with that. And it’s been really, really exciting for me. We’ve gotten a lot of best practices from agencies around the world, as well as other agencies in the federal government.

Tommaso Valletti

That’s great, thanks. Another question. Probably the last questions will be about technology, because you’re really best placed to understand technology. You are the Economics of Technology Professor, so we need to ask you about it.

Susan Athey

I didn’t choose that title, by the way.


Tommaso Valletti

So what about artificial intelligence? AI is a very big topic. But in particular, since you really understand it well, which issues has it raised from an enforcement perspective? So, from the perspective of an enforcer, are they classic issues? Are they new issues? How does AI change things? How does it affect enforcement?

Susan Athey

So this is why it’s so fun to talk about this now. Because if you asked the same question one year ago, people would understand you to mean something totally different than what you understand today. So it’s always exciting when the world gets flipped upside down, as it has been by the large language models.

I think at least we’re in a good position to avoid some previous mistakes we made. I feel like it took quite a while for the antitrust community broadly to wrap their heads around machine learning and big data. For a long time, people just threw around the word “data.” And there wasn’t a lot of distinction between, you know, data about users, and the ability to run A/B tests, versus data that you could buy, or data that you could sell. And it really matters what the type of data is, and how it’s going to be used. How it affects market structure. Whether you have unique access to data, or unique access to users or a larger set of users, versus this being something that you can potentially buy. So, slowly, over like twenty years of regulating technology companies, and just everybody getting more familiar with this in their lives and understanding what this stuff means, I think we’ve gotten to a much better position to think about this. I mean, I remember having to explain, like, what is an A/B test? And like, “I don’t know what you’re talking about.” So I would say, “Well, you know, these companies are running 5000 experiments a year.” And they’re like, “What?! I thought only medical doctors ran RCTs!” So, we’ve really just come a long, long way in understanding the vocabulary and the conceptual framework.

I think, ultimately, it’s sort of like the answer with platforms. Like, the concepts were there. But there are a few steps of logic to come up with useful taxonomies that really focus you on fact patterns and common competition concerns. And luckily, I think we’re a lot further along that journey when this new thing comes out.

So, what are a few things I like to think about? Those of you who haven’t, like, dug into how this works, although probably everybody has, because you just read the newspaper and you can’t avoid thinking about the large language models. But something that you might not have as much exposure to just from testing it out is how important something called “fine-tuning” has been.

The last couple of years of AI conferences, you’ll see hundreds of papers taking the latest large language model, which has usually been trained on publicly available data. And it’s like, in order to get so good, it needs to use just all the data it can find. And that’s how it got so good. And remember, the way it’s trained is just by predicting sequences of words. You see these first three words, and then you predict what word comes next. You see these first five words, and you predict the four that come next. So there’s nothing special. That’s all these models do. That’s all it is, just predicting, predicting, predicting. But it just turns out, if you put in a couple hundred million parameters and you have a whole lot of data, predicting what word comes next takes you quite a long way. Once you have these big general purpose models, you can fine-tune them using another dataset that might be more specialized, to make sure that the model does well on your particular dataset. Before all this broke, I had a little baby version of this in my research; it might be easier to understand for an economist. I was looking at sequences of jobs. And I got this big dataset of millions and millions of resumes. And I applied the same methods, the same general purpose methods, to predict which job comes next after a sequence of jobs. So that’s on a big, nonrepresentative dataset. But then I took it over to a government dataset. Representative surveys with really reliable demographics. And you train it there to kind of fine-tune it. Then, after you fine-tune it, you predict well on that dataset. And the fine-tuning process is actually, like, super simple. Like, when you’re trying to estimate parameters of a model, you’re just finding the ones that fit the best. And then at some point, you stop. And with fine-tuning, all you do is you just keep doing more steps on a smaller dataset. And so you can do it on a smaller computer with a smaller dataset.
And just magically, somehow, this really works well. And so there are hundreds of papers each year doing that kind of fine-tuning.
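The fine-tuning recipe described above, pre-train with many gradient steps on a big generic dataset and then just keep taking steps on a small specialized one, can be sketched on a toy linear model. This is purely illustrative: the data, dimensions, and learning rate are made up, and a linear regression stands in for a language model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_steps(w, X, y, steps, lr=0.01):
    """Plain SGD on squared error. 'Fine-tuning' is nothing more than
    calling this again, starting from the pre-trained weights, on a
    smaller specialized dataset."""
    for _ in range(steps):
        j = rng.integers(len(X))
        w = w - lr * 2.0 * (X[j] @ w - y[j]) * X[j]
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Big, generic dataset: targets follow one linear rule (the "general corpus").
w_big = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
X_big = rng.normal(size=(5000, 5))
y_big = X_big @ w_big + rng.normal(scale=0.1, size=5000)

# Small, specialized dataset: a related but shifted rule (the "survey data").
w_small = np.array([1.5, -2.0, 0.0, 0.5, 3.0])
X_small = rng.normal(size=(100, 5))
y_small = X_small @ w_small + rng.normal(scale=0.1, size=100)

w = sgd_steps(np.zeros(5), X_big, y_big, steps=20000)   # "pre-training"
before = mse(w, X_small, y_small)
w = sgd_steps(w, X_small, y_small, steps=2000)          # "fine-tuning"
after = mse(w, X_small, y_small)
# A few extra SGD steps on the small dataset sharply improve its fit.
```

The two-stage pattern mirrors the resume example in the talk: the same estimator and the same update rule, first run on a large nonrepresentative corpus, then simply continued on the small representative one.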

And why am I telling you all of this? Well, when you think about what’s going to happen next, one thing is that these things work generally pretty well, but then you try to use them and they make up references. Like, if you ask it for papers I’ve written, it sounds so good. It’s got the journal name right, the title sounds sensible, maybe one of the people is my co-author. But it’s actually nothing that I’ve ever written. So that’s what you get when you’re just predicting text. And if you were a lawyer, or an academic, you would actually want real citations. And to make it worse, it’s like a mix-up of real citations and fake citations. So you ask it, “Are those real citations?” and it will say, “Oh, yes, go look on Google Scholar,” and you go look on Google Scholar and it’s not there, because you didn’t write the paper.

So it would be really useful to have something that’s, like, actually accurate. And we’re already seeing lots of companies trying to solve this problem in different ways. It’s like using this as a foundation, but then either layering something on the training, like fine-tuning on a specific dataset, or layering something afterwards, post-processing it to make sure that you get reliable results. And so the businesses that are gonna grow up around this are going to be needed to make the best use of this, like, application by application.

So then that leads us to, wow, what is this competitive market? What is this market going to look like from a competition perspective? How many of those big models are there going to be? How many of them are going to make use of proprietary data like your private text messages, or your internal review databases, or some other proprietary data that you have access to that can make your model better than other people’s models? That’s one set of questions. How important is the fact that you serve lots of people in the same industry going to be for making this post-processing work well? How much of it ultimately gets integrated into the main system?

These are all things that, we don’t really know how this is going to work out. And the scary thing is that, even at Stanford, we started a big project on foundation models a couple years ago, but people just don’t know. Like, they don’t know how they work. They don’t know how to make them safe. They don’t know how to make them private. There aren’t really good ways to know whether it’s going to spit out some particular string of input that somebody put in. You know, if you write the story of your life into it, and then ask it to answer a question about it or write in your style, is it memorizing something private about you, so that somebody can hack the system? And of course, we’ve seen all these examples of it getting kind of hacked by, like, trying to make it talk to you. Make it your girlfriend or boyfriend or whatever, and you start confusing it.

So it’s clear that it’s really difficult to make guarantees for the people interacting with these things, at least without more work around it. These are all things where everybody is in the same boat. The people making it, the people analyzing it, the people doing scientific research. Everybody’s trying to figure out all at the same time, how is this going to work? What are its characteristics? And how can we make sure that it’s safe and beneficial and competitive? The thing that makes me cautiously optimistic, though, is that we’ve seen the consequences of being really slow to figure stuff out. And so I think there’s a lot of energy to educate ourselves quickly.

42:08

Tommaso Valletti

Great, so I’m going to ask a last question, and then get ready with yours. Get ready, not yet. The last question is this.

What struck me in your keynote is that you mentioned the word “complicated” a lot of times. Complicated theory of harm. Complicated patterns. Complicated economics. Recently, Jonathan Kanter, who you’re working with, said, and I may be paraphrasing but this was the spirit, he said, “Too often, we fit data to models. And instead, we have to make sure that models fit reality. Fit the data.” So, the other way around. Which to me, but this is my interpretation, I didn’t ask Jonathan what he meant, to me this is a criticism, actually, of the way we do things in economics, and in economic policy especially. We basically converge sooner or later on a little model, and then the various parties try to feed data to that model and see where the model goes, irrespective of whether the model then predicts reality or not. It becomes a discussion of the model.

So I would like to know your views on this more complicated stuff, and at the same time making sure that we can fit the reality rather than discussing models. So is there a contradiction there? Because sometimes it seems like you want to bring even more sophisticated economics, even more complicated economics. In policy. I’m not talking about research. Research is a different thing. The timing for research is different. So, in the timeframe that we have in policy, where does it lead us? What kind of economics is going to be helpful with these complex environments?

43:51

Susan Athey

So maybe I’ll start by saying that there’s nothing more complicated to work with than the wrong model that leaves out the most important things. And I think that’s true in academics, in business, and in policy. If you have, like, just “square peg, round hole,” you can spend hours in conference rooms going in circles because you don’t actually have the right framework to discuss the needed complexity. I started my career as an economic theorist. Oliver Hart, an inspiration and role model, is here; people like Oliver and Jean Tirole and others; Patrick Rey’s here. In some sense, the beauty of applied theory is getting just the right level of abstraction that captures what’s important. And these brilliant role models all know that you can’t put everything in every model, and you have choices. But sometimes, actually, economic theory has been too simple to be useful, because there are a lot of incentives to focus on one thing at a time.

I’ll give you an example. Like with platforms, a lot of the literature, still today, assumes either that participants exogenously use exactly one platform, or they exogenously use all platforms. They don’t model the choice that participants make about how many platforms they should participate with. Well, if you go out and try to operate in one of these in a competitive environment, well, that’s what it’s all about. The whole thing is about affecting people’s participation across multiple platforms. About, if you’re small and growing, making it easier. If you’re big and entrenched, making it harder. You know, it’s like not rocket science. Now, I have research papers, theoretical research papers that try to deal with this, and the math got messy. And maybe if I had been better, or stuck to it longer, I would have come up with beautiful math that captured everything, but I didn’t. But, just leaving it out, then people use the shorthand, and they forget the assumptions they make. They say, “Oh, well, in this setting, of course, we all know X, Y, and Z happen.” And they forget that there was an assumption they made that killed all the interesting economics, right?

So the right level of complexity is the answer. You can go so far wrong and waste so much time, again, across all three domains, policy, business, and research, with the incorrect model that leaves out the core of the matter. You just can’t even have the right conversation. I think the challenge, why it’s hard work, is to figure out what are the common fact patterns, what are the things that are important over and over again? And then have a framework that does incorporate the complexity that’s driving the behavior you’re interested in, but is still something people can wrap their heads around.

But I mean, as our applied theorists here know, writing a beautiful research paper on applied theory can take years, sequences of papers, to really get it crystallized so that, yes, in like, one paragraph, your students can understand it. And so, it is hard work. But hopefully, the output is not complex to the reader. And to the listener. It’s just a lot of work to boil it down to make it so that it’s not complicated.

But when you get it right, it all makes sense. You just tell the story. And you’re like, “Oh, yeah, of course. That’s why that company did that! Sure. Yeah. Right.” That’s success. But achieving that success is super challenging intellectual work.

Tommaso Valletti

Very good, Susan. It’s very difficult to disagree with you. Let’s see now, from the audience, whether we will get lots of challenging questions. I’m also asking Filippo, since we started five minutes late, do we have until five past, or do they want to finish on the dot? Okay, so first I saw Barry there, and then, yeah, so if you,

one, two?

47:54

Audience (Barry)

Susan, I just wanted to start off by talking about a presentation you gave, probably around 2010. I saw it at the American Antitrust Institute spring meeting in June, in Washington. This was when you were working for Microsoft. It was a really powerful critique of the Google business model, which was based on sort of a combination of having a platform with monopoly power, a platform that has a monopoly on certain types of data, and then also a license to discriminate in its treatment of each individual company and each individual person, and hence to manipulate each individual company and each individual person. It was kind of a hellscape that you were describing in this description of the Hal Varian business model at Google. So, you know, it was a really fantastic presentation, and it blew the people in that room away. So my question, I actually have three intertwined.

Tommaso Valletti

Barry, one question, honestly.

Audience (Barry)

Isn’t the failure to apply nondiscrimination to Google and Amazon a main source of the complexity that you see in the world today? Is this not why Congress established the modern antitrust regime on a foundation of nondiscrimination in 1887 with the Interstate Commerce Act? And then, why is nondiscrimination apparently not part of your toolkit today, whereas you say the main tool should be economic models sort of making sense of what corporations want to do?

Tommaso Valletti

The three questions are related to each other, good. Okay, quick answer. And also, the other questioners, please try to be short so I can get to the next one.

Susan Athey

So I think there are multiple answers to that. And of course, there are multiple tools across not just antitrust but consumer protection, and we’re seeing a lot of new regulation around the world. But just within the confines of what we already know how to do: discrimination can be a way to lessen competition. And an acquisition that makes it easier to discriminate can both harm competition within the market that the acquired product competes in, and it can also implicate competition between platforms.

So I think it does fall under substantial lessening of competition. It’s a little bit complicated, but maybe not too complicated.

50:59

Audience (Brooke Fox)

Brooke Fox, managing editor of ProMarket. You’re all getting a little sneak peek, because we’re planning to do a series on computational antitrust, as well as the flip side, algorithmic collusion. And Jesse’s publication from ProPublica did this beautiful investigation into rent prices being fixed using algorithms. And so I guess, as an enforcer, I’d like to hear your comments on both sides of that coin. Both, you know, algorithms colluding to raise prices, as well as how enforcers may be utilizing algorithms to combat some anticompetitive behavior.

51:39

Susan Athey

That’s a rich question. I don’t want to comment on any particular company or, you know, matter. But generally, this is going to be a tricky problem to solve completely, because the use of algorithms is becoming ubiquitous, and they can have intended consequences and unintended consequences. So coming up with a good framework that sorts all this out is going to take time. This is a great place for more research, more writing, both on the legal side and on the economics and algorithms side.

I certainly am concerned. I used to not be as concerned. But I think, as we’ve seen research and facts play out, and just as algorithm use gets much more common, there are a lot of ways they can lead to higher prices or, you know, other kinds of either tacit or explicit collusion. So that’s something to be thinking about. It’s exciting to see this becoming more of a topic of research and conferences and writing.

I think in the race between enforcers and the world, the enforcers are going to be somewhat behind. We’re trying to catch up, but it is very hard, just in general. Like, we’ve had this detection problem for a long time. One of the first topics I worked on, when I was 17 and working for Bob Marshall at Duke, was trying to think about detecting collusion at auctions. And you can make a lot of progress on that. But if you have very sophisticated firms on the other side, they also have a lot of ways to evade it. Now, at some point you might have thought, well, maybe people aren’t that sophisticated, they don’t figure things out. But with this combination of “it’s actually the algorithms” and the fact that the firms are doing a lot more explicit optimization, and hiring sophisticated people to use sophisticated software to do sophisticated optimization, really, anything is possible. So yes, so Stigler, please keep sponsoring work on this.
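The concern about algorithms sustaining higher prices can be made concrete with a stylized sketch. The payoff numbers and the grim-trigger rule below are hypothetical, not a model of any real pricing system; the point is only that two simple pricing rules can keep the high price self-enforcing without any communication, because a one-period undercut triggers a lasting punishment phase.

```python
def per_period(own_high, rival_high):
    """One firm's per-period profit (prisoner's-dilemma shape, made-up numbers)."""
    if own_high and rival_high:
        return 5.0   # both charge HIGH: split the collusive profit
    if not own_high and rival_high:
        return 8.0   # undercut a HIGH rival: steal the market for one period
    if own_high and not rival_high:
        return 0.0   # priced above a LOW rival: sell nothing
    return 1.0       # both LOW: competitive outcome

def total_profit(deviate_at, T=20):
    """Firm 1's total profit against a grim-trigger rival: the rival charges
    HIGH until anyone has ever charged LOW, then charges LOW forever.
    deviate_at=None means firm 1 cooperates (charges HIGH) throughout."""
    punished = False
    total = 0.0
    for t in range(T):
        rival_high = not punished
        deviating = deviate_at is not None and t == deviate_at
        own_high = (not punished) and not deviating
        total += per_period(own_high, rival_high)
        if not (own_high and rival_high):
            punished = True  # any LOW price starts the punishment phase
    return total

# Over a long enough horizon, a one-shot undercut is unprofitable:
# cooperating yields 100.0 here, deviating once at t=5 yields only 47.0,
# so the high price sustains itself with no agreement ever communicated.
cooperate = total_profit(None)
deviate_once = total_profit(5)
```

This is the textbook repeated-game logic; the worry in the algorithmic setting is that pricing software can implement such trigger-like behavior, or learn it, faster and more reliably than humans can.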

Tommaso Valletti

Great. Question up there.

Audience (Edoardo Peruzzi)

Hi, I’m Edoardo Peruzzi, a graduate student in economics. Thanks for your talk. I would like to ask you, I think there is some evidence that courts sometimes have trouble understanding relatively simple economic models. And you were stressing a lot this idea of being accurate, of embracing complexity.

I would like to ask you, how can you be complex but also simple enough to be understood by judges and jurors, who are not specialists in economics, in court litigation?

Susan Athey

I would just come back to, like, it’s really confusing if you have gotten some idea of what’s going on, and you see business documents and business executives talking, and the economist is talking about something completely different. That’s confusing, and it may not be credible.

Of course, business documents can say all sorts of wild things that don’t align. So, we have to be very careful about how we interpret them. Like, everybody thinks they compete with the big tech company or something, even though they’re doing something very different. So you have to be a little bit careful. But at the same time, if your narrative is not aligned and the economic logic just doesn’t account for the most important facts that seem to be driving behavior, I think that’s confusing and that’s complicated. Because now you’re trying to keep two worlds in mind, the world that you think you understand and this other weird world. So, yeah, it can be hard to accomplish. But when you do accomplish it, it makes sense.

Like, and I’ve told this before, I teach MBAs who don’t have a lot of technical background. But they can get it. Like, if they’re trying to start a marketplace business, solving the chicken-and-egg problem, thinking about strategies to get the buyers on board, the sellers on board. They get that. That’s not complicated. They would think it’s weird that we had a hard time naming it. Like, that’s just, “Of course! That’s how the world works.” So, what’s the definition of complexity? It’s in the eye of the beholder. And things that make sense and are correct are often not complex.

56:05

Tommaso Valletti

Let me take maybe three questions. One is there, and one is there. And then also one from the right, since I always take questions from the left. How’s that?

56:14

Audience (Dave Dann)

Hi, Dave Dann. So you mentioned the competition order. And one of the facets of that was interagency cooperation. Not just on specific cases, but also sharing of information, industry data, designing remedies, at the investigations level. I’m curious how that looks in practice. I mean, you can’t tell me everything, obviously, but, you know, how wide and deep does it go? Is that at the level of economists? Are you talking to other economists at the agencies? Are you engaging in many of the things that are articulated in the order? Are there talks about what industries look like? Are you sort of imparting that information to other agencies?

57:12

Susan Athey

Yeah, all sorts of things. There are details back and forth between different agencies, both by antitrust lawyers and antitrust economists. As well as getting people from agencies that have industry knowledge to come in and work with us. I think part of it is picking people. With some of the top leadership of organizations, there’s a little bit more attention to how the government is affecting the end consumer, rather than just the industry that’s being regulated. So I think it’s leadership. It’s passion. And it’s a lot of commenting and behind-the-scenes conversations. It’s making priorities clear. And it’s almost, like, just asking the question. I mean, some of the things you can leave behind in all of this: if there’s a checklist of things some agency looks at, and you put competition on the list, well, it might not have always been on the list.

And so again, this is kind of just a little bit of framing to make sure that, if you’re a specialist in safety of some kind, whether it’s safety of banks or safety of airplanes or something else, some of these other ideas might seem abstract, or like something where you don’t really connect the dots between the decisions you make and those things. And I think it’s been just a lot of energy from people in the White House to kind of make it happen. Like, if there’s a problem, there’s the idea that the leadership is there to help break through logjams. So, it’s real, and it’s exciting.

Tommaso Valletti

Ioannis?

Audience (Ioannis Lianos)

Hello, Ioannis Lianos, Hellenic Competition Commission. You mentioned before the difficulty of defining markets, and you maybe challenged a little bit the idea of whether we need to define markets. You mentioned that we need to have more complicated, more sophisticated I would say, economic analysis, and a more case-by-case assessment. Now, my question is, when you draft guidelines, obviously you need to have some form of generalized principles that will apply to all possible cases. And in the US context, I would say that you probably have to draft guidelines in the absence of cases from the Supreme Court. So, in a way, you will have some form of potent guidance to give to the enforcers. What, in your view, will be the performance indicators of these guidelines? What will make successful guidelines? Is it possible to draft guidelines in this much more complex environment you’re describing? Because guidelines are usually a simplification, aren’t they?

59:58

Susan Athey

It’s hard, yeah. It’s a challenging task. But, coming back to it, the principles are common; it’s just that sometimes people get confused along the way. And again, some of that confusion can be created intentionally in the adversarial process. So I think it’s sort of heading off the places where things could get confused. And just making clear that these general principles really do work, even in more complex situations. That you don’t have to put the square peg in the round hole in order to apply the principle; the principle also applies to the square peg.

1:00:34

Tommaso Valletti

That would allow for one last question.

1:00:37

Audience (Ernesto)

Thank you. My name is Ernesto, I’m a graduate student in business. And I have a question very similar to David’s, but particularly focused on the coordination with regulatory agencies that also act as gatekeepers. In your experience, like, there is an obvious connection between competition and those kinds of agencies. Has the focus been on making the people that act as gatekeepers just understand that, as you said, connect the dots? Or is it that they have maybe more complex problems, or different priorities? And, specifically, thinking about financial regulation and stability, how has that work been done?

Susan Athey

I guess I could just say, without commenting on anything specific, what I think success looks like. What success might look like. It’s just that sometimes there’s low-hanging fruit. Like, you can accomplish the primary objective of an agency in a way that preserves competition rather than hurts it. And sometimes there are trade-offs, but we’re not really at the frontier between competition and safety. Like, poorly designed regulation can be inside the frontier. And so you’re looking for those opportunities to push out the frontier. And that’s the lowest-hanging fruit.

Sometimes you may have to make trade-offs, too. But first, let’s just make sure that we even have a good definition of the frontier. You can’t make the trade-offs if you don’t have an analytic framework and a conceptual framework with which to think about the trade-off.

1:02:20

Tommaso Valletti

Ladies and gentlemen, please join me in thanking Susan Athey for this great conversation. Thanks a lot.