Aaron Edlin and Carl Shapiro respond to Federal Trade Commission Chair Andrew Ferguson’s keynote speech at the 2025 Stigler Center antitrust and competition conference, in which he lays out his approach to regulating the content moderation policies of the major social media platforms. They explain why Ferguson’s approach threatens the exercise of free speech, is inconsistent with antitrust law, and politicizes the agency.


The current chair of the Federal Trade Commission, Andrew Ferguson, recently delivered a speech at the Stigler Center’s 2025 antitrust and competition conference explaining the motivation and goals behind the FTC’s investigation into the content moderation policies of social media platforms. His primary concern is that “increased concentration [in the social media space] can negatively impact the marketplace of ideas because it facilitates a variety of censorious practices; and censorious practices—whether carried out by state actors, private aggregations of power, or a combination of the two—is inimical to the free expression that makes our marketplace of ideas possible.”

In February, soon after Ferguson took over as chair, the FTC issued a Request for Public Comment Regarding Technology Platform Censorship. The FTC’s stated goal is “to better understand how technology platforms deny or degrade (such as by ‘demonetizing’ and ‘shadow banning’) users’ access to services based on the content of the users’ speech or their affiliations, including activities that take place outside the platform.”

Ferguson asserts that he is a champion of free speech, but his actions as FTC chair show the opposite. He is using the FTC’s powers to discourage or prevent social media platforms from freely choosing which content to include, feature, promote, or monetize. Those choices are an integral part of how social media platforms compete. They are also a form of speech.

Moreover, Ferguson’s approach is highly partisan, as exemplified by his explanation for why he is not concerned with the dangers of misinformation: “This is not only because I categorically dismiss elite and democrat [sic] hysteria over misinformation altogether, but also because I believe that a highly concentrated social media space generates far more opportunities for elite and left-wing manipulation than it does for populist so-called misinformation.” He has previously made clear the partisan basis for his views, stating: “‘Misinformation,’ of course, being Newspeak for ideas and speech inconsistent with progressive orthodoxy.”

Ferguson’s partisan use of the FTC’s powers should concern everyone who favors the rule of law and everyone who opposes government interference with free speech.

We now discuss the proper role of the FTC in regulating content moderation under the FTC Act, contrasting it with the approach that Ferguson has taken so far as chair.

Competition among social media platforms

Content moderation policies are an important aspect of how social media platforms design their products. Content moderation on any large, modern social media platform is complex and ever-changing. At a minimum, each platform must devote resources to blocking illegal content, such as child pornography, and it must respond to notices about copyright infringement. Beyond that, each platform must determine which content it regards as offensive enough to block entirely, which content to promote or demote via its algorithm, which content to exclude from ad monetization, and how it will control spam. In the language of competition policy, all this falls in the category of product design.

Competition among social media platforms will lead to some variety in content moderation policies. One platform may tightly control the speakers that it hosts, while another may be open to many speakers. One platform may exclude violent or sexually explicit content, while another may impose no such controls. One platform may remove content that its advertisers find offensive, while another may not. One platform may feature right-wing voices and another left-wing voices. One platform may cater to Christians and another to atheists.

At the same time, competition among social media platforms will likely also lead to some similarities across platforms, simply because they may have similar goals, such as exploiting network effects or attracting a wide variety of advertisers, and thus may independently choose similar policies. They also can observe each other’s policies, so one platform may learn from the successful policies adopted by others.

The mix of content moderation policies resulting from competition among social media platforms is hard to predict. Some may hope that a less-concentrated market would lead to better civil discourse, but that is by no means a foregone conclusion. In “Why Breaking up Facebook Would Likely Backfire,” we argued that competition in a market for “bads”—like the attention that social media platforms secure by being outrage machines—is likely to spur a race to the bottom and produce more “bads,” just as competition in a market for goods produces more goods. Regarding political speech, competition for consumer attention may well lead to a number of polarized outlets with shrill voices rather than centrist ones with sober voices. 

In any event, so long as the social media platforms are choosing their content moderation policies unilaterally, and so long as they are not engaging in deceptive practices, the FTC has no role in regulating these decisions. Period. This proposition should not be controversial.

This same conclusion applies to a “monopoly” social media platform, by which we mean a platform that lacks close substitutes for at least one category of participants, such as content providers, users, or advertisers. Antitrust does not limit the ability of even a monopolist to make unilateral product design decisions that do not exclude rivals, any more than it prevents a monopolist from charging high prices, in part because courts are ill suited to regulating such conduct. The prospect of having a court (or the FTC itself) in charge of the content moderation policies of a major social media platform is unappealing, to say the least. This is true even if the policies adopted by the “monopoly” social media platform are anathema to some groups.

While the FTC has no role in regulating the unilateral, non-deceptive content moderation policies of social media platforms, under certain circumstances it could have a role in regulating either (a) collusion among, or (b) deceptive practices by, social media platforms regarding their content moderation policies.

Collusion

Ferguson emphasized the danger of collusion in his interview with Professor Eric Posner following his speech:

I do not think that a particular platform’s propensity to censorship is categorically an indication of monopoly power, or is categorically an abuse of market power. Honestly, the sort of social media problem that has concerned me more before the purchase of X was the sort of eerie similarity of the censorship policies across all of these platforms, including the almost identically coterminous decision to eject Parler from the online world entirely at the exact same time, to get President Trump off of all the social media platforms within a couple hours of each other. The risk of collusion—which is made easier by an absence of competition—is what concerns me more, and I’ve written about this.

Ferguson’s example of Parler, a social-networking platform associated with conservatives, badly undermines his own argument. In January 2021, it was reported that Parler had been used to coordinate the January 6 attack on the Capitol. In response, both Apple and Google removed Parler’s mobile app from their app stores, after which Amazon Web Services stopped hosting Parler. Ferguson does not point to any evidence that these decisions resulted from these firms acting in concert, or that they acted against their own unilateral interests. To the contrary, it seems clear that each of Apple, Google, and Amazon, acting unilaterally, had a great deal to lose, and very little to gain, by continuing to offer or support an app that was used to organize an attack on the Capitol, an attack that threatened the peaceful transfer of power in the United States. Without more, a claim of collusion would surely be dismissed under the pleading standards the Supreme Court established in Bell Atlantic v. Twombly.

Ferguson seems to acknowledge that there is insufficient evidence on which to base a complaint that any social media companies have engaged in collusion regarding their content moderation policies. In his interview with Posner, he falls back on the idea that the “social media space” is concentrated and thus the danger of collusion is high, so investigation is warranted: “The fact that there is coterminous identical conduct—this is true across all of antitrust law—does not mean there’s an antitrust violation. Where there is smoke, there is not always fire. But there might be. And the whole point of having the antitrust enforcement agencies is, when you see smoke, at least take a look.”

But this argument fares no better. Generally speaking, a large social media platform has a unilateral incentive to block content that offends many of its users or advertisers. Therefore, there will typically be no basis for inferring or even suspecting collusion in cases where several platforms adopt similar content moderation policies in response to dramatic public events.

Another way to see that Ferguson’s logic is flawed and his approach is unprincipled is to apply similar reasoning in other areas. Suppose that, in response to distressing press reports about child labor abroad, several of the largest firms selling garments in the United States announce that they will not sell garments made using child labor. Would Ferguson suspect collusion in that case and launch an investigation? We doubt it.

Ferguson further undermines his argument that market concentration is a critical trigger for investigating the content moderation policies of social media platforms when he states in the Posner interview that the risk of an advertiser boycott targeting platforms based on their content moderation policies “is real and needs to be confronted and taken seriously.” He says this while acknowledging that there is a very large number of advertisers on social media platforms. Nevertheless, he asserts that advertisers can “all get into a back room and agree.” Ferguson’s approach here does not reflect the standard the FTC typically uses to investigate possible collusion, especially given the obvious unilateral incentive for an advertiser to avoid sponsoring content that many of its actual and potential customers find offensive.

Sincere, nonpartisan competition concerns do not explain Ferguson’s interest here. We see no “smoke.” Nor has Ferguson pointed to other indications of “fire” warranting an FTC investigation. 

Deceptive practices

The FTC has a legitimate role to play to ensure that social media platforms do not engage in unfair or deceptive acts and practices, including those relating to content moderation.

Ferguson is sympathetic to conservative speakers who complain about being excluded from social media platforms or whose access has otherwise been degraded through “demonetizing” or “shadow banning.” However, a platform can easily antagonize certain speakers, and deprive them of substantial revenue, simply by enforcing its stated content moderation policies. Indeed, it might well be deceptive not to block content that is prohibited under the platform’s stated policies.

We have no quibble with the FTC investigating whether a particular social media platform has engaged in deceptive conduct regarding its content moderation policies, so long as there is a valid basis for suspecting that such conduct took place and was substantial. The two questions in the FTC’s Request for Public Comment that specifically relate to deception and due process, standing alone, strike us as reasonable, despite our concern that the overall investigation is politically motivated. But we stress and warn that the remedy for deception by a social media platform is not to regulate that platform’s content moderation policies. The remedy is for the platform to cease the deceptive conduct by following its stated policies, whatever they may be.

Politicization of the FTC

When a new chair takes the helm at the FTC, that person naturally signals their priorities through speeches and enforcement actions. Given all of the issues facing the FTC, one has to ask why Ferguson is focusing on what he calls “technology platform censorship.”

Ferguson’s approach is part of President Donald Trump’s sustained attack on what he sees as the censorship of conservative voices by the major social media platforms. After Trump was elected in November 2024, Ferguson went out of his way to criticize the former management of X, back when the platform was still known as Twitter, and to praise its new owner, Elon Musk, writing: “X was once as censorious as the rest. Its current turn toward free expression is due only to its new owner’s unusually firm commitment to free and open debate.” This is ironic given the evidence indicating that Musk has shadow banned people who have criticized him on X.

The history and context here are telling. In May 2020, just after Twitter began placing fact-checking notices on some of his tweets, Trump issued an Executive Order on Preventing Online Censorship. That EO stated: “Twitter, Facebook, Instagram, and YouTube wield immense, if not unprecedented, power to shape the interpretation of public events; to censor, delete, or disappear information; and to control what people see or do not see.” The partisan basis for that EO was explicit: “Twitter now selectively decides to place a warning label on certain tweets in a manner that clearly reflects political bias. As has been reported, Twitter seems never to have placed such a label on another politician’s tweet.”

That same EO stated in Section 4(a): “It is the policy of the United States that large online platforms, such as Twitter and Facebook, as the critical means of promoting the free flow of speech and ideas today, should not restrict protected speech.” Of course, under the First Amendment, the government cannot dictate the content moderation policies of private parties.

Trump’s 2020 Executive Order then called on the FTC to “consider whether complaints allege violations of law that implicate the policies set forth in section 4(a) of this order. The FTC shall consider developing a report describing such complaints and making the report publicly available, consistent with applicable law.”

The FTC chair in 2020, Joe Simons, a Trump appointee, did not launch such investigations into the social media platforms. In Senate testimony, he explained why not: “Our authority focuses on commercial speech, not political content curation. If we see complaints that are not within our jurisdiction, then we don’t do anything.” After Simons gave that testimony, Trump reportedly summoned him to the White House to pressure him further, but Simons did not succumb to that pressure. What has changed from 2020 to 2025 is that the FTC is no longer acting as an independent agency and the chair is now willing, and seemingly eager, to wield the FTC’s powers for partisan political purposes.

We fear that this further politicization will undermine the FTC’s ability to perform its statutory mission and may ultimately lead to the agency’s demise.

The threat to free speech

As always when analyzing the economic and social effects of new technologies, it is instructive and important to find historical parallels. Since the founding of our country, newspapers and magazines have competed based on their content moderation policies. Fortunately, the First Amendment has prevented the government from controlling those policies.

We very much agree with Ferguson that, notwithstanding the First Amendment, the federal government has many powerful tools to pressure private parties so they will favor content welcomed by those currently in power and disfavor content that is critical of those currently in power. In the Posner interview, Ferguson describes the “threat that a government can always potentially inflict on any marketplace participant, which is, ‘We can make your life difficult.’ The regulators can show up, they can audit, they can investigate, they can cost you a lot of money, and the path of least resistance is: ‘Do what we say.’”

We also vehemently agree with Ferguson when he says in that interview: “I do generally think the government should not threaten private people with punishment because of things they are saying unless they are criminal.” Surely, this means that the federal government should not be pressuring social media platforms to change or abandon their lawful content moderation policies. Notably, Ferguson criticized the Biden administration for allegedly pressuring the large social media platforms, including Facebook, Twitter, and YouTube, to censor content “on a host of divisive topics like the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story.” Ferguson explicitly opposed government pressure on private parties to modify their content moderation policies.

Yet, Ferguson is now doing the very thing he condemns: he is using the FTC to apply pressure on social media platforms to change their content moderation policies to please Trump and his political allies. Ferguson offers a thin veneer for his attack on free speech—imagined collusion—but there is no basis for inferring collusion when one sees several social media platforms adopt similar policies that reflect each platform’s own unilateral interests.

The FTC should return to its mission of protecting competition and consumers. That mission does not include joining the president’s partisan vendetta by attacking free speech.

Authors’ Disclosures:

Aaron Edlin is the Richard Jennings Professor at the University of California, Berkeley, where he is in both the economics department and the law school. He has worked for Google and bepress, both of which have social media platforms with content moderation policies.

Carl Shapiro is the Transamerica Professor Emeritus at the University of California, Berkeley, and a Senior Consultant at Charles River Associates. He has given advice in recent years to Apple and Google, but not about their content moderation policies. A list of his antitrust testimony and his disclosure of entities providing substantial financial support is here.

You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.