Mihir Kshirsagar argues that the evidence presented in FTC v. Meta shows that discussions about the application of First Amendment protections to social media must move beyond the binary set out in Moody v. NetChoice between treating platforms as common carriers or as editors. Rather, a commercial conduct framework is needed to understand how speech operates on platforms designed to maximize user attention and ad revenue.


For years, First Amendment analysis of social media, exemplified in Moody v. NetChoice, has been stuck in a shell game of analogies. Either platforms are common carriers that faithfully act as neutral conduits, like telephone companies, as Justices Samuel Alito, Clarence Thomas, and Neil Gorsuch suggested in their concurrence; or they are newspaper editors exercising discretion over which content to publish, as the Moody majority held. Judge James Boasberg’s recent opinion in the FTC v. Meta antitrust action cuts through the factual assumptions underpinning the constitutional debate over platform regulation and suggests the need for a new framework based on commercial conduct.

The Federal Trade Commission had charged Meta with creating and maintaining an unlawful monopoly in “personal social networking” by acquiring Instagram and WhatsApp in 2012 and 2014, respectively. The court ruled against the FTC, primarily because it found that the alleged market the agency demarcated for personal social networking no longer fits the reality of today’s media environment. The opinion documents the collapse of the “social graph”: the network of human connections that once defined these products. In its place, the court explains, platforms have pivoted to “unconnected content,” where algorithms feed users high-engagement video from “strangers” to extract revenue from user attention.

While researchers like Kevin Munger, Renée DiResta, and Arvind Narayanan have been documenting this shift toward algorithmically driven media for some time, Boasberg’s factual findings, based on an extensive trial record, are conclusive evidence of the rapid and near-complete shift to feed-based online media.

I argue that this shift from connection to extraction should change the constitutional calculus underlying the protection of social media platforms. One clear implication is that platforms do not function as neutral conduits carrying posts between users. At the same time, even if the category of editorial discretion is formally capacious enough to include strategies of engagement optimization, extending it that far would confer First Amendment protection on commercial conduct that is unrelated to expression and that exposes users to a variety of harms. In the context of the government’s regulation of content-moderation practices, Kyle Langvardt and Alan Rozenshtein explain in Beyond the Editorial Analogy why the structure of social media platforms “strains First Amendment categories.” I suggest that the government’s regulation of the platforms’ engagement optimization strategies can be accommodated within the well-understood commercial conduct doctrine under the First Amendment.

The end of the “Neutral Conduit”

The Moody minority’s common carrier argument relies on the premise that platforms are neutral infrastructure connecting willing speakers to willing listeners. Texas and Florida relied on this theory to justify laws prohibiting “viewpoint discrimination” that were at issue in the Moody case. The Moody minority concurrence embraced this theory: “If the platforms are truly acting as ‘dumb pipes’—mere conduits for the speech of others—then they cannot claim that their refusal to carry certain speech is an exercise of their own ‘editorial discretion’.”

Legally, the common carrier or “neutral conduit” designation relies on the platform functioning as passive infrastructure that carries a user’s message to its intended destination without discrimination. The original Facebook “social graph,” a pre-existing map of connections that the user created and that facilitated the sharing of information between users, arguably functioned this way. It served as a pipe for transmitting information between users who had mutually opted into a connection. And the language once used by Meta CEO Mark Zuckerberg to describe Facebook as the “digital equivalent of a town square” resonates with the Moody minority’s view that “[t]he State has a legitimate interest in ensuring that these ‘modern public squares’ remain open to a diversity of views.”

Boasberg’s opinion illustrates that this transmission model or town-square ideal, to the extent it was ever realized, is effectively deprecated. The court found that Facebook users today spend only 17% of their time viewing posts from friends and family. On Instagram, it is roughly 7%. And the trajectory is accelerating: Boasberg observed that the share of content users see from their friends fell by nearly a quarter on Facebook and over a third on Instagram in just two years. The platforms have moved away from passively transmitting information between friends. Instead, they make millions of micro-decisions per second to show users content from advertisers and professional and semiprofessional content creators.

The “Editor” vs. the “Optimizer”

If social media platforms function less like common carriers, the reflexive view, adopted by the Moody majority, suggests they default to the status of newspaper editors exercising First Amendment rights. Under this framework, the engagement algorithm is merely a tool for organizing information, much like a layout editor.

However, the Meta trial record destabilizes this analogy by exposing the mechanical reality of that discretion. Justice Amy Coney Barrett’s concurrence in Moody questioned whether all algorithmic decisions warrant strong First Amendment protection. She suggested there could be a difference between algorithms that “carry out the platform’s editorial policies” and those that do not. Specifically, she suggested that algorithms intended solely to maximize profits, rather than to curate content to inform users or convey particular viewpoints, may not qualify for the same, most stringent First Amendment protections. The Meta trial record provides evidence to draw that line between editorial policies and revenue optimization.

Boasberg found that the “most-used part of Meta’s apps is thus indistinguishable from the offerings on TikTok and YouTube.” Why? Because all three have converged on engagement prediction models designed to maximize a commercial metric: time-on-platform. And, as the court explains, “time spent is the best proxy for what drives these apps’ revenue: ads.” As industry participants noted, “the competition is about marginal time.” Platforms compete to monopolize user attention for sale to advertisers. This is how the industry describes itself, not a characterization imposed by researchers or regulators.

The core question is whether the engagement algorithms that are geared to predict what will keep the user engaged for longer qualify for full First Amendment protection.

Those who believe that algorithms and strategies curating content to maximize engagement and revenue deserve full protection will argue that newspapers also optimize for engagement with sensational headlines or other tactics. But a headline reflects a communicative intent by human decision-makers to persuade; an algorithmic feed, by contrast, operates with functional indifference to meaning. It does not “read” the content; it calculates the statistical probability of triggering a user’s behavioral response to optimize ad delivery. The former is expressive; the latter is extractive.

Indeed, the record shows that Meta’s optimization strategy is largely indifferent to the content of the speech: it cares only about the performance of the asset. Internal experiments cited in the ruling revealed that enabling Reels (short videos by content creators, particularly common on Instagram) increased time spent by 48%, while boosting content from friends on Facebook by 20% left time spent “virtually unchanged.” The platforms appear to be competing on the efficiency of their attention-extraction machinery, a move likely driven by the sudden shift of attention to the short-form video platform TikTok in 2020. This validates the distinction Barrett hypothesized between algorithms in service of the platform’s editorial policies and those in service of its revenue-maximization strategy.

Emerging empirical research corroborates this characterization. A recent study led by my colleague Manoel Horta Ribeiro, using difference-in-differences methods, found that the use of short-form video platforms causally increases daily mobile usage and reduces time away from devices compared to traditional social media. The platforms are measurably winning the attention-extraction competition.

Notably, Boasberg’s conclusions about the competitiveness of the market do not mean the market is beneficial for end-users. Competitive markets can converge on harmful equilibria when costs are externalized. Platforms compete intensely for attention while the costs of extraction are borne by users and not priced into the transaction. Here, convergence on the same extractive model is evidence of a race to the bottom, rather than of a well-functioning market.

Regulating the machine, not the message

If engagement optimization is not akin to a newspaper’s editorial decision, neither is it purely “commercial speech” in the traditional sense of an advertisement proposing a transaction. An auto-play feature, for example, is not an advertisement for the service, but it operates to keep the user engaged for longer than they intend. Instead, I suggest we need a new framework that treats restrictions on engagement optimization as the regulation of commercial conduct that places incidental burdens on speech. The government is not censoring a message; it is regulating the functional machinery of the platform in a way that is unrelated to the suppression of free expression.

First Amendment jurisprudence applies different levels of judicial scrutiny depending on the nature of the speech restriction. Typically, content-based or viewpoint-based regulations face the highest scrutiny: the government must show a compelling interest, and the regulation must be narrowly tailored to remedy the harm, a standard that is very demanding and rarely satisfied. The next level, “intermediate scrutiny,” typically applies to content-neutral regulations or commercial speech restrictions. For content-neutral regulations, the government must demonstrate that the law advances a substantial interest, is not more extensive than necessary, and leaves open alternative channels for communication.

A lens that focuses on the “feed” as a product design choice rather than an expressive anthology clarifies the confusion seen in cases like the Ninth Circuit Court’s recent decision in NetChoice v. Bonta. While the court there blocked attempts to regulate design features like “like counts” by viewing them as protected editorial choices, a commercial conduct framework would recategorize them. Viewed through the lens of the Meta findings, these features function less as expressive decisions and more as mechanisms that exploit behavioral shortcuts to maximize ad inventory.

My suggestion is to move past the binary of “conduit vs. editor.” We should treat engagement optimization as the industry treats it: a functional business practice. This framing is mirrored by attention activists, who have coined the term “attention fracking” to describe the business model. Just as the government regulates the safety of physical infrastructure, it can regulate the “extraction apparatus” of digital infrastructure without dictating the content flowing through it.

Treating engagement optimization as commercial conduct does not permit the government to ban specific viewpoints or mandate “fairness.” But it does open the door to regulating the business practices of optimization under intermediate scrutiny, such as transparency mandates, friction requirements, or bans on harmful “dark patterns.”

The court’s findings in Meta provide a roadmap for establishing the “substantial state interest” in regulating potential mental health harms from engagement-driven algorithms, as well as the “narrow tailoring” evidence needed for this intermediate approach:

  • Algorithmic Transparency. If the feed is a commercial optimization tool, the state can compel disclosure of “factual and uncontroversial” information (like a “nutritional label”) without infringing on an “editorial secret.”
  • User Agency. Regulations requiring platforms to offer a stated-preference feed (based on the user’s subscriptions and follows) rather than an engagement-optimized one would not force the platform to carry speech it dislikes. It would simply require the platform to honor the consumer’s expressed intent over the platform’s commercial optimization goals.
  • Design Features. Features like infinite scroll, auto-play, or like counts are deployed to maximize ad inventory, not expression. They operate identically regardless of content and don’t express editorial judgment. Instead, they exploit behavioral psychology to maximize engagement metrics. Regulating these mechanics addresses the commercial structure, not the speech itself.

Conclusion

The constitutional questions regarding platform regulation have revolved around an outdated view of social media companies. The social graph is effectively dead, and with it, the “neutral conduit” theory. What has replaced it is an algorithmic apparatus for selling ad inventory. This looks less like the “editorial discretion” of the Miami Herald and more like the yield management of a commercial exchange.

By grounding future cases in this record, courts can move past the binary of “censorship vs. freedom” and treat engagement optimization for what the facts show it is: a sophisticated commercial operation subject to constitutional, but attainable, regulation.

Author Note: The author thanks Arvind Narayanan and Manoel Horta Ribeiro for helpful feedback on an early draft.

Author Disclosure: The author reports no conflicts of interest. You can read our disclosure policy here.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.
