
Can the Public Moderate Social Media?


ProMarket student editor Surya Gowda reviews the arguments made by Paul Gowder in his new book, The Networked Leviathan: For Democratic Platforms.


It is broadly recognized among legal and communications scholars that internet platform companies exercise quasi-governmental authority in regulating their users’ behavior. In the United States, social media content moderation, in particular, is often compared to the government’s constitutional responsibility to protect and, in some cases, regulate free speech. Those who claim social media companies’ existing standards for online speech moderation are too draconian typically appeal to Americans’ First Amendment right to free speech as a defense against such interventions.

In his new book, The Networked Leviathan: For Democratic Platforms, Northwestern Law professor Paul Gowder argues that platform companies’ resemblance to our government should be taken a step further. He proposes that platforms like Twitter and Facebook—the latter of which he previously worked for as something of an “in-house democratic theorist,” in his words—be governed as democracies.

Gowder claims that allowing ordinary people to have a say in platforms’ rule enforcement, rule development, and ultimately even product design would help the companies better understand and more efficiently advance the public interest. Events like the social media-induced ethnic cleansing of the Rohingya people in Myanmar or the January 6 attack on the U.S. Capitol, he says, would never have occurred (or, at least, would have been much less likely to occur) had the publics of Myanmar and the U.S. participated in content moderation. In a platform democracy, companies would have to respond to the concerns of their products’ users when those users perceive a serious issue or impending crisis based on circulating content. User feedback would compel them to prevent provocative content or misinformation from evolving into off-platform harms.

The idea of private companies with governments of, by, and for the people might seem unrealistic. After all, The Networked Leviathan’s very title is a reference to the seventeenth-century English philosopher Thomas Hobbes’ masterwork, Leviathan. Writing in the midst of the English Civil War, Hobbes saw a population deeply divided on matters of religion and argued that only rule by an absolute sovereign (or Leviathan, his metaphor for the all-powerful state) could ensure the civil peace and security he believed was in the public interest. For Hobbes, the people needed to be controlled, not trusted. However, Gowder’s more optimistic stance toward the political nature of people and his preference for democratization over Hobbesian monarchy are not without empirical merit. As Gowder points out, Wikipedia’s editorial scheme based on open collaboration and Reddit’s novel methods of disaggregated content moderation show the desire for and effectiveness of involving regular users in some aspects of platform governance. Twitter’s crowdsourced fact-checking program, Birdwatch, has also shown signs of success since its broad rollout in late 2022.

The Networked Leviathan provides a strong justification, grounded in a century’s worth of political theory, for democratic governance of platforms—at least when it comes to executing tasks whose successful completion is in the interest of both companies and the public. Various prominent scholars in the social sciences, such as Elinor Ostrom, have illustrated the benefits of multilevel, participatory governance structures over centralized, authoritative ones, according to Gowder. Applying these insights to platform governance would allow companies to better manage the complexity of their userbases and products and, ultimately, maintain healthy platform ecosystems free from misinformation, scams, incitement, and hate.

Gowder offers two main reasons why platforms governed in a top-down manner are often unable and even unwilling to regulate user behavior effectively. The first is that platforms’ global scale creates what he calls a knowledge gap. Technology companies based in Menlo Park or Seattle often lack the cultural and political capacity to understand what global issues they need to monitor and manage, especially those in relatively smaller markets. In Myanmar, for example, Facebook hadn’t built the capacity to effectively monitor Burmese-language content and, as a result, failed to take down propaganda in 2016 and 2017 that led to the genocide of the Rohingya minority. 

The second reason centralized platform governance structures often fail to regulate user behavior is that the leaders of tech companies do not reliably pursue long-term interests over short-term gains. Companies have a short-term interest in making a quick profit and a long-term interest—which they share with the public—in maintaining a healthy platform environment. While companies may, for example, immediately benefit from engaging social media users with political misinformation, they tend to lose out in the long term if the presence of politically toxic content causes their products to become unpalatable to mass-market consumers or results in advertiser boycotts. Nevertheless, firms may prioritize short-term returns due to technical difficulties, political pressure, or diverging incentives between top-level leadership and lower-level employees. Facebook executives during the Trump presidency, for instance, abstained from rigorously applying their company’s misinformation policies to Breitbart News because they felt it wasn’t in their short-term interest to “start a fight with Steve Bannon,” the publication’s executive chairman who served as Trump’s chief strategist during the first few months of his term.

But if those who advocate keeping platforms’ top-down governance structures vastly overestimate the degree to which technology companies, as they currently function, can limit their inadvertent supply of harmful content, Gowder may underestimate the degree to which the public demands that content in the first place. Put differently, it is often the public, not companies, that is the problem.

Gowder writes that when it comes to platforms’ failures to prevent the spread of misinformation, scams, and other harmful content, “nobody (except the perpetrators) wants this result.” However, recent polls show that around a third of Americans believe in false claims about the coronavirus vaccine. A similar portion believes President Joe Biden only won the 2020 presidential election due to voter fraud. These figures may not represent majorities of the American voting public, but they certainly suggest that it is much more than “nobody” promoting harmful content. Just about every American may prefer that platforms eliminate harmful content as a theoretical category of detrimentally antisocial behavior, but substantial portions of the public actually agree with specific claims that Gowder would deem contrary to the public interest.

Gowder does recognize that the public, and not just companies and hostile state actors, can generate social harms through its participation in platform governance. Taking India as an example, he acknowledges that including ordinary people in platform rule enforcement and development means empowering a plurality of the voting public who support Hindu nationalism rather than the liberal democratic politics he favors. Gowder states that the answer to this dilemma may just be that “however poor the record of governance-from-below in making decisions rooted in error and malice, the record of governance-from-above is immeasurably worse.”

Interestingly, although Gowder argues for democratizing social media platforms, he does not believe that competition among platforms, which some scholars argue would push companies to answer the public’s demands, would actually solve content moderation issues. He argues that bigger, more established companies have sophisticated teams of policy personnel, large datasets with which to train machine learning models, and other resources that help them carry out content moderation more effectively than smaller companies. They also have strong incentives to keep their products compatible with mass consumption markets by, for instance, not directing their users to or placing their advertisers’ logos beside neo-Nazi content. Smaller platform companies that cater to a niche market, like Parler and Gab, on the other hand, don’t have such resources or incentives; they stay afloat precisely by allowing their users to view and circulate misinformation. Using antitrust law to “break up Big Tech,” therefore, might actually exacerbate current issues surrounding social media content moderation by creating conditions in which some firms are incentivized to provide harmful content to meet the demands of specific segments of the market. In this way, Gowder tilts back toward a more Hobbesian conceptualization of Leviathan.

Still, Gowder’s digital regime is one in which Leviathan must share rather than hoard its power. In proposing democratic governance of platforms, Gowder looks to the public in hopes of finding a citizenry eager to assist companies in advancing a public interest defined as the pursuit of liberal democratic goals. He acknowledges his own optimism in this argument. Perhaps it is too idealistic. Can a democratically governed “networked Leviathan” ensure the existence of healthy internet platforms that are as free as possible from misinformation, incitement of violence, scams, and hate? Or does the only chance for a safer and less divisive internet landscape lie with strengthening social media platforms’ current authoritarian governance structures and solidifying Leviathan as Hobbes imagined?

Articles represent the opinions of their writers, not necessarily those of ProMarket, the University of Chicago, the Booth School of Business, or its faculty.
