In response to rising concerns about political disinformation, governments have introduced a slew of interventions. Federico Vaccari writes in new research that these interventions may appear ineffective or even counterproductive, yet they can still make citizens better off overall. Regulators must be flexible and holistic in their policy designs.


In modern democracies, citizens rely on the media to provide the information they need to vote conscientiously. The vast majority of Americans obtain information about presidential elections from cable or local television, social media, and news websites. However, according to a survey from Gallup, U.S. adults perceive traditional news sources, such as cable TV and newspapers, as biased and inaccurate. These concerns are even more pronounced with social media.

There are several reasons why citizens may be skeptical of news media. First, the recent rise of disinformation and fake news has eroded public trust in media organizations. Just 7% of Americans have “a great deal” of trust and confidence in the media, according to Gallup. Second, media outlets may have their own biases and agendas, which can affect the accuracy and impartiality of the information they provide. Despite the public’s skepticism, empirical evidence shows that media usage affects voters’ behavior. For example, a 2007 paper by Stefano DellaVigna and Ethan Kaplan estimates that “Fox News convinced 3 to 28 percent of its viewers to vote Republican, depending on the audience measure.” 

The media’s ability to inform voters and shape their beliefs highlights its significant role in the political arena of democratic societies. As a result, the proliferation of disinformation has become a major concern, especially regarding the use of fake news to manipulate public opinion and undermine the integrity of elections. In response, there has been a growing call for regulation, with governments and lawmakers worldwide exploring and implementing measures to counteract disinformation.

Regulating the news media presents a more complex challenge than regulating traditional product markets. Regulation has traditionally centered on promoting competition in the “marketplace of ideas,” with the belief that increased media competition will lead to more accurate reporting, less biased news, and better protection against capture by governments and elite groups. However, even a small media outlet can wield significant influence if it has exclusive access to a breaking story that has the potential to sway an election. In such instances, the outlet effectively becomes a monopolist over that story, and regulating market concentration becomes ineffective.

Recent efforts to address disinformation have centered on increasing the costs associated with its production and dissemination. This increase can be achieved through a range of measures, such as imposing higher fines as a deterrent or supporting watchdog organizations that identify and confront sources of fake news. Examples of these efforts can be seen in the United Kingdom and California, where media literacy is being emphasized in schools. In Germany, the Network Enforcement Act imposes fines of up to €50 million on online platforms that fail to remove illegal content. Other countries, such as Singapore, Taiwan, and Russia, have enacted strict penalties for publishing disinformation, including heavy fines and even imprisonment. These interventions all aim to curb the spread of disinformation by increasing the cost, in terms of resources, time, and effort, required to produce and deliver it.

There are concerns, though, that anti-fake news laws may be ineffective or even counterproductive, amplifying fake news rather than curbing it. For example, efforts to warn readers about potentially misleading news have inadvertently backfired, leaving some readers more strongly convinced of the false information. Indeed, in my recent study, I developed a model showing that measures such as fines that raise the cost of producing and spreading false information can lead to even more disinformation and a less informed electorate. Nevertheless, the model also indicates that citizens may still benefit from these interventions. In other words, interventions that result in more disinformation are not necessarily detrimental overall.

To see why, it is essential to recognize that media bias and the production and dissemination of disinformation have broader implications beyond distorting voters’ choices at the ballot box. Importantly, they can also shape the policymaking process. Intuitively, politicians act strategically, knowing that their actions will be filtered through the media before reaching the public. Interventions that affect how voters receive information can indirectly change the incentives of politicians, who tailor their actions to appeal to the media.

For example, lax measures allow the media to promulgate disinformation to influence voters and sway public opinion without consequence. To secure the support of influential media outlets during elections, politicians may then prioritize policies that benefit the media, such as excessively generous subsidies and concessions, over policies that benefit the public. Thus, a stricter policy that regulates what the media can say might inadvertently increase disinformation yet change politicians’ incentives in a way that improves governance. A holistic analysis that considers both the informational and political effects of regulation reveals that measures which increase the spread of disinformation can still benefit voters by improving policymaking.

To maximize the benefits of anti-fake news regulations, two considerations stand out. First, independent agencies rather than politicians should carry out regulation, reducing the risk of political interference. Since anti-fake news laws can significantly impact policymaking, politicians may be tempted to use them to advance their own interests or those of their party, even at the expense of the public. Governments may also have a vested interest in keeping the costs of producing disinformation low precisely to leverage the media’s influence over voters’ beliefs. Consequently, regulatory efforts may fall short if politicians oversee them. In such cases, citizens’ welfare can be severely compromised, even if the production of disinformation is fully curtailed.

Second, the “size” of the regulation is an important factor to consider. Lenient regulatory interventions can be ineffective when disinformation is already easy to produce and disseminate: modest increases in production costs may have no impact whatsoever, making them a waste of public resources. In such cases, regulators should either implement substantial interventions or refrain from taking any action. This observation highlights the importance of carefully assessing the situation and tailoring regulatory efforts to achieve the desired outcome.

Assessing the effectiveness of interventions designed to counter disinformation can be challenging. A regulatory measure that is efficient and beneficial can be erroneously perceived as detrimental when it results in more disinformation. These counterintuitive effects make it difficult, if not impossible, to evaluate the efficacy of such interventions by analyzing the media’s reporting behavior alone. However, this also suggests that concerns about ineffective or counterproductive anti-fake news laws may be overstated. Going forward, regulatory efforts to curb disinformation should not focus solely on reducing the spread of fake news but should also consider its policy implications. Successfully tackling the dual challenge of evaluating and designing anti-fake news laws is a critical step in the fight against disinformation and essential for ensuring the smooth functioning of democratic societies.

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.