Caio Mario S. Pereira Neto reflects on the discussions at the Stigler Center’s 2025 Antitrust and Competition Conference and addresses the problems that confront Brazil’s courts as they navigate the tradeoffs between removing disinformation that threatens electoral integrity and observing constitutional protections for freedom of expression.
At the Stigler Center’s 2025 Antitrust and Competition Conference on “economic concentration and the marketplace of ideas,” Tim Wu presented a vivid depiction of some of the challenges society and regulators face in our current information environment. In particular, he explored how state and non-state actors are using speech itself as a form of information control through the harassment of speakers, troll attacks, and flooding the public sphere with disinformation. These actors deploy these strategies to silence or reduce the reach of other voices (a point Wu detailed in a thoughtful piece published here).
As Wu pointed out in his panel, “Monopolies, Democracies, and the Information Ecosystem,” one confounding feature of these strategies is that they are usually protected by an anti-censorship principle, such as the right to free speech. Because this guarantee protects speech without regard to its purpose, it shields from courts and regulators even speech that is generated to suffocate other speech. Thus, illiberal actors can rely on anti-censorship principles as part of their broader strategy to control the public sphere by flooding it with misinformation and harassment.
In a separate panel at the conference dealing with social media and democracy beyond the United States and European Union, I discussed Brazil’s experience with disinformation in elections. Brazil provides an interesting illustration of how anti-censorship principles and illiberal flooding strategies can reinforce each other to threaten electoral integrity. This article reflects on the comments of the other conference participants to expand my panel discussion.
As I pointed out during the conference, the Brazilian electoral courts, especially the Regional Electoral Tribunals (TREs) and the Superior Electoral Court (TSE), have been dealing with disinformation in a case-by-case, content-based approach (a brief description of the Brazilian system can be found here). Since the 2018 elections, courts have had the authority to order digital platforms to take down content purporting to be factual but which is “known to be untrue” during election cycles. In the 2022 elections, the TSE included a second legal standard to allow the takedown of content containing “severely decontextualized” facts.
To cite one example of this court-enforced moderation in 2022, the TSE ordered the takedown of content alleging that the QR code present in the new federal electoral ID (título eleitoral) would lead to automatic voting for the candidate Luiz Inácio Lula da Silva, insinuating that electoral fraud was taking place. The TSE asserted that the QR codes in the new electoral ID contained only the individual’s personal and voting registry data, which were meant solely to authenticate voters’ identity and had no relation whatsoever to any candidate. The content was thus considered “known to be untrue,” and the court ordered its removal from all platforms.
As expected, this system has led to intense litigation, with expedited procedures generating hundreds of decisions ordering platforms to take down content categorized as disinformation. FGV-CEPI, a research center within FGV Law School in São Paulo, developed a database with 1,492 cases addressing online disinformation for the 2018 elections alone. These cases comprised a total of 2,850 decisions (including interim measures and appeals).
In the Brazilian legal system, the guarantee of freedom of expression does not constitute an absolute shield for flooding strategies. Courts must balance this guarantee with other constitutional principles, such as the fairness of elections, and they may take measures against what they consider disinformation. Yet, the courts’ interventions have elicited anti-censorship concerns from the public.
This reaction to the courts policing election disinformation has been particularly strong among right-wing parties and their supporters, as illustrated by an FGV study about reactions to the Supreme Court decision that banned X in Brazil for a few weeks in 2024 (outside the electoral period). They criticize the courts, especially the TSE, for violating rights to freedom of expression by taking down content and allegedly showing unfair political bias. Although there are no systematic studies showing whether court decisions are being applied disproportionately to right-wing or left-wing posts (clearly a point that deserves further research), reactions against the takedown orders have been notably more intense and frequent among right-wing groups.
The current balance between the courts’ considerations of freedom of expression and electoral integrity is delicate, and the courts must confront not only a tradeoff in values but also a feedback loop that generates vitriol against their interventions. Indeed, the very act of taking down content draws more attention to such content, in a variation of the Streisand Effect. The significant visibility of the takedown orders fuels the anti-censorship discourse. Expanded publicity and pushback then increase the incentives of the parties claiming to be victims of censorship to publish even more disinformation, both in quantity and in the extremity of the content. The net effect may well increase the flood of disinformation, to which the courts then feel obliged to respond, starting the loop again.
Thus, in a twist of irony, the courts’ efforts to protect election integrity and ban disinformation can ultimately generate additional harm to the information environment. The feedback loop puts the courts in a bind. On the one hand, if the courts do not strike down disinformation, their inaction favors the political group trying to flood the public sphere. On the other hand, if disinformation is struck down (by definition, only the most outrageous and baseless pieces will be struck down in a system that requires court attention on a case-by-case basis), the takedown also favors the group supporting it, because it draws attention to the content and reinforces anti-censorship arguments. False positives, in which courts take down content that in hindsight should have been left up, further fuel detractors’ arguments.
This paradox illustrates how complex it can be to fight disinformation in the electoral context. The fact is that courts’ decisions dealing with alleged disinformation are themselves part of the information environment being appropriated, explored, and contested by different political groups. As such, courts cannot act as if they were positioned in an Archimedean point outside the public debate, taking a neutral position that will not generate strong reactions by those affected by it.
Despite these challenges, courts have had an important role in protecting the integrity of Brazil’s elections. To be more effective, judges should stop targeting fragmented content (e.g., individual posts or comments) and instead redirect their energy to combating the organized spread of disinformation, coordinated attacks against the integrity of elections, and the funding of sponsored content to spread fake news. Targeting these orchestrated disinformation campaigns would have a greater impact on protecting the marketplace of ideas. This may require some procedural changes and additional investigative tools.
Also, considering the difficulties associated with courts moderating content, it is worth exploring alternative strategies to deal with the flood of electoral disinformation. Wu suggested a few possibilities in his conference presentation (e.g., targeting specific conduct such as harassment, robot-aided flooding, and deliberate defamation). In addition to those, other initiatives could focus on network architecture and the rapid spread of disinformation. To name a few, governments and platforms could consider increased transparency regarding paid social media advertisements, user control over recommendation and curation algorithms, temporary limits on forwarding information to multiple users and groups during elections, limits on group sizes during elections, and mechanisms to detect and exclude non-human accounts and bots.
There is no silver bullet to minimize the flooding of disinformation during elections, which will certainly require multiple strategies. Understanding the dynamics of the information environment and the impacts of the current measures taken in any jurisdiction, as well as their potential unintended consequences, is a first step to designing more effective policies to cultivate a healthier public sphere. The ultimate goal must be to preserve an open and robust democratic debate, protecting freedom of expression, the legitimacy of elections, and the integrity of the institutions supporting them, including the courts themselves.
Author Disclosure: Caio Mario S. Pereira Neto is a Law Professor at FGV Law School (São Paulo) and a practitioner. His views may be influenced by his practice, which involves consulting on issues related to digital platforms, including for clients like Google, Uber and AirBnb. However, he does not practice nor provide advice on the subjects discussed in this article. He also discloses that he participated in a research project called the Disinformation Observatory in 2022 Elections, which received funding from a local NGO called Gallo da Manhã that focuses on strengthening democratic institutions. Meta has granted unconditioned research funding to the COMPPITT Research Nucleus at FGV Law School. You can read our disclosure policy here.
Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.