Will increasing the liability of internet platforms mitigate disinformation? Economists weighed in on the effects of limiting or repealing liability protections for Big Tech in a recent survey from the Forum at the Kent A. Clark Center for Global Markets—previously the Initiative on Global Markets—at the University of Chicago Booth School of Business.

The U.S. Supreme Court recently heard arguments in two cases regarding Section 230 of the 1996 Communications Decency Act, which shields tech companies from liability for content their users post. At the same time, Section 230 gives internet publishers the discretion to remove content they deem objectionable. The cases turn on whether tech companies can be held responsible for deadly terrorist attacks because terrorist groups recruited members and spread their message on social media platforms.

Liberal and conservative politicians alike have called for limiting or dismantling Section 230’s protections, citing concerns about issues such as disinformation—the deliberate sharing of false information, often with the intention of swaying public opinion—and bias against political speech. Meanwhile, this immunity from liability for user-generated content has played a part in the growth of internet platforms like Facebook, Google, and Twitter.

Over half of the economists surveyed by the Clark Center agree or strongly agree that imposing stronger legal liability on online platforms would reduce disinformation.

Daron Acemoglu of MIT was in the majority of economists who agreed, saying “These platforms are playing the roles previously performed by newspapers, but without editorial responsibility. Legal liability would push them towards responsible publishing, especially toward less algorithmic boosting of the most questionable content to maximize user engagement.”

Robert Shimer of the University of Chicago also agreed, while adding the caveat that “it will also reduce the amount of controversial but correct information through overzealous moderation.”

“Depends on what the platforms are liable for. Can’t ask them to fact-check all posts, so mis-information would still abound,” said Richard Schmalensee of MIT, who was uncertain about the effects of stronger legal liability.

Kenneth Judd of Stanford was also uncertain, stating that “The major sources of misinformation will find ways around the new rules.”

Although the majority of economists surveyed agree that imposing stronger legal liability on online platforms would reduce disinformation, they are less certain about the effects of such reforms on the advertising business on which these tech companies rely heavily. Asked whether stronger legal liability would substantially damage online platforms’ advertising revenue, 44 percent of the economists reported uncertainty, while only 22 percent agreed that such reforms would cause substantial damage to platforms’ bottom line.

Aaron Edlin of the University of California, Berkeley, was uncertain, saying, “it depends upon how much content is eliminated and the positive or negative value of advertising next to that content. Consider that Musk reportedly lost, rather than won, boatloads of advertising revenue (in part) by loosening content restraints.”

“If I have to guess I would say it will lower it, but there are offsetting forces that will shape the new equilibrium and hence ad prices and revenue,” said Anil Kashyap, co-director of the Clark Center. 

Acemoglu agreed that the effects on tech companies’ advertising business would be substantial, saying, “Leading platforms’ business model is currently based on maximizing user engagement, often via emotional triggers and outrage. Removing some of the questionable user content would reduce this type of engagement and digital advertising revenue.”

Articles represent the opinions of their writers, not necessarily those of the University of Chicago, the Booth School of Business, or its faculty.