Abolishing Section 230 would address neither disinformation and propaganda on social media nor charges of anti-conservative censorship. But its repeal would probably hurt startups and smaller rivals, further insulating the big platforms from competition.

Editor’s note: This piece was previously published by The StartUp and is reprinted here with permission from the author.

I like to think of myself as someone who’s been decently critical of excessively concentrated power in the tech platforms. Back in 2010, to general derision and laughter, I wrote that tech platform monopolies might well be a growing problem, and I flatter myself that I was early in calling for an antitrust campaign to break up Facebook.

But I have never really been on board with the idea that abolishing the platforms’ immunity is a good idea, or even very important to the goals that either the left or the right holds dear. It is, it seems to me, the wrong tree to bark up, a red herring — you choose the metaphor.

If you’re reading this, you probably know what I’m talking about, but just in case: we are speaking of the repeal of Section 230 of the 1996 Communications Decency Act, which grants an immunity to platforms that host the content of others. It says, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Some on the left have been calling for its repeal for some time; and President Donald Trump recently demanded that Congress repeal it.

The intuitive case for abolishing that 230 immunity goes like this. “Newspapers and TV stations are fully responsible for what they publish or broadcast. Why should Facebook / Twitter / Reddit / 4chan be getting some kind of special treatment?” After that, the left and right make different arguments, which I’ll simplify:

Left: “We have a huge problem with fascist disinformation and propaganda, and the platforms are a big part of it, because they bear no responsibility for what appears on their platforms.” 

Right: “The platforms are grossly biased against conservative speech, and they should only have immunity if they don’t censor anyone.”

With great power comes great responsibility, right? Who could be against that? The only problem is that abolishing Section 230 would address exactly none of these complaints.


First, no one can deny that Facebook and Twitter, not to mention 4chan, have been breeding grounds for lots of crazy disinformation and propaganda over recent years. But, for that matter, so have Newsmax, Breitbart, OAN, the Gateway Pundit, and dozens of other sites and broadcasters that don’t have 230 immunity.

Stated differently, some liberals seem to have the fantasy that potential civil liability would finally force platforms to do more about disinformation on their sites — “to take responsibility.” But what does that mean? Because whatever the moral responsibility may be, there aren’t actually any legal repercussions for republishing, or publishing, crazy propaganda and conspiracy theories. If there were, Newsmax, the Gateway Pundit, and even Fox News would not exist. The First Amendment protects your right to claim, or republish a claim, that Hugo Chavez threw the Georgia election.

What is (narrowly) illegal is defamation. So a main effect of abolishing Section 230 would be to create potential liability for posts in which you complain that your boss is a sexual harasser, that your doctor is an incompetent fool, or that your coach is a racist. In other words, it would lead to the preemptive takedown of damaging claims made in public about private individuals. Is that really the promised land?

Technically, if your boss is in fact a sexual harasser and your doctor is incompetent, the accusation isn’t defamation (truth is a defense). But that doesn’t matter, because the threat of liability would be enough: a platform like Facebook would be inclined to take down any such accusations rather than litigate the question. And while there is more to tort liability than defamation, the larger point is that the actual effects of a Section 230 repeal would depend on the convoluted question of what forms of private tort liability for speech exist that are not protected by the First Amendment, and what lawsuits the private bar is incentivized to bring. That’s pretty much a random walk of takedowns and lawsuits, as opposed to whatever better world you might be dreaming of.

Repeal of 230 would also probably hurt smaller platforms or startups more than the larger ones — say, a small-town newspaper’s comment section, or a startup challenger to Facebook. The big platforms have the resources to pay for all the screeners to take stuff down before they get sued, but startups don’t. A few nasty lawsuits could kill them. In this way, Section 230 repeal might even further insulate the big platforms from smaller competitors. Great, huh?

The right-wing fantasies about 230 repeal are even more off base. For one thing, without Section 230 immunity, a figure like Donald Trump would almost certainly be kicked off Twitter, because he constantly defames people. He would be way too expensive, in liability terms, to keep around. (Yes, it is odd that Trump is calling for something that would get him thrown off social media.)


As for erasing anti-conservative bias, repeal of Section 230 would have no obvious effect on that at all. The decisions to remove things like false claims of election fraud are the decisions of the platforms themselves — decisions for which they already bear liability, if any. Zero effect.

As this suggests, what the left and right really care about are the content moderation policies of Facebook, Twitter, and so on. And those, as it stands, have little to do with Section 230. But content moderation, as an exercise of editorial discretion, is protected by the First Amendment. And that Congress can’t repeal.

What should the left and right be asking for? On the left, if you think there is too much conspiracy theorizing, insane incitement to violence, and threatening speech out there, you want stronger content moderation. And if you ultimately don’t trust the platforms to do a good job without legal threats, then what you really want is not Section 230 repeal, but a new anti-hate speech law — one that would create a new obligation on the platforms to take down incitements to violence, true threats, deliberate and intentional harassment, and perhaps even good old-fashioned hate speech. The First Amendment might even allow some of these bans.

What conservatives really seem to want, meanwhile, is something more like a version of the “fairness doctrine” adapted for social media. (Ignore the fact that conservatives used to insist that the fairness doctrine was an unconstitutional left-wing conspiracy to destroy talk radio.) In other words, they want the platforms to adopt a different kind of content moderation policy, one where the platform would aim to be scrupulously fair in permitting all sides to say what they want, without being labeled “hate speech” or “misinformation.” That law might or might not be constitutional, depending on how it is done. But, to repeat, abolishing 230 certainly doesn’t get you there.

Everyone is, in short, currently asking for the wrong thing. Which makes it worth asking: Why?

One reason is that this area is confusing, and the idea of making tech “responsible” does sound good. There are, as I discuss below, ways in which it should be. Also, as described below, the mere threat of 230 repeal serves purposes of its own. But I think that, at its most cynical, the repeal-230 campaign may just be about inflicting damage. Repealing 230 would inflict pain, through private litigation, not just on big tech but on the entire tech sector.

We don’t like you; we want you to suffer. Very 2020.

Lawyerly Caveats and Notes

  1. Okay, not everyone is asking for the wrong thing. I hasten to add that there have been some thoughtful and more subtle ideas about how to amend Section 230, by scholars like Danielle Citron and Olivier Sylvain, whose proposals would strip immunity for the deliberate inducement of illegal conduct, for failing to run a decent content moderation program, or for effectively using your users as a shield. But that’s different from abolishing the immunity.
  2. The Trump / conservative campaign against Section 230 is actually a bit more complex, and at times unconstitutional, than I’ve described. Trump and his allies know that repealing Section 230 would be painful and expensive for the tech platforms. Hence, he has tried to use the threat of removing Section 230 as a pain point, a means of gaining leverage, in exchange for an informal promise not to downgrade, label, or censor the misinformation published by him or his allies. In other words, he wants to punish the sites for their labeling and content moderation practices — and that retaliation is, by itself, a violation of the First Amendment. So when you see Trump or other conservatives calling for a Section 230 repeal, sometimes it is just an effort to inflict pain to try to get the platforms to do what they want.
  3. To continue on this line, it is possible that what conservatives want is not to repeal 230, but to explicitly condition immunity on “fair and balanced” content moderation — i.e., a social media fairness doctrine backed up by a threat of immunity-stripping. Whether that would be constitutional is beyond the scope of this piece.
  4. I’m not against all Section 230 reform. As I suggested in note 1, there is a good argument, presented best by Danielle Citron, that Section 230 immunity has been judicially interpreted to go too far. Some sites, for example, use their users as a shield for what they want to do — for example, a site that clearly and knowingly facilitates illegal weapon sales might operate under the fiction that its users are the ones doing the trading and that it bears no responsibility for any violation of the law. Section 230 wasn’t meant to be, and shouldn’t be, a way to use your users to evade the law. Hence it seems logical to deny immunity to sites that “solicit or induce illegal behavior or unlawful content,” and to amend the law to do that.