
Twenty-five years ago, Congress passed a little-noticed law that shielded online platforms from liability for the content posted by users.

In the decades since, Section 230 of the Communications Decency Act, signed into law by President Bill Clinton on Feb. 8, 1996, has paved the way for the internet as we know it.

For the better: by enabling everything from unfiltered opinion in the comments sections of news sites to the phenomenon of social media, and by giving platforms the option to moderate that content.


And for the worse: by facilitating the mass distribution of disinformation, hate speech, and other objectionable content. 

“It affects every aspect of the internet from online safety to online shopping,” says Laurel Lehman, policy analyst for Consumer Reports.

And now, as it celebrates its silver anniversary, Section 230 finds itself under attack from across the political spectrum, with legislators and others ready to revise the law, and with it the digital lives of millions of U.S. consumers.

Here’s what you need to know about this important provision and its uncertain future.

What Is Section 230?

At the heart of Section 230, you’ll find 26 simple words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

At the time it was drafted, the law effectively shielded services such as AOL, Prodigy, and CompuServe from liability for comments posted by members on their message boards. That protection led to the kind of open exchange of information and opinion found on those forums then, and on Facebook, Twitter, YouTube, and other online platforms today.

It also makes it possible for e-commerce sites such as Amazon and Yelp to host customer reviews without fear of reprisal from disgruntled manufacturers.

And, as Lehman says, it protects individual citizens as well. Without the provision, you could be sued for inadvertently forwarding an e-mail with specious claims or for moderating (or not moderating) the discussion in a Facebook group.

Essentially, Section 230 treats online platforms less like a newspaper, which can be sued for libel if it prints something that’s harmful and untrue, and more like a neighborhood newsstand or bookstore, which is free to sell a wide range of publications without vetting every last word.

It allows Facebook to safely share comments, likes, and photos from 1.82 billion people a day without having to eyeball each and every post.

Why Is Section 230 Controversial?

To let online platforms preserve community standards, Section 230 allows them to moderate content, removing, suppressing, or flagging posts considered to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” without exposing themselves to additional liability risks.

No one wants platforms where individuals can peddle child pornography or engage in sex trafficking, for example.

And so the law leaves room for Facebook, Twitter, and YouTube to safeguard the standards accepted by their users, without holding the platform accountable for that moderation in the same way a news source like “60 Minutes” or the New York Times would be held accountable.

The problem: Where do you draw the line on that moderation?  

Section 230’s provisions were designed for a time when internet platforms were smaller and had a far less influential presence in our daily lives. Its framers could not have foreseen sophisticated misinformation campaigns (at times launched by foreign foes), revenge porn, or political leaders who communicate with their constituents through social media.

“One of the premises of 230 was that there would be vibrant competition all over the internet,” says John Bergmayer, legal director of Public Knowledge, an advocacy group based in Washington, D.C. “Section 230 was designed to let different platforms have different approaches about what content they choose to host, how they moderate that content, and what they take down. But we’ve lost a lot of that due to the rise of these major platforms.”

Now that Facebook, Google, and other tech giants have the power to sway not just the economy but also perhaps even election outcomes, we’ve reached a point where both Sen. Ted Cruz, R-Texas, and House Speaker Nancy Pelosi, D-Calif., use similar language to express concerns over the way Section 230 protects those companies. But that’s where the common ground ends.

Cruz, for example, has argued that platforms moderate too aggressively, while Pelosi and her allies take the opposite tack, arguing that platforms protected by Section 230 don’t go far enough to moderate posts filled with conspiracy theories and politically charged misinformation.

As policy experts note, platforms exist in a digital ecosystem that makes it hard to do the right thing. You’re damned if you moderate content and damned if you don’t.

While Section 230 permits that moderation, it does little to actually encourage it.

“Harmful content is often crafted in a way that makes it popular with readers and thus profitable,” says CR’s Lehman. “And platforms currently don’t have the incentive to prioritize consumer well-being over the bottom line.” 

Which is why Section 230 currently seems to be under attack from all sides. 

How Can Section 230 Be Improved?

At the moment, no fewer than 23 bills that would amend Section 230 have been introduced in Congress, and yet more wait in the wings.

While some are bipartisan, there is little broad consensus among them beyond the general feeling that Big Tech platforms currently get too much protection from Section 230.

(To learn more about the various proposals—and get analysis on key concerns from CR’s advocates—read this post from policy analyst Laurel Lehman.)

The proposed amendments fall into three broad categories.

The first, which includes the PACT Act introduced last June by Sens. Brian Schatz, D-Hawaii, and John Thune, R-S.D., would reduce the scope of the protections offered to platforms by the law or require platforms to change their behavior to keep those protections. By exposing the companies to more litigation, the thinking goes, you encourage them to protect consumers from potentially harmful or discriminatory content. The challenge is to strike that balance without encouraging overmoderation of content from marginalized communities.

The second approach, which includes the Online Freedom and Viewpoint Diversity Act, proposed by a group of senators led by Roger Wicker, R-Miss., would restrict moderation and fact checking to promote a freer flow of ideas. 

“It’s really hard to see where the compromise is going to come from when their operating assumptions about what’s wrong with the platforms are directly opposite each other,” says Bergmayer at Public Knowledge. “There aren’t compatible policy goals.”

A third group of proposals, which includes a bill proposed by Sen. Lindsey Graham, R-S.C., would essentially eviscerate Section 230. Those proposals seem to be crafted to get Big Tech’s attention more than to actually advocate a return to a digital Wild West. But they also highlight the way Section 230, despite its flaws, helps to bring some order to the online world.

“Section 230 made the internet what it is today—for better and for worse,” says CR’s Lehman. “The recent scrutiny highlights both the wonders and the failures of the internet information ecosystem that Section 230 made possible. The challenge facing policymakers in 2021 is striking the right balance to ensure that the law makes life online better, not worse, for the next 25 years.”