Section 230 Heads to the Supreme Court

Corbin Barthold

In the summer of 1995, then-Rep. Chris Cox (R–Calif.) read an article that left him unsettled. A New York trial court had ruled that Prodigy, one of the early online service providers, could be held liable for allegedly defamatory statements posted on one of its bulletin boards. What made the decision "surpassingly stupid," as Cox would later describe it, was that Prodigy had incurred this legal exposure by doing what we now call content moderation. Prodigy was being penalized for trying to provide a family-friendly product. To eliminate this "moderator's dilemma"—a legal regime in which an online forum that moderates some content becomes legally responsible for all the content it hosts—Cox and then-Rep. Ron Wyden (D–Ore.) introduced the bill that eventually became what's now known as Section 230.

With limited exceptions, Section 230 protects platforms—from large websites and apps to individual blogs and social media accounts—from liability for disseminating speech created by others. That rule allowed the internet to flourish. By pinning culpability for illegal material squarely on the person who created it, Section 230 let internet services grow and profit by offering spaces for user-generated content. And by filling the internet with distinct speech environments, those services enabled a wide array of people to find places online where they felt comfortable speaking. Section 230 was a boon for free speech, for the internet, and for free speech on the internet.

The Supreme Court has never heard a Section 230 case—until now. Earlier this month, the justices agreed to review Gonzalez v. Google, in which the plaintiffs argue that YouTube's "targeted recommendation" of videos falls outside the Section 230 shield.

All the major platforms do "targeted recommending" of one sort or another: On the vast modern internet, curation of information is an essential service. When they examine Section 230 next year, the justices could end the internet as we know it.

Why Section 230?

A few years before the Prodigy decision, another early online service provider, CompuServe, had defeated a similar suit. The court there had concluded that CompuServe could not be held responsible for its users' speech because it was simply a distributor of others' material. The key point was that CompuServe did not know, and made no effort to discover, what was said in its forums.

Prodigy, by contrast, had monitored its service in an effort to find and remove content that ran against "the culture of the millions of American families" that it "aspire[d] to serve." A distributor, such as a bookstore or library, faces liability for others' speech only if it knows or should know of the speech's illegality. A publisher, such as a book or newspaper company, faces much stricter liability for the speech it prints. By making a "choice" to "gain the benefits of editorial control," the New York court declared, Prodigy had transformed itself from a distributor into a publisher, thereby exposing itself to "greater liability than CompuServe and other computer networks that make no such choice."

Cox and Wyden wanted to protect services in Prodigy's position from publisher liability. Section 230's pivotal provision states: "No provider or user of an interactive computer service"—a term defined to include online platforms of all stripes—"shall be treated as the publisher or speaker of any information provided by another information content provider." Usually, on the internet, only the initial speaker of a statement can be held liable for what the statement says.

Originally called the Internet Freedom and Family Empowerment Act, Cox and Wyden's bill was hitched, in the commotion of the legislative process, to a very different Senate bill, the Communications Decency Act, which sought in essence to ban pornography from the internet. Both bills were then passed as part of the Telecommunications Act of 1996. A year later the Supreme Court struck down the Senate's anti-porn regulation as a violation of the First Amendment. Cox and Wyden's deregulatory measure survived this divorce unscathed. It even kept the name of its unconstitutional former spouse—to this day it is misleadingly known as Section 230 of the Communications Decency Act.

A few months after the Supreme Court invalidated the "true" Communications Decency Act, Judge J. Harvie Wilkinson, an accomplished jurist on the U.S. Court of Appeals for the Fourth Circuit, issued the first major decision on Section 230. Under Section 230, says Zeran v. America Online (1997), "lawsuits seeking to hold [an online] service provider liable for its exercise of a publisher's traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content—are barred."

As Wilkinson saw it, Section 230 cannot protect the function of publishing without also protecting the function of distributing, because the greater protection includes the lesser. To see why, suppose that someone objects to a piece of content hosted by a platform. The platform then knows about the content and, knowing about it, must decide what to do with it. It is put to the "choice" of "editorial control." In working out whether to leave the content up, downrank it, label it, or take it down, the platform approaches the content as a publisher would, and in so doing enjoys the protection of Section 230.

Zeran proved enormously influential. Following its lead, courts have held that Section 230 protects platforms from liability for chatroom remarks, social media posts, forwarded emails, dating profiles, product and employer reviews, business location listings, and more. Thanks to Section 230, the internet's most popular destinations—Google, Facebook, YouTube, Reddit, Wikipedia—are filled with user-generated content.

Fair and Unfair Criticism

Courts have set some limits around Section 230. One prominent decision let the plaintiffs sue a website for expressly inviting users to supply input that violated fair-housing laws. By asking specific questions about legally protected categories such as sex and sexual orientation, the court held, the website had become the "provider," at least in part, of the resulting content. Another ruling opened a platform to liability for offering a "speed filter" feature that supposedly encouraged reckless driving. The basis of the suit was not the high speeds the plaintiffs' children had posted with the filter before dying in a car wreck, but rather the app's allegedly negligent product design.

Yet in lawsuits that seek to hold a platform liable for the substance of third-party speech, Section 230 has held firm. Countless victims of online abuse and harassment have found themselves out of luck.

Consider the case of Kenneth Zeran. On April 25, 1995, six days after the Oklahoma City bombing, he started receiving threatening phone calls, some of which included death threats. Someone, it turned out, had posted on an AOL bulletin board an ad offering "Naughty Oklahoma T-Shirts" ("Visit Oklahoma," said one; "it's a blast") and bearing Zeran's home phone number. Further posts appeared (bumper stickers and keychains were thrown in the mix), and the calls increased to hundreds a day. Zeran could not determine who had created the posts; because of Section 230, he had no recourse against AOL for failing to take his plight seriously.

Does Section 230 enable this sort of misbehavior? Is there more racism and misogyny online because of it? That's what some on the left think. President Joe Biden recently accused platforms of "spreading hate and fueling violence," and he vowed to "get rid" of their "immunity."

Whether social media generates radicalism is a fraught empirical question. Regardless, there is little reason to expect that removing Section 230 would improve matters. Recall the moderator's dilemma: Before Section 230, a platform could limit its liability either by doing virtually no content moderation or by doing lots of it. Remove Section 230 and the dilemma returns. Some services would reach for distributor status, while others would embrace publisher status. On the "distributor" platforms, hate speech and misinformation would flourish like never before. The "publisher" platforms, meanwhile, would remove content the moment anyone asserts that it's defamatory. In practice, that means these services would expel people who dare to accuse the powerful of discrimination, corruption, or incompetence.

And let's not forget the Constitution. Most of the speech that Biden thinks is "killing people" is legal and protected under the First Amendment. Without Section 230, platforms might remove more "lawful but awful" speech in order to avoid a flurry of lawsuits. Then again, they might not, for the lawsuits complaining about "lawful but awful" speech are not the ones that would succeed. Without Section 230, which gets such suits dismissed at the pleading stage, they would advance further in litigation, probably to discovery and then summary judgment, but they would still fail in the end. Erasing Section 230 is therefore likely to entrench large platforms (which can endure the cost of seeing lots of doomed lawsuits through to the finish) at the expense of new entrants (which can't).

Many on the right want to scrap Section 230 too. Often their strategy is to claim that the law says things it does not say. Sen. Ted Cruz (R–Texas) was an early proponent of the "platform versus publisher" myth—the notion that "platforms" must exhibit viewpoint neutrality (as measured…somehow?) or else be deemed "publishers" lacking Section 230 immunity. The law contains no such distinction. Indeed, as we have seen, Section 230 encourages publisher-like behavior.

Speaking only for himself, in a separate opinion issued two years ago, Justice Clarence Thomas argued that courts have "relied on policy and purpose arguments" to give Section 230 too broad a scope. He went on to propose that Section 230 does not shield platforms from distributor liability. It is true that in considering Section 230, courts often invoke policy and purpose. But the text of the law is broad: An immunity from being treated as "the publisher" of third-party content is a big immunity indeed. Moreover, Zeran got it right: Being protected as a publisher logically encompasses being protected as a distributor. Otherwise Section 230 would be nothing more than a notice-and-takedown scheme—something that would have left companies like Prodigy almost no better off than before.

If there were any doubt that Section 230 is not so narrow, we could dispel it with a look at policy and purpose. In this instance the statutory policy and purpose were passed as part of the law. Section 230 itself states that it aims "to promote the continued development of the Internet" and "to preserve the vibrant and competitive free market" that exists there.

Taking Section 230 for Granted

ISIS killed Nohemi Gonzalez in the November 2015 Paris attacks. There is no direct link between YouTube and Gonzalez's death, no evidence that YouTube was used to plan the attacks or recruit the attackers. Nonetheless, Gonzalez's family sued YouTube's owner, Google, claiming that YouTube had hosted ISIS recruitment videos around the time of the attacks. The trial court applied Section 230 and dismissed the suit. The U.S. Court of Appeals for the Ninth Circuit affirmed. Now the case is headed to the Supreme Court.

The plaintiffs contend that their lawsuit is not about the terrorist videos themselves, but about YouTube recommending those videos to users. But recommending content to users is classic publisher behavior. It's what a newspaper does when it puts a story on page A1 instead of page D6. To hold a platform liable for how it presents user-generated content is to treat it as a publisher—exactly what Section 230 forbids. If ISIS had not uploaded videos to YouTube, YouTube would have had no terrorist content to serve up. This is a tell that the plaintiffs' suit is really about the user-generated content and is a loser under Section 230, notwithstanding the tragedy of Gonzalez's death. That YouTube neither solicited nor intended to recommend terrorist content specifically should clinch the matter.

The Supreme Court typically takes no interest in statutory interpretation unless the courts of appeals disagree about the meaning of a statute; only then do the justices step in to resolve the dispute. When it comes to the "targeted recommendation" theory, two circuits, the Ninth and the Second, have held that Section 230 governs as normal, and none has disagreed. The Court has granted review in Gonzalez in the absence of a circuit split—an ominous sign that some of the justices have an itch to do mischief.

Those justices might wish to argue, as some dissenting judges in the courts of appeals have, that the exceptional "targeted-ness" of algorithmic recommendations makes them special. But targeting content at people is what publishers do. It's unclear why a platform should lose its Section 230 protection for targeting well. (Remember that Section 230 aspires "to promote the continued development of the Internet.") And if well-targeted information produces legal exposure, how is a YouTube video recommendation to be distinguished from a Google search result? Are the justices really so hostile to "Big Tech" that they'd be willing to run the risk of trashing search engines?

Section 230 has many detractors who insist that the internet would be better off without it. The critics are strangely self-assured about this. Their confidence is especially puzzling when one considers that some of them detest Section 230 for enabling speech (namely, hate speech and misinformation) while others loathe it for enabling "censorship" (that is, content moderation). The fact that so many believe so strongly that killing Section 230 will serve utterly disparate ends should give any serious person pause.

We've apparently lost track of how far we've come. The media used to talk at us. The internet gave us more power to talk with one another—to speak up, to collaborate, to protest, to be heard. Far too few people seem sufficiently worried that curtailing Section 230's support for user-generated content might destroy what makes the internet great.

Section 230 is almost as old as the commercial internet it safeguards. But it has worn pretty well. It has a proud past, and it remains important in the present. If the Supreme Court bleaches its future, that will be a shame.

