Content Moderation, Social Media, and the Constitution

By David Post

The Supreme Court now has before it three issues of profound importance for the future of Internet speech.

  • First up: How broad is the immunity, set forth in Section 230 of the Communications Decency Act, that protects Internet platforms against liability claims arising from content posted by third parties?
  • Second: To what extent does the 1st Amendment protect the content-moderation decisions made by those platforms?
  • And finally: To what extent may individual States impose controls over the content and conduct of Internet sites managed by out-of-State actors?

These are Big Questions for Internet law, and I'll have a great deal more to say about them over the next several weeks and months; consider this an introduction.

Regarding Section 230, as co-blogger Stewart Baker has already noted, the Court has agreed to review the 9th Circuit's decision in Gonzalez v. Google. The case arises out of the 2015 ISIS-directed murder of Nohemi Gonzalez in Paris, France. The plaintiffs seek to hold YouTube (owned by Google) secondarily liable, under the Anti-Terrorism Act (ATA) (18 U.S.C. § 2333), for damages arising from the murder:

"YouTube has become an essential and integral part of ISIS's program of terrorism. ISIS uses YouTube to recruit members, plan terrorist attacks, issue terrorist threats, instill fear, and intimidate civilian populations… Google's use of computer algorithms to match and suggest content to users based upon their viewing history [amounts to] recommending ISIS videos to users and enabling users to locate other videos and accounts related to ISIS, and by doing so, Google materially assists ISIS in spreading its message."

The 9th Circuit dismissed plaintiffs' claims, relying (correctly, in my view) on the immunity set forth in Section 230 (47 U.S.C. § 230(c)(1))—the "trillion-dollar sentence," as I have called it, or, in law prof Jeff Kosseff's phrase (and the title of his excellent book), "The Twenty-Six Words that Created the Internet":

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The impact of this immunity on the growth of Internet communications platforms cannot be overstated; it is hard to imagine what the entire social media ecosystem would look like if platforms could be held liable for hosted third-party content. But the Section 230 immunity has become very controversial—to put it mildly—over the last decade; many commentators and lawmakers, from the political left, right, and center, have proposed substantially narrowing, or even eliminating, the immunity, blaming it for everything from the proliferation of hate speech and fake news to the supposed suppression of political commentary from the right wing.

By now, Stewart Baker suggests, "everyone hates Silicon Valley and its entitled content moderators [and] its content suppression practices." Gonzalez, he continues, signals that "Big Tech's chickens are coming home to roost, … the beginning of the end of the house of cards that aggressive lawyering and good press have built for the platforms on the back of section 230."

Maybe.  I happen to be one of those people who do not "hate Big Tech's content moderation practices"—but I'll save my thoughts on that for a future analysis of the Gonzalez case.

The second set of cases (Moody v. NetChoice (Florida) and NetChoice v. Paxton (Texas)) raises a number of questions that are, if anything, of even greater significance for Internet speech than those the Court will be tackling in Gonzalez.

Florida and Texas have both enacted laws that, broadly speaking, prohibit social media platforms from engaging in viewpoint-based content removal or content moderation, and from "de-platforming" users based on their political views. (**1)

The 11th Circuit struck down Florida's law on First Amendment grounds—correctly, again in my view. The 5th Circuit, on the other hand, upheld the Texas statute against a similar First Amendment challenge. Cert petitions have been filed and, in light of the rather clear circuit split on a very important question of constitutional law, I predict that the Court will consolidate the two cases and grant certiorari.

The question at the heart of both cases, on which the two opinions reach opposite conclusions, is this:  Are the social media platforms engaged in constitutionally protected "speech" when they decide whose content, and what content, they will disseminate over their systems?

The 11th Circuit held that they are.

The government can't tell a private person or entity what to say or how to say it…. The question at the core of this appeal is whether the Facebooks and Twitters of the world—indisputably "private actors" with First Amendment rights—are engaged in constitutionally protected expressive activity when they moderate and curate the content that they disseminate on their platforms.

The State of Florida insists that they aren't, and it has enacted a first-of-its-kind law to combat what some of its proponents perceive to be a concerted effort by "the 'big tech' oligarchs in Silicon Valley" to "silenc[e]" "conservative" speech in favor of a "radical leftist" agenda…

We hold that it is substantially likely that social-media companies—even the biggest ones—are "private actors" whose rights the First Amendment protects, that their so-called "content-moderation" decisions constitute protected exercises of editorial judgment, and that the provisions of the new Florida law that restrict large platforms' ability to engage in content moderation unconstitutionally burden that prerogative. [emphasis added]

The 5th Circuit, on the other hand, in upholding the Texas statute (by a 2-1 majority, with Judge Southwick dissenting), held that the platforms are not engaged in "speech" at all when they make their content-moderation decisions (which the court labels "censorship"):

Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say. . . .

The Platforms contend that [the Texas statute] somehow burdens their right to speak. How so, you might wonder? The statute does nothing to prohibit the Platforms from saying whatever they want to say in whatever way they want to say it. Well, the Platforms contend, when a user says something using one of the Platforms, the act of hosting (or rejecting) that speech is the Platforms' own protected speech. Thus, the Platforms contend, Supreme Court doctrine affords them a sort of constitutional privilege to eliminate speech that offends the Platforms' censors. We reject the Platforms' efforts to reframe their censorship as speech….

It is undisputed that the Platforms want to eliminate speech—not promote or protect it. And no amount of doctrinal gymnastics can turn the First Amendment's protections for free speech into protections for free censoring….

We hold that [the Texas statute] does not regulate the Platforms' speech at all; it protects other people's speech and regulates the Platforms' conduct. [emphasis added]

The split seems pretty clear, and I'd be very surprised if the Court doesn't see it that way and grant cert to clear things up.

As if the 1st Amendment questions in the NetChoice cases weren't difficult and complicated enough, there's another significant issue lurking here that makes these cases even more intriguing and important. What gives the State of Texas the right to tell a Delaware corporation whose principal place of business is in, say, California, how to conduct its business in regard to the content it may (or must) publish? Doesn't that violate the principle that State power cannot be exercised extra-territorially? Doesn't the so-called "dormant Commerce Clause" prohibit the individual States from prescribing publication standards for these inter-State actors?

Those, too, are difficult and rather profound questions that are separate from the 1st Amendment questions raised by these cases, and I'll explore them in more detail in future posts.

Finally, one additional small-ish point: a rather interesting doctrinal connection between the statutory issues surrounding Section 230 in Gonzalez and the constitutional issues in the NetChoice cases.

The 5th Circuit, in the course of holding that content moderation by the social media platforms is not constitutionally protected "speech," wrote the following:

We have no doubts that [the Texas statute] is constitutional. But even if some were to remain, 47 U.S.C. § 230 would extinguish them. Section 230 provides that the Platforms "shall [not] be treated as the publisher or speaker" of content developed by other users. Section 230 reflects Congress's judgment that the Platforms do not operate like traditional publishers and are not "speak[ing]" when they host user-submitted content. Congress's judgment reinforces our conclusion that the Platforms' censorship is not speech under the First Amendment.

Pretty clever! Congress has declared that the platforms are not "speaking" when they host user content; therefore, what they produce is not protected "speech." The platforms, in this view, are trying to have their cake and eat it, too—"we're not a speaker or publisher" when it comes to liability, but "we are a speaker/publisher" when the question is whether the State can tell them what to do and what not to do.

It's clever, but too clever by half. Section 230 was actually—indisputably—Congress' attempt to encourage the sort of content moderation that the 5th Circuit has placed outside the ambit of the 1st Amendment. It was enacted, as the 5th Circuit panel itself recognizes, to overrule a lower court decision (Stratton Oakmont v. Prodigy) that had held an Internet hosting service (Prodigy) secondarily liable for defamatory material appearing on its site. The Stratton Oakmont court reasoned that Prodigy, precisely because it engaged in extensive content moderation, was acting like a traditional "publisher" of the third-party content on its site—exercising "editorial control" over that material—and should, like traditional "publishers," be held liable if that content was defamatory.

If engaging in content moderation makes you a "publisher" subject to defamation liability, the result, Congress recognized, would be a lot less content moderation, and Section 230 was designed to avoid that result. Not only does Section 230(b)(4) declare that the "policy of the United States" is to "remove [such] disincentives for the development and utilization of blocking and filtering technologies," it further provides (in Section 230(c)(2)) that

"No provider … of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."

Section 230 was expressly designed to encourage platform content moderation. The idea that Congress chose to effect that purpose by declaring content moderation and editorial control to be "not-speech" and thereby outside the protections of the 1st Amendment constitutes "doctrinal gymnastics" of the highest order.

*******************************

**1.  The two statutes are substantially similar, though they differ in their details.  The Florida law applies to "social media platforms," defined as

"Any information service, system, Internet search engine, or access software provider that provides or enables computer access by multiple users to a computer server, including an Internet platform or a social media site[;] does business in the state; and has annual gross revenues in excess of $100 million [OR] has at least 100 million monthly individual platform participants globally."

[The law as originally enacted, rather hilariously, expressly excluded any platform "operated by a company that owns and operates a theme park or entertainment complex," but after Disney executives made public comments critical of another recently enacted Florida law, the State repealed the theme-park-company exemption.]

The Florida law declares that social media platforms:

  • "may not willfully deplatform a candidate for office";
  • "may not censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast";
  • "must apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform"; and
  • "must categorize its post-prioritization and shadow-banning algorithms and allow users to opt out of them, and for users who opt out, the platform must display material in sequential or chronological order."

The Texas law is considerably broader: social media platforms "may not censor a user, a user's expression, or a user's ability to receive the expression of another person based on

  • (1) the viewpoint of the user or another person;
  • (2) the viewpoint represented in the user's expression or another person's expression; or
  • (3) a user's geographic location in this state or any part of this state."

 
