The article, "Beyond the Editorial Analogy: First Amendment Protections for Platform Content Moderation After <i>Moody v. NetChoice</i>," is here; the Introduction:
Over the past several decades, a combination of a laissez-faire regulatory environment and Section 230's statutory protections for platform content-moderation decisions has mostly foreclosed the development of First Amendment doctrine on platform content moderation. But the conventional wisdom has been that the First Amendment would protect most platform operations even if this regulatory shield were stripped away. The simplest path to this conclusion follows what we call the "editorial analogy," which holds that a platform deciding what content to carry, remove, promote, or demote is in basically the same position—with the same robust First Amendment protections—as a newspaper editorial board considering which op-eds to carry.
While formally appealing, this analogy operates at such a high level of abstraction that one might just as plausibly characterize platforms as more akin to governments—institutions whose power over speech requires democratic checks rather than constitutional protection. These competing analogies point in opposite directions: one treats platforms as democracy-enhancing speakers deserving autonomy; the other as institutional censors warranting regulation.
A circuit split over which analogy to follow prompted the Supreme Court's decision last Term in <i>Moody v. NetChoice, LLC</i>. The Eleventh Circuit had invalidated Florida's content-moderation law as an unconstitutional interference with platforms' editorial discretion. The Fifth Circuit upheld Texas's similar law based on the traditional understanding that common carriers—here, social-media platforms—are appropriately subject to anti-discrimination requirements.
The Court found both of these stories too tidy.
All the Justices agreed that some platform moderation decisions are "editorial" and speech-like in nature. Yet they also recognized that the strength of that protection may vary across platforms, services, and moderation techniques. Unable to resolve these nuances on a sparse record, the Court remanded for more detailed factual development about how these laws would actually operate.
While <i>Moody</i> can fairly be characterized as a punt—merely postponing hard constitutional questions—its very reluctance to embrace categorical analogies marks a significant shift. Simply by characterizing direct regulation of platform content moderation as a complex question that requires close, fact-specific analysis, <i>Moody</i> upsets tech litigants' basic strategy and suggests a more nuanced First Amendment jurisprudence than many expected. Moreover, the Justices' various opinions offer revealing glimpses of why traditional analogies fail to capture platforms' novel characteristics.
This Article examines <i>Moody</i>'s implications for platform regulation. Part I traces the development of the First Amendment's protections for "editorial discretion" and the political controversies that prompted the Florida and Texas laws. Part II analyzes the Justices' competing approaches. Part III explores <i>Moody</i>'s immediate impact on litigation strategy, explaining how its skepticism toward facial challenges will reshape tech-industry resistance to regulation, while arguing that the decision leaves surprising room for carefully designed rules that can withstand more focused constitutional scrutiny. Part IV proposes moving beyond editorial analogies to focus on platforms' actual effects on user speech—an approach that we have endorsed elsewhere and that we believe better serves First Amendment values in the digital age.