Eugene Volokh

Journal of Free Speech Law: "Content Moderation in Practice," by Laura Edelson

The article is here; here is the Introduction:

Almost all platforms for user-generated content have written policies around what content they are and are not willing to host, even if these policies are not always public. Even platforms explicitly designed to host adult content, such as OnlyFans, have community guidelines. Of course, content policies can vary widely from platform to platform: platforms differ on everything from what content they do and do not allow, to how vigorously they enforce their rules, to the mechanisms for enforcement itself. Nevertheless, nearly all platforms have two sets of content criteria: one set of rules setting a minimum floor for what content the platform is willing to host at all, and a more rigorous set defining standards for advertising content. Many social-media platforms also have additional criteria for what content they will actively recommend to users, which differ from their more general standards for what content they are willing to host at all.

These differences, which exist in both policy and enforcement, create vastly different user experiences of content moderation in practice. This chapter will review the content-moderation policies and enforcement practices of Meta's Facebook platform, YouTube (owned by Google), TikTok, Reddit, and Zoom, focusing on four key areas of platforms' content-moderation policies and practices: the content policies as they are written, the context in which platforms say those rules will be enforced, the mechanisms they use for enforcement, and how platforms communicate enforcement decisions to users in different scenarios.

Platforms usually outline their content-moderation policies in their community guidelines or standards. These documents are broad, generally covering both the kinds of actions users can take on the platform and the content that can be posted, and they often also describe the context in which rules will be enforced. Many platforms additionally provide information about the enforcement actions they may take against content that violates the rules, though details about the consequences for users who post such content are typically sparse.

More detail is typically available about platforms' mechanisms for enforcement. Platforms can enforce policies manually, by having human reviewers check content for compliance directly, or they can use automated methods to identify violating content. In practice, many platforms take a hybrid approach, using automated means to flag content that may need additional human review. Whether their approach is primarily manual or primarily automated, platforms face a further choice about what triggers enforcement of their rules: they can enforce their content-moderation policies either proactively, by looking for content that violates policies, or reactively, by responding to user complaints about violating content.
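
To make that hybrid, proactive-versus-reactive workflow concrete, here is a minimal sketch in Python. It is not taken from the article; the thresholds, names, and stub classifier are hypothetical, and it shows only the general shape of a pipeline in which automation resolves clear cases and routes uncertain or user-reported content to human review.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Trigger(Enum):
    PROACTIVE_SCAN = auto()   # platform-initiated check, e.g. at posting time
    USER_REPORT = auto()      # reactive review prompted by a user complaint


@dataclass
class Post:
    post_id: str
    text: str


def automated_score(post: Post) -> float:
    """Hypothetical classifier returning an estimated probability of a policy violation."""
    # A real system would call a trained model; this stub exists only to make the sketch runnable.
    return 0.0


def moderate(post: Post, trigger: Trigger) -> str:
    """Hybrid pipeline: automation handles clear cases, uncertain ones go to human review."""
    score = automated_score(post)
    if score > 0.95:
        return "remove"              # high-confidence violation handled automatically
    if score > 0.60 or trigger is Trigger.USER_REPORT:
        return "queue_for_human"     # uncertain or user-reported content gets a human look
    return "allow"
```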

Platforms also have a range of actions they can take regarding content found to be policy-violating. The bluntest tool they can employ is simply to take the content down. A subtler option involves changing how the content is displayed by showing the content with a disclaimer or by requiring a user to make an additional click to see the content. Platforms can also restrict who can see the content, limiting it to users over an age minimum or in a particular geographic region. Lastly, platforms can make content ineligible for recommendation, an administrative decision that might be entirely hidden from users.
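
Purely as an illustration (the names below are hypothetical, not any platform's own terminology), that menu of enforcement options can be summarized as a small taxonomy:

```python
from enum import Enum, auto


class EnforcementAction(Enum):
    REMOVE = auto()           # take the content down entirely
    ADD_DISCLAIMER = auto()   # display the content behind a label or warning
    CLICK_THROUGH = auto()    # require an extra click (interstitial) before viewing
    AGE_RESTRICT = auto()     # show only to users over an age minimum
    GEO_RESTRICT = auto()     # show only in a particular geographic region
    NO_RECOMMEND = auto()     # leave the content up but exclude it from recommendations


# Of these, only NO_RECOMMEND requires no visible change to the post itself,
# which is why that decision may be entirely hidden from users.
```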

Once a moderation decision is made, either by an automated system or by a human reviewer, platforms have choices about how (and whether) to inform the content creator about the decision. Sometimes platforms withhold notice in order to avoid negative reactions from users, though certain enforcement actions are hard or impossible to hide. In other instances, platforms may wish to keep users informed about actions they take either to create a sense of transparency or to nudge the user not to post violating content in the future.

