Reading the Fifth Circuit's decision in NetChoice v. Paxton brings me back to the old days of the Volokh Conspiracy. A little bit of context: Back when we were at volokh.com, we introduced open comment threads. For a few years, I spent over an hour a day, every day, moderating Volokh Conspiracy comment threads. I stopped after we moved to The Washington Post in 2014, where comment moderation was up to the Post. I'm very glad I don't do comment moderation anymore. But my comment moderation experience at volokh.com left some lasting impressions.
I think three of those impressions might be relevant to thinking about NetChoice.
First: It is a strange rule of human nature that most people who are moderated in an online forum feel, with great certainty, that they are being censored for their beliefs. Few people think they just went too far, or that they broke the rules. Moderation is usually seen as the fruit of bias. So liberal commenters were positive I deleted their comments, or even banned them, because this was a conservative blog and we were afraid that liberal truths would pierce through the darkness and expose the false claims of conservatives. And conservative commenters were completely confident that I deleted their comments, or even banned them, because we were liberals trying to prevent conservative truths from exposing liberal lies. It just happened all the time. Moderation led to claims of censorship as surely as night follows day.
Second: Content moderation always carries a message from the moderator. My goal in moderating Volokh Conspiracy comments was just to keep discussions civil. My thinking was that if you can keep comments civil, you will not only encourage better comments but also entice better commenters. And I think experience proved that correct. For a few years there, moderated Volokh comment threads were pretty insightful places to look for perspectives on our posts. But moderation always implies some sort of message. It implies some value or judgment that the site (or maybe just the primary moderator) holds and wants to advance. For example, when I was moderating out uncivil comments and commenters at volokh.com, I didn't care if an opinion was liberal or conservative. But my moderation still expressed a value: a belief in a marketplace of ideas, where we wanted the ideas to be expressed in a way that might persuade. That was the value we (or I) had. It's a process value, but still a value. Moderating was always an effort to further that underlying value.
Third: Perfect comment moderation is impossible, but you can't let the perfect be the enemy of the good. I wrote above that many moderated commenters believed they were being censored for their beliefs. A corollary is that many commenters had examples of comments from the other side that had remained up, apparently unmoderated, which to them proved the bias. If you deleted a comment as uncivil, it was common to hear howls of outrage that months ago jukeboxgrad had posted a substantially similar comment somewhere that was still up, so that under the principles of due process and the Magna Carta it would be despicable to moderate this comment now. The problem was scale. In those days we might have 20 posts a day, as there were a lot of short posts. An average post might get (say) 100 comments, with some getting many more. That was around 2,000 comments to wade through every day. You would have needed full-time moderators to moderate them all, with some sort of legal-like process for adjudicating individual comment moderation decisions. Moderated commenters often seemed to want that, and in some cases to demand it. But it was just impossible given our day jobs. Moderation was needed to make comment threads worth reading, but the sheer scale of comments meant that imperfect moderation was the best we could do.