The Supreme Court’s decision in two cases over state laws restricting social media content moderation could expand First Amendment protections for tech platforms even if Congress dilutes their other legal shields, according to legal experts closely watching the issue.
Companies that host user content on the internet, like Meta Platforms Inc., enjoy a broad shield under Section 230 of the 1996 Communications Decency Act, which protects them from liability for that content. Lawmakers who want such platforms to rein in harmful content have threatened to repeal the section and force stricter moderation of what gets uploaded.
But the court’s decision last week, which remanded the Florida and Texas cases to lower courts, opens the door to broader, more fundamental cover from the First Amendment, even as the ruling expressly avoids declaring social media posts to be free speech.
“At the end of the day, a lot of the content that’s protected by Section 230 would also be protected by the First Amendment, and that includes choices made by social media services to take down or not take down particular content,” said Samir Jain, vice president of policy at the Center for Democracy & Technology.
“What the court is saying here is that those are protected by the First Amendment,” Jain said in an interview, referring to how companies moderate content on their platforms. “And that would be true even if Section 230 didn’t exist.”
The Texas and Florida laws were part of a pushback against perceived censorship of conservative views by tech companies, including Meta, Google parent Alphabet Inc. and others. The laws required the platforms to offer a detailed explanation and an appeals process when users or their content were blocked. Tech industry groups sued to block the laws.
The U.S. Court of Appeals for the 11th Circuit blocked parts of the Florida law on First Amendment grounds, while the 5th Circuit upheld the Texas law but kept its enforcement on hold pending appeal. Both cases were appealed to the U.S. Supreme Court.
Justice Elena Kagan criticized the 5th Circuit decision that upheld the Texas law, writing for a six-justice majority that social media content moderation is free speech protected by the First Amendment to the Constitution.
“Deciding on the third-party speech that will be included in or excluded from a compilation — and then organizing and presenting the included items — is expressive activity of its own,” Kagan wrote.
Newsroom or town square
Although privacy advocates opposed the Texas and Florida laws, some were alarmed that the majority opinion likened the decisions of social media companies to those made in newsrooms.
Kagan’s views are “disappointing, because it analogizes social media platforms to the editorial work of newspapers,” said Fordham Law professor Zephyr Teachout, a senior adviser at the American Economic Liberties Project.
“As we argued in our amicus brief, and as noted in today’s concurring opinions, social media platforms are more like town squares,” Teachout said in a statement. “The First Amendment is not a shield for censorship and discrimination in the town square, and it shouldn’t protect against discrimination and targeting by opaque algorithms.”
Expanding First Amendment protections to cover content moderation and curation by tech companies could extend those protections even to cases where no human judgment is involved, according to Tim Wu, a law professor at Columbia University who previously served as a senior White House official on tech policy.
“The next phase in this struggle will presumably concern the regulation of artificial intelligence,” Wu wrote in a July 2 op-ed in The New York Times. “I fear that the First Amendment will be extended to protect machine speech — at considerable human cost.”
The algorithms tech companies currently use to determine which posts stay up and which are taken down are merely automated versions of human choices, Jain argued.
Jain offered the example of computer code that screens users’ posts and flags them for terms and phrases the platforms consider hateful speech. “Even though it’s an algorithm, it’s in some ways implementing a human decision,” he said.
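A minimal sketch of what such keyword screening might look like; the term list and function name here are illustrative assumptions, not any platform’s actual code:

```python
# Illustrative only: a keyword filter that automates a human policy choice.
# The blocklist stands in for terms a platform's policy team deems hateful.
BLOCKED_TERMS = {"example_slur", "example_threat"}  # hypothetical placeholders

def flag_post(text: str) -> bool:
    """Return True if the post contains any term humans decided to prohibit."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not BLOCKED_TERMS.isdisjoint(words)

posts = ["a perfectly ordinary update", "a reply containing example_slur"]
print([p for p in posts if flag_post(p)])  # ['a reply containing example_slur']
```

The code makes no judgment of its own; every removal it triggers traces back to the human choice of what to put on the list, which is Jain’s point.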
In the context of protecting Americans’ data privacy, some members of Congress have been mulling ways to curb the broad liability protections that tech companies enjoy because of Section 230.
In recent months, top House lawmakers including Energy and Commerce Chair Cathy McMorris Rodgers, R-Wash., and ranking member Frank Pallone Jr., D-N.J., have held hearings on sunsetting such protections by the end of 2025.
“As written, Section 230 was originally intended to protect internet service providers from being held liable for content posted by a third-party user or for removing truly horrific or illegal content,” Rodgers said at a committee hearing in May. But giant social media platforms have been “exploiting this to profit off us and use the information we share to develop addictive algorithms that push content onto our feeds.”
Some fear that the emergence of powerful artificial intelligence systems capable of making decisions without human direction will complicate the question of First Amendment protections for content moderation.
According to Jain, Justice Amy Coney Barrett, in her concurring opinion, raised the question of a future in which tech companies develop an artificial intelligence tool that determines on its own what is hateful and what isn’t, and whether in that case a human is really making an expressive choice protected by the First Amendment.
“That’s a question the [justices] don’t answer,” Jain said.
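The technical distinction behind Barrett’s question is that a learned classifier’s criteria emerge from training data rather than from rules a human wrote down. A hedged sketch of that contrast, with a toy model and made-up labels standing in for any real system:

```python
# Hypothetical contrast: a learned classifier whose decision criteria come
# from training data, not from rules a human explicitly wrote down.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set; real platforms would use far larger labeled corpora.
texts = ["friendly greeting", "example_slur attack",
         "nice photo", "example_threat message"]
labels = [0, 1, 0, 1]  # 1 = flagged as hateful (human-labeled examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model now flags new posts based on patterns it inferred itself.
print(model.predict(["an example_slur reply"]))  # likely [1]
```

Unlike the keyword filter above, no one specified why a given post gets flagged; the model inferred its own patterns from the labeled examples, which is what makes the question of whose expressive choice is at work harder to answer.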