
When the Ninth Circuit Court of Appeals ruled on a lawsuit against Google in 2021, Judge Ronald M. Gould stated his view of the tech giant’s most significant asset bluntly: “So-called ‘neutral’ algorithms,” he wrote, can be “transformed into deadly missiles of destruction by ISIS.”
According to Gould, it was time to challenge the boundaries of a little snippet of the 1996 Communications Decency Act known as Section 230, which protects online platforms from liability for the things their users post. The plaintiffs in this case, the family of a young woman who was killed during a 2015 Islamic State attack in Paris, alleged that Google had violated the Anti-terrorism Act by allowing YouTube’s recommendation system to promote terrorist content. The algorithms that amplified ISIS videos were a danger in and of themselves, they argued.
Gould was in the minority, and the case was decided in Google’s favor. But even the majority cautioned that the drafters of Section 230—people whose conception of the World Wide Web might have been limited to the likes of email and the Yahoo homepage—never imagined “the level of sophistication algorithms have achieved.” The majority wrote that Section 230’s “sweeping immunity” was “likely premised on an antiquated understanding” of platform moderation, and that Congress should reconsider it. The case then headed to the Supreme Court.
This month, the country’s highest court will consider Section 230 for the first time as it weighs a pair of cases—Gonzalez v. Google and another against Twitter—that invoke the Anti-terrorism Act. The justices will seek to determine whether online platforms should be held accountable when their recommendation systems, operating in ways that users can’t see or understand, aid terrorists by promoting their content and connecting them to a broader audience. They’ll consider whether algorithms, as creations of a platform like YouTube, are something distinct from any other aspect of what makes a website a platform capable of hosting and presenting third-party content. And, depending on how they answer that question, they could transform the internet as we currently know it, and as some people have known it for their entire lives.
The Supreme Court’s choice of these two cases is surprising, because the core issue seems so obviously settled. In the case against Google, the appellate court referenced a similar case against Facebook from 2019, regarding content created by Hamas that had allegedly encouraged terrorist attacks. The Second Circuit Court of Appeals decided in Facebook’s favor, although, in a partial dissent, then–Chief Judge Robert Katzmann admonished Facebook for its use of algorithms, writing that the company should consider not using them at all. “Or, short of that, Facebook could modify its algorithms to stop them introducing terrorists to one another,” he suggested.
In both the Facebook and Google cases, the courts also reference a landmark Section 230 case from 2008, filed against the website Roommates.com. The site was found liable for encouraging users to violate the Fair Housing Act by presenting them with a survey that asked whether they preferred roommates of certain races or sexual orientations. By prompting users in this way, the court reasoned, Roommates.com “developed” the information and thus directly caused the illegal activity. Now the Supreme Court will evaluate whether an algorithm develops information in a similarly meaningful way.
The broad immunity outlined by Section 230 has been contentious for decades, but has attracted special attention and increased debate in the past several years for various reasons, including the Big Tech backlash. For both Republicans and Democrats seeking a way to check the power of internet companies, Section 230 has become an appealing target. Donald Trump wanted to get rid of it, and so does Joe Biden.
Meanwhile, Americans are expressing harsher feelings about social-media platforms and have become more articulate in the language of the attention economy; they’re aware of the possible radicalizing and polarizing effects of websites they used to consider fun. Personal-injury lawsuits have cited the power of algorithms, while Congress has considered efforts to regulate “amplification” and compel algorithmic “transparency.” When Frances Haugen, the Facebook whistleblower, appeared before a Senate subcommittee in October 2021, the Democrat Richard Blumenthal remarked in his opening comments that there was a question “as to whether there is such a thing as a safe algorithm.”
Though ranking algorithms, such as those used by search engines, have historically been protected, Jeff Kosseff, the author of a book about Section 230 called The Twenty-Six Words That Created the Internet, told me he understands why there is “some temptation” to say that not all algorithms should be covered. Sometimes algorithmically generated recommendations do serve harmful content to people, and platforms haven’t always done enough to prevent that. So it might feel helpful to say something like You’re not liable for the content itself, but you are liable if you help it go viral. “But if you say that, then what’s the alternative?” Kosseff asked.
Maybe you should get Section 230 immunity only if you put every single piece of content on your website in precise chronological order and never let any algorithm touch it, sort it, organize it, or block it for any reason. “I think that would be a pretty bad outcome,” Kosseff said. A site like YouTube—which hosts millions upon millions of videos—would probably become functionally useless if touching any of that content with a recommendation algorithm could mean risking legal liability. In an amicus brief filed in support of Google, Microsoft called the idea of removing Section 230 protection from algorithms “illogical,” and said it would have “devastating and destabilizing” effects. (Microsoft owns Bing and LinkedIn, both of which make extensive use of algorithms.)
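To make that distinction concrete, here is a minimal sketch, in Python, of the two kinds of feeds Kosseff is contrasting; the Post fields and the predicted_interest score are invented for illustration and do not correspond to any actual platform’s code.

```python
from dataclasses import dataclass
from datetime import datetime

# Toy example only: invented fields, not any real platform's data model.
@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_interest: float  # stand-in for the output of some engagement model

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The "untouched" feed: every post, newest first, no curation of any kind.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def recommended_feed(posts: list[Post]) -> list[Post]:
    # The "algorithmic" feed: the platform's own scoring decides what surfaces first.
    return sorted(posts, key=lambda p: p.predicted_interest, reverse=True)
```

The second function is the dispute in miniature: the moment a platform chooses an ordering for you, rather than simply showing everything in sequence, it has arguably done something beyond hosting.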
Robin Burke, the director of That Recommender Systems Lab at the University of Colorado at Boulder, has a similar issue with the case. (Burke was part of an expert group, organized by the Center for Democracy and Technology, that filed another amicus brief in support of Google.) Last year, he co-authored a paper on “algorithmic hate,” which dug into possible causes for widespread loathing of recommendations and ranking. He provided, as an example, Elon Musk’s 2022 declaration about Twitter’s feed: “You are being manipulated by the algorithm in ways you don’t realize.” Burke and his co-authors concluded that user frustration, fear, and algorithmic hate may stem in part from “the lack of knowledge that users have about these complex systems, evidenced by the monolithic term ‘the algorithm,’ for what are in fact collections of algorithms, policies, and procedures.”
When we spoke recently, Burke emphasized that he doesn’t deny the harmful effects that algorithms can have. But the approach suggested in the lawsuit against Google doesn’t make sense to him. For one thing, it suggests that there is something uniquely bad about “targeted” algorithms. “Part of the problem is that that term’s not really defined in the lawsuit,” he told me. “What does it mean for something to be targeted?” There are a lot of things that most people actually do want targeted. Typing “locksmith” into a search engine wouldn’t be practical without targeting. Your friend recommendations wouldn’t make sense. You would probably end up listening to a lot of music you hate. “There’s not really a good place to say, ‘Okay, this is on one side of the line, and these other systems are on the other side of the line,’” Burke said. More importantly, platforms also use algorithms to find, hide, and minimize harmful content. (Child-sex-abuse material, for instance, is often detected through automated processes that involve complex algorithms.) Without them, Kosseff said, the internet would be “a disaster.”
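Burke’s “locksmith” example is easy to make concrete. Below is a made-up sketch, assuming a toy score that mixes text match with the user’s city; nothing here reflects how any real search engine actually ranks results, but it shows why “targeting” is hard to draw a line around.

```python
# Illustrative only: an invented scoring function, not any search engine's real ranking.
def score(result: dict, query: str, user_city: str) -> float:
    text_match = sum(word in result["title"].lower() for word in query.lower().split())
    near_user = 1.0 if result.get("city") == user_city else 0.0
    # "Targeting" here just means folding in something the system knows about the user.
    return text_match + near_user

results = [
    {"title": "24-Hour Locksmith", "city": "Denver"},
    {"title": "Locksmith Supply Wholesale", "city": "Tulsa"},
    {"title": "History of Locks", "city": "Boston"},
]

# Without the user's city, the first two results tie; with it, the nearby shop wins.
ranked = sorted(results, key=lambda r: score(r, "locksmith", "Denver"), reverse=True)
print([r["title"] for r in ranked])
```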
“I was really surprised that the Supreme Court took this case,” Kosseff told me. If the justices wanted an opportunity to reconsider Section 230 in some way, they’ve had plenty of those. “There have been other cases they denied that would have been better candidates.” For instance, he named a case filed against the dating app Grindr for allegedly enabling stalking and harassment, which argued that platforms should be liable for fundamentally bad product features. “This is a real Section 230 dispute that the courts are not consistent on,” Kosseff said. The Grindr case was unsuccessful, but the Ninth Circuit was convinced by a similar argument made by plaintiffs against Snap regarding the deaths of two 17-year-olds and a 20-year-old, who were killed in a car crash while using a Snapchat filter that shows how fast a vehicle is moving. Another case alleging that the “talk to strangers” app Omegle facilitated the sex trafficking of an 11-year-old girl is in the discovery phase.
Many cases arguing that a connection exists between social media and specific acts of terrorism are also dismissed, because it’s hard to prove a direct link, Kosseff told me. “That makes me think this is kind of an odd case,” he said. “It almost makes me think that there were some justices who really, really wanted to hear a Section 230 case this term.” And for one reason or another, the ones they were most interested in were the ones about the culpability of that mysterious, misunderstood modern villain, the all-powerful algorithm.
So the algorithm will soon have its day in court. Then we’ll see whether the future of the web will be messy and confusing and sometimes dangerous, like its present, or totally absurd and honestly kind of unimaginable. “It would take an average user approximately 181 million years to download all data from the web today,” Twitter wrote in its amicus brief supporting Google. A person may think she wants to see everything, in order, untouched, but she really, really doesn’t.