Businessweek
Mark Bergen

YouTube Went to War Against Terrorists, Just Not White Nationalists

Unlike many of her colleagues at YouTube, Tala Bardan doesn’t remember the company retreat in June 2017 as a nice long weekend. YouTube employees stayed at the Westin hotel in downtown Los Angeles, enjoyed a private Snoop Dogg performance, and took day trips to a nearby Harry Potter theme park. The drinks were free. As the partying began that Friday morning, though, Bardan was one of about a dozen unlucky workers that Chief Executive Officer Susan Wojcicki pulled into the hotel’s basement for a sobering meeting about the video site’s problem with terrorists.

Discussions about terrorists were nothing new to Bardan, who worked in a relatively junior position overseas watching violent videos in Arabic for the YouTube division that screened footage categorized as “VE,” company shorthand for violent extremism. (Tala Bardan is a pseudonym used to protect her identity, given her sensitive work.) In the meeting, a top engineer explained that YouTube had decided it would try to eliminate from its site the entire ideology that had given rise to groups such as Islamic State, the Sunni Muslim militant organization. The company would recode YouTube’s promotional algorithm to bury “inflammatory religious and supremacist content.” Policy staff would devise a list of 14 incendiary figures, all Muslim men, who would be banned no matter what they posted.

Bardan’s team was immediately assigned to enforce the new rules, working through the weekend while everyone else was partying. One teammate was awakened at 2 a.m. to deal with a particularly tricky video, fielding calls while squeezed in a hotel bathroom to avoid waking a colleague who was sharing the room.

The source of the urgency wasn’t a mystery. For several months advertisers had been boycotting the service after discovering their commercials appearing on videos of Islamic State allies, neo-Nazis, and other offensive material. The week before, three Islamist extremists had killed eight people in a bloody attack on London Bridge. Reports afterward revealed that one of the murderers drew inspiration from videos of controversial cleric Ahmad Musa Jibril he’d watched on YouTube.

The financial havoc for YouTube’s parent, Alphabet Inc.’s Google, was serious enough that YouTube had to consider the end of the very business model—sharing ad sales with a multitude of video producers—that had turned it into an internet colossus. At one point during the advertiser protest, staff warned online creators that YouTube might have to run commercials only on channels affiliated with established media outlets or record labels, effectively deleting the “you” from YouTube.

“We’re always in a crisis,” Wojcicki said in an emergency meeting that year.

Bardan had already worked at YouTube for several years by the time of the 2017 retreat and was taught its ethos on speech: Leave videos up, so long as they didn’t show or incite over-the-top violence. “This is a platform,” she was told. “We’re not responsible for what’s on it.”

The limits of that philosophy became obvious as YouTube became one of the most visited sites on the internet, and the company has consistently struggled to adjust, according to interviews with dozens of current and former YouTube employees.

Within YouTube’s upper ranks, the 2017 meeting was seen as the first step toward effectively wiping radical Islam from the commercial web and taking all forms of extremism more seriously. Advertisers returned, helping bring its business back from the brink.

But people lower down YouTube’s corporate ladder didn’t see it as such a triumph. Shortly after the staff retreat, White nationalists staged a deadly riot in Charlottesville, Va., a watershed moment in the rise of far-right violence in the US and abroad. A procession of young White men carried out racist attacks, accompanying them with accounts of how they’d been radicalized online.

Three summers after the retreat, Bardan and several colleagues put together a presentation showing the prevalence on YouTube of White supremacists, listing recent deadly attacks in New Zealand, Wisconsin, South Carolina, and Texas. YouTube had spent years developing a formula for dealing with violent extremist content. It had worked. So why was the company so reluctant to use it on this threat, too?

Most of YouTube’s standard fare has long been inoffensive, useful, even joyful—how-to clips, silly skits, and rare footage that wasn’t available on TV. But the company’s attempts to mold this chaos into a functional creative economy have been complicated by some people’s tendency to upload dark or illegal material. The exact boundaries could be tricky to draw. In its early years the company wrote a 70-page manual for its moderators with detailed tips for handling footage of diaper-clad adults or depictions of items being inserted into butts. “Use your judgment!” suggested one page. Another section of the manual called for the removal of content intended to “maliciously spread hate against a protected group.”

Setting standards for bigotry is notoriously difficult. There were videos extolling the wisdom of phrenology and others espousing racist conspiracy theories so obscure they flew right past many moderators. During one meeting, Jennifer Carrico, an attorney working on YouTube’s policies, clicked through reams of surreal footage before asking aloud, “What kind of Pandora’s box have we opened?”

Government officials sometimes called for deleting videos from people or groups they described as extremists. YouTube found most of these requests moralizing or naive, and the company’s lawyers believed they couldn’t make blanket decisions to block a subjectively defined group like “terrorists.”

The Arab Spring, when the world saw revolutions unfold on YouTube and social networks, seemed to validate this approach. “Defending access to information means colliding head-on with governments and others who seek censorship of ideas they find dangerous,” Victoria Grand, YouTube’s then-policy chief, said in a 2014 speech.

Islamic State tested that resolve. As the militant group ascended in Iraq and Syria, its members began uploading slick, cinematic propaganda to YouTube. A macabre video from the summer of 2014 showed extremists holding hostage the captured journalist James Foley, moments before he was beheaded. The footage ended with a threat to carry out more killings.

Such content was a nightmare for YouTube. Politicians, particularly in Europe, pressured the platform as never before. When lawmakers in Brussels summoned Google to a hearing on extremism, YouTube was so thinly staffed that it didn’t even have a dedicated policy officer in Europe.

YouTube scrubbed Islamic State clips as quickly as it could. Wojcicki also thought it could confront terrorist content with anti-terrorist narratives. YouTube staff tossed around the old legal maxim: “Sunlight is the best disinfectant.”

It was pressure from advertisers, not politicians, that finally made YouTube overhaul its approach. On Feb. 9, 2017, the Times of London reported that household brands using YouTube’s automated ad system had “unwittingly” sponsored videos from “Islamic extremists, white supremacists and pornographers.” Advertisers, who were already angling for leverage with Google, staged an extended boycott. For months, YouTube throttled ads on any channel that wasn’t tied to a vetted studio, network, or record label—basically, anything that wasn’t already on TV. The situation became so dire that YouTube was unsure if it could ever resume sharing ad payments with independent creators. “Once we get through this, it will turn back around,” Jamie Byrne, a YouTube director, told a group of marquee YouTube creators. “But if we can’t, you know—it’s over.”

There were few complaints about YouTube’s goal to clear out radical Islam; the problems seemed mostly to come from how it did so. “The organization was not prepared,” recalls one former staffer, who described YouTube’s moderation unit as a “dumpster fire” rife with mismanagement and chaos. Shortly after the retreat at the Westin, this staffer sifted through the initial filtering code to discover it included the Arabic word for “Allah,” a blunt hammer that might have wiped out millions of innocent videos.
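The detail about the Arabic word for “Allah” hints at why a bare keyword rule is such a blunt instrument. A minimal, purely hypothetical sketch is below; it is not YouTube’s actual code, and the rule list, video fields, and the idea of gating on a separate classifier score are all invented for illustration.

```python
# Hypothetical illustration only: a bare keyword rule versus one that requires
# additional context. Field names, the rule list, and the classifier are invented.

BLUNT_KEYWORDS = {"الله"}  # the Arabic word for "Allah" on its own

def blunt_filter(video: dict) -> bool:
    """Flags any video whose title or transcript contains a listed keyword.
    This also matches prayers, recitations, music, and everyday speech, which is
    why such a rule could sweep up millions of innocent videos."""
    text = f"{video.get('title', '')} {video.get('transcript', '')}"
    return any(word in text for word in BLUNT_KEYWORDS)

def contextual_filter(video: dict, classifier_score: float, threshold: float = 0.9) -> bool:
    """Flags only when an (assumed) violent-extremism classifier is also highly
    confident, so religious vocabulary alone is not enough to remove a video."""
    return blunt_filter(video) and classifier_score >= threshold

# A benign recitation trips the blunt rule but not the contextual one.
video = {"title": "سورة الفاتحة", "transcript": "بسم الله الرحمن الرحيم"}
print(blunt_filter(video))             # True  (over-blocking)
print(contextual_filter(video, 0.05))  # False
```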

In 2018, YouTube salespeople booked sponsors for popular Middle Eastern channels during Ramadan, a regional marketing season that one employee described as “the Super Bowl times 30.” But when the holiday arrived, many of the ads were nowhere to be found. It seemed YouTube’s moderators and machines, trained to prevent any commercial ties with extremism, had clumsily removed ads from nearly any video with Islamic imagery or in Arabic.

Eventually, YouTube ironed out these errors. Scandals became less frequent. Ad sales improved. Wojcicki created an Intelligence Desk, a division to spot troubling trends and keep them off YouTube. Islamic State videos effectively disappeared. 

Bardan, the YouTube violent extremism specialist, awoke early on March 15, 2019, to the news that an Australian man had opened fire on a mosque and an Islamic center in Christchurch, New Zealand, taking 51 lives. He livestreamed his murders online from a body camera. Bardan wept, wiped her tears, opened her computer, and began watching the terrorist’s video.

The shooter’s writings, interviews, and digital traces would reveal a fervent belief in the Great Replacement theory, which holds that non-White people pose a genocidal threat to Western civilization. “The individual claimed that he was not a frequent commenter on extreme right-wing sites,” New Zealand’s government concluded in a report, “and that YouTube was, for him, a far more significant source of information and inspiration.” The shooter learned to assemble his firearms from tutorials on YouTube, and his livestream landed there, too.

Immediately after the shooting, YouTube scrambled to remove remnants of the footage. But the platform was flooded with tributes to the murders from hatemongers and trolls. At one point, someone posted a new copy of the footage every second.

“We don’t have enough reviewers,” Jennie O’Connor, a YouTube vice president leading its Intelligence Desk, pleaded overnight. An executive later said the alarming speed of reuploads led some inside the company to suspect that a state actor was involved. (A YouTube spokesperson declined to comment on this.) YouTube decided to cut off all uploads and searches for the shooting, a first in the site’s history. Three months later, YouTube rewrote its hate speech rules to ban videos “alleging that a group is superior in order to justify discrimination, segregation or exclusion.” It prohibited denial or glorification of “well-documented violent events,” like the Holocaust or school shootings, as well as deadly material filmed by the perpetrator.

This caused few waves until the following summer, when George Floyd was murdered. In late June 2020, as racial justice protests swept the US, YouTube purged several inflammatory accounts, including those of former Ku Klux Klan leader David Duke and Stefan Molyneux, a prolific vlogger who’d once received a donation from the Christchurch shooter. (Molyneux later said that he “immediately condemned the New Zealand terrorist” and argued that they didn’t share beliefs.)

From the outside, it looked as if YouTube had awakened to the moment after Floyd’s murder. But executives insist that this was merely the hate speech update from the prior year going into effect, especially since channels weren’t taken down until they had multiple violations. “That just generally takes time,” says O’Connor, the vice president. “It’s not like we pick our heads up and say, ‘Oh my gosh, there’s hate speech on YouTube. We should really get on top of that.’ ” 

As YouTube honed its skill at undercutting Islamic State terrorists, some staff members grew concerned about how blind the platform was to other forms of political extremism. Matthew Mengerink joined YouTube as an engineering vice president in 2015, a rare Muslim in tech leadership. When he searched for the word “jihad” on the site, he found few Muslim extremists, but countless clips from angry Tea Party acolytes and stare-you-dead-in-the-eyes vloggers. A massive network of like-minded channels, many fueled by YouTube’s ad sales, latched on to the right-wing fixation with Europe’s growing Muslim migrant population. Not far from there were video discussions of the Great Replacement theory.

Mengerink worried that YouTube’s recommendation engine fueled this invective. “Anything that surfaces a bias—it will mine that bias like nobody’s business,” he recalls. He suggested rewiring the code to down-rank videos that went up to the line of hate speech rules and to counter videos critical of Muslims, Black Americans, or Jews by recommending alternative viewpoints. Proposals like these, at the time, were dismissed as unreasonable and un-Googley.

Mengerink left the company in 2016. YouTube, after its ad boycotts, would follow his advice and remove millions of controversial videos from its recommendations. Still, others saw an ominous proliferation of extremist content over the next several years. The presentation Bardan helped assemble in 2020 highlighted YouTube’s double standard in moderation. One slide showed stills of clips from known Islamist terror groups or figures offering Quranic recitations or rants against the US, which the rule change in 2017 allowed YouTube to categorize as extremist content. All those videos came down.

Then came a slide from another extreme. It showed thumbnails of a video sermon from a leader of Aryan Nations, a neo-Nazi group, and footage of Jason Kessler, the organizer of the “Unite the Right” Charlottesville rally, discussing “White genocide” on camera. Those remained on YouTube.

The problem, staff argued, was mostly bureaucratic. YouTube had different teams to deal with VE, where videos from Islamic State and its ilk went, and hate speech, where White supremacists wound up. The anti-VE effort had more latitude and resources. A colleague who worked on hate speech once confessed to Bardan that they were so swamped with material they rarely got around to the debatable videos categorized as White nationalist.

The document Bardan worked on included a series of recommendations, including treating content from White nationalists as violent extremism. By the end of the year, she’d left YouTube without ever getting a direct response from leadership.

When YouTube has addressed this issue, its defense echoes lines from other online platforms: Dealing with Islamist extremism is easier because national governments agree on definitions; there are registries and sanctions. YouTube relied chiefly on the proscribed terrorist organization lists from the British and US governments. “On a relative basis, it’s simpler,” O’Connor says. Nothing similar, she adds, exists for White nationalism.

During Donald Trump’s presidential election campaign and subsequent administration, even mainstream political discourse had begun pushing up against YouTube’s content guidelines. How should it handle a US president who had referred to Mexicans as “rapists” and several nations as “shithole countries”? Trump and his allies also spent years accusing Google and Silicon Valley of liberal bias, making it even more uncomfortable for YouTube to make difficult judgment calls. Calls for lawsuits and new government regulation of social media moderation grew louder.

Searching for usable definitions, some people inside YouTube proposed basing its policy on the classifications from the Southern Poverty Law Center, a well-known organization that tracks hate groups and ideologues. But the SPLC had become a prominent Trump adversary, and YouTube’s leaders turned down the proposal because of the political risk, according to two people familiar with the decision. The SPLC “became a dirty word inside YouTube,” recalls one former executive. A YouTube spokesperson says that the SPLC is “not widely accepted as an authoritative voice on hate groups.”

Susan Corke, director of the SPLC’s Intelligence Project, which tracks the far right, said in an email that YouTube had failed to take a proactive role in keeping extremists from abusing its platform and that its minimization of the SPLC’s expertise “once again shows its lack of commitment to its users’ safety.”

This dilemma of how to label hate and terror, and what to do about it, isn’t unique to YouTube, or even to the internet. Since Sept. 11, 2001, many governments have devoted more resources to one dangerous ideology above others. New Zealand’s official analysis of the Christchurch shooting determined that the country’s security apparatus had focused “almost exclusively” on Islamist extremism. It was an approach, the report concluded, “not based on an informed assessment of the threats of terrorism.” 

YouTube says it has substantially improved its response to extremism of all types. It points to clear rules outlawing hate speech and content that promotes violence or hate against protected classes and says it slows the spread of videos that come close to breaking its rules. It has begun to remove certain videos promoting the Great Replacement theory. When people search for the subject on YouTube today, they find a Wikipedia entry on the topic at the top of the search results.

On May 14, when a racist gunman murdered 10 people in a Buffalo supermarket and livestreamed the attack on Twitch, a YouTube rival owned by Amazon.com Inc., YouTube removed reuploads so swiftly that they effectively didn’t exist.

But the tragedy also followed a familiar pattern. The shooter posted an online manifesto that largely mirrored the ideology of the Christchurch shooter, an ideology the company’s critics say YouTube still helps spread. He also watched gun tutorials on YouTube.

Google has continued to encourage content that could steer people who are vulnerable to extremist messaging in a different direction. It has worked with Define American, an immigration advocacy nonprofit that published a report a month before the Buffalo shooting titled The Great Replacement Network analyzing 23 popular YouTube videos it classified as “anti-immigration.” Some came from solo YouTubers, but many were produced by well-funded think tanks (Hoover Institution) and popular conservative media (Fox News). “It’s extremism, but it’s packaged in a way that feels mainstream,” says Shauna Siggelkow, Define American’s director of digital storytelling.

Her group is trying the sunlight approach, sponsoring a video this summer from a YouTube creator unpacking the dangers of the Great Replacement theory. The footage topped 300,000 views in its first month. Collectively the videos the report labeled “anti-immigration” on YouTube have more than 100 million views, and counting.

©2022 Bloomberg L.P.
