Foreign Policy
Comment
Bharath Ganesh and Dipayan Ghosh

How to Counter White Supremacist Extremists Online

A supporter of President Donald Trump walks with a Confederate flag during a protest on Dec. 12, 2020, in Washington. Stephanie Keith/Getty Images

The far-right extremists who invaded the U.S. Capitol on Jan. 6 used mainstream platforms including Facebook to livestream their desecration of the seat of American democracy. The presence of white supremacist online communities suddenly came into sharp focus, despite the fact that they had been organizing for such an attack for months. The Capitol rioters displayed the flag of Kekistan, a parody banner popular among 4chan posters that mimics the Wehrmacht’s war ensign, and wore clothing that made prominent reference to the QAnon conspiracy theory. White supremacist militias were present as well, with symbols of the Proud Boys, Oath Keepers, and Three Percenters on display alongside the Confederate flag.

Despite many warnings of these groups’ extremism and violent intentions, social media companies have been reluctant to remove from their platforms far-right extremists who repeatedly and deliberately violate their rules. This was not the case for jihadis: In 2014 and 2015, social media companies and governments worked together to deplatform them, drastically reducing their capacity to disseminate propaganda and radicalize followers. Undoubtedly, countering the far-right is more complicated, but social media platforms will not act until there is sufficient pressure from governments.

Complexity is not the reason social media platforms have been reluctant to take action against the far-right. Their reluctance is more cynical. For years, social media platforms tolerated hate, racism, xenophobia, misogyny, and white supremacy because this content engaged their users.

In 2016, internal research at Facebook suggested that 64 percent of all users who joined extremist groups on its platform did so because of algorithmic recommendations. Executives buried these findings. YouTube executives knew that far-right videos were going viral on their platform, but they refused to act because they prioritized engagement metrics instead. After the “Unite the Right” rally in Charlottesville, Virginia, in 2017, Twitter took down numerous alt-right accounts, but it left key figures such as Richard Spencer, influencers such as Brittany Sellner (previously Brittany Pettibone), and extremist social movements including Generation Identity on the platform.

While these platforms were failing to counter the far-right, they were working together to counter jihadi exploitation of their services. In 2017, Facebook, Microsoft, Twitter, and YouTube formed the Global Internet Forum to Counter Terrorism (GIFCT). They created a shared database of image and video “hashes,” or fingerprints, that member platforms use to identify and take down terrorist content. Through this database, Facebook can, for example, fingerprint a terrorist video and share that fingerprint so that other platforms can take down the same video if one of their users posts it. This hash-sharing database is GIFCT’s flagship instrument for addressing terrorist content online. To date, it has been used only to a minimal degree to counter far-right content.
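
The mechanism behind hash-sharing is simple to sketch. Below is a minimal illustration in Python of how such a shared database might work; the database and function names are hypothetical, and SHA-256 stands in for the perceptual hashes (such as Facebook’s open-sourced PDQ algorithm) that member platforms reportedly use so that resized or re-encoded copies of an image or video still match.

```python
import hashlib

# Hypothetical stand-in for GIFCT's shared hash database. Real deployments
# reportedly use perceptual hashes (e.g., PDQ) so that near-duplicate copies
# still match; SHA-256, used here for simplicity, matches exact copies only.
shared_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint ("hash") of an uploaded image or video."""
    return hashlib.sha256(content).hexdigest()

def share_hash(content: bytes) -> None:
    """The platform that first identifies terrorist content shares its hash."""
    shared_hashes.add(fingerprint(content))

def should_block(upload: bytes) -> bool:
    """Any member platform checks a new upload against the shared hashes."""
    return fingerprint(upload) in shared_hashes
```

The key design choice is that platforms exchange only fingerprints, never the underlying files, which lets them flag known material across services without redistributing it.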


The 2019 massacre of 51 Muslims in Christchurch, New Zealand, was a key turning point in governing the far-right online. The terrorist used Facebook Live to stream himself shooting innocent worshippers at two mosques in the city. Far-right users copied and reposted the video thousands of times, spreading it across various platforms. For the first time, GIFCT mobilized its resources to address far-right extremism, and its database helped to coordinate the removal of these videos.

After the attack, New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron introduced the Christchurch Call, a set of commitments for governments and companies to address the spread of terrorist and violent extremist content online. In a speech to the United Nations General Assembly, Macron laid out the Christchurch Call as an example of a new kind of multilateralism that involves not just other governments but also corporations, platforms, and civil society. The Christchurch Call provided an “impetus for reform” of GIFCT.

GIFCT represents an avenue to ensure that the immensely valuable—and easily manipulable—attention markets that platforms create are not available to extremists who aspire to use them as tools to mobilize violence in the name of domination, oppression, and exclusion. Unfortunately, despite its claim to carry the torch lit by the Christchurch Call, GIFCT barely addresses right-wing extremism.

To define what counts as terrorist content, GIFCT relies on the U.N. Security Council’s sanctions list, which identifies specific terrorist groups. While numerous jihadi individuals and groups are on this list, the U.N. recognizes no far-right or white supremacist groups as terrorist actors. GIFCT’s reliance on this list thus ensures that the vast majority of content fingerprinted and shared through its database relates to jihadism alone.

Far-right content enters the hash database only through a separate mechanism, the “Content Incident Protocol,” which is triggered solely in response to an attack. In its 2019 transparency report, GIFCT had a separate category dedicated to videos related to the Christchurch attack, titled “New Zealand Perpetrator Content,” which constituted 0.6 percent of all hashes in the database. In the 2020 transparency report, this figure increased substantially, with specific categories for “Christchurch, New Zealand Perpetrator Content” (6.8 percent of hashes), content related to the 2019 Halle synagogue shooting in Germany (2 percent of hashes), and a misogynist shooting in Glendale, Arizona, in May 2020 (0.1 percent of hashes).

While jihadi content—by virtue of being sanctioned by the U.N.—is stored in the database and can be taken down preemptively, far-right content is removed only after an attack.


In response to the Capitol attack, Facebook, Twitter, and YouTube took swift action to deplatform outgoing President Donald Trump, who normalized the racism, hate, misogyny, xenophobia, and white supremacy that these online communities propagate. On Jan. 12, Twitter announced it had shut down over 70,000 QAnon-related accounts. On Jan. 19, Facebook reported that it had taken action against thousands of militarized social movement accounts. Yet despite the evidence that white supremacists planned to kidnap and murder U.S. legislators, and despite these actions by GIFCT’s own members, it is disheartening that the institution has neither released information on actions taken nor issued a statement on its members’ activities in the aftermath of the assault on the Capitol.

Despite its many flaws, GIFCT still represents an opportunity to address this problem, given its position as an institution that can coordinate content moderation across the industry. However, without pressure from the U.S. government, GIFCT is unlikely to take the necessary action to address far-right content online.

The incoming Biden administration has a window of opportunity to make significant changes to the way platforms govern far-right extremism. Applying pressure directly to platforms and to GIFCT is the only way to bring the institution in line with its lofty ambition of carrying forward the commitments of the Christchurch Call. Countering the far-right online will be a long battle, but some changes can be made immediately.

First and foremost, it is clear that platforms and GIFCT will take action on far-right extremism only when governments provide clear definitions. After the United Kingdom designated the neo-Nazi group National Action a terrorist organization, Britons could no longer access the group’s Twitter account. As Joel Rubin wrote recently in Foreign Policy, designating far-right terrorism as such is a necessary first step in countering it. If the U.S. government clearly identifies organized hate movements as terrorist threats, a designation that repeated attacks by actors radicalized by these movements have shown to be warranted, U.S.-based social media platforms will be obligated to take action against these groups. As the legal scholar Evelyn Douek writes, when powerful governments identify terrorist content, social media companies come under pressure to act.

Second, the Biden administration should take a leadership role in multilateral institutions that address terrorist content online. The administration should add the United States as a signatory to the Christchurch Call, an invitation the Trump administration refused. This would allow the U.S. government to work alongside New Zealand and France to drive forward the commitments outlined in the Christchurch Call.

Finally, the United States already has a representative on GIFCT’s Independent Advisory Committee. Through this position, the Biden administration can push GIFCT to remedy its inadequate response to the far-right by demanding that it explicitly include far-right groups and ideologies in its definition of terrorist content. This would push GIFCT to ensure that its hash-sharing database (and other tools it is developing, such as a URL database) accounts for and challenges far-right content.

By signing on to the Christchurch Call and exercising leadership in GIFCT, the Biden administration has an opportunity to pressure social media companies to take far-right terrorist content and violent extremism seriously. However, this multilateral engagement alone is insufficient.

The U.S. government must take advantage of its ability to hold these platforms accountable. Legislation that defines platforms’ liability for the algorithmic recommendation and amplification of racist, hateful, misogynistic, and white supremacist content must be considered. Not only is this in line with the commitments of the Christchurch Call, but recent evidence also shows that even after Facebook signed on to the call, it continued to recommend white supremacist pages to its users. On Twitter and YouTube, researchers have provided convincing accounts of how content recommendation systems benefit far-right users.

Legislation that seeks to challenge the far-right online must also consider the platform advertising services that extremists—and foreign actors—consistently exploit. In the weeks after the insurrection at the U.S. Capitol, the Tech Transparency Project found that Facebook continued to serve microtargeted ads for military gear to users active in militia and patriot groups on the platform. Sens. Tammy Duckworth, Richard Blumenthal, and Sherrod Brown, as well as the attorneys general of the District of Columbia, Illinois, Massachusetts, and New Jersey, noted in letters to Facebook about these ads that the company must stop giving “profit precedence over public safety.” On Jan. 16, Facebook responded that it would ban such ads.

While Facebook has taken significant steps toward better regulation of its advertising market and greater transparency on political ads, the company has repeatedly failed to take action against far-right extremists exploiting its advertising system. Lawmakers should consider imposing strict financial penalties on companies that allow racist, hateful, misogynistic, and white supremacist content to propagate through their advertising markets.

In the wake of one of the most terrifying attacks on U.S. democracy in recent memory, bold action that limits the spread of hate, racism, misogyny, and white supremacy on social media platforms is sorely needed.
