Foreign Policy
Comment
Nikhil Pahwa, Elizabeth Lange, Doowan Lee

Can Regulation Douse Populism’s Online Fires?

The activist Mike Merrigan holds a piñata shaped like the Twitter logo with hair to look like U.S. President Donald Trump during a protest outside Twitter headquarters in San Francisco on May 28, 2020. Justin Sullivan/Getty Images

The storming of the U.S. Capitol should not come as a surprise to those who have been tracking the impact of social media on activism and political campaigning. The same tools that enabled Barack Obama to reach out to young voters in 2008, Narendra Modi to woo and mesmerize India in 2014, and activists to launch Egypt’s “We Are All Khaled Said” campaign during the Arab Spring have played a critical role in fostering the hatred that is now shaking countries. What has happened in the United States can happen anywhere in the world, and many democratic countries are sitting on a tinderbox.

The political actors who benefit from manufacturing hate are unlikely to disappear. The solution may lie in regulating the medium, not the messenger. Manufacturing hate relies on political actors like Donald Trump—always in campaign mode, even in government—drip-feeding new narratives to change mindsets, play on insecurities, and dehumanize “the other.” Alongside true believers, many politicians employ bots, fake accounts, and paid users to spread political messaging across social media platforms, amplified by algorithms designed to reward high engagement.

In 2018, U.N. investigators called out Facebook’s “determining role” in enabling Myanmar’s genocide against the Rohingya, saying that the platform had “turned into a beast.” While Facebook could take refuge then in the excuse that it lacked moderators who knew the language and context, the violence is now taking place in its own backyard: the United States.

Platforms often turn a blind eye to powerful actors who could harm their business interests: YouTube chose not to demonetize hateful propaganda on its platform until advertisers pushed back. In the Philippines, where Facebook has a partnership with the government for undersea cables, the company has been accused of supporting President Rodrigo Duterte. Last year, Facebook’s top policy executive in India was accused of arguing, for fear of political repercussions, against banning the account of a politician from the ruling Bharatiya Janata Party who was inciting violence.

The most significant concern is that if platforms actually exercised bias, they could change the course of local and global politics. It is thus incumbent on countries to ensure that platforms with electoral impact are held to a higher level of scrutiny and that there is some accountability for their actions and inaction.

Yet while we want hate speech taken down, we don’t want censorship of free speech. Legal provisions referred to as “safe harbor” (covered by Section 230 of the Communications Decency Act in the United States, a frequent target of Trump) create an interesting challenge for lawmakers.

A critical aspect of protecting free speech online is the need to protect the platforms that enable public speech. It is humanly impossible for platforms to police the billions of pieces of audio, visual, and textual content being uploaded on the internet each day, and algorithms are not yet capable of regulating context-dependent speech. Safe harbor provisions ensure that platforms are not held liable for what they cannot police with certainty, especially because they would never survive that liability. The situation is further complicated by the fact that the provisions apply equally to content platforms like YouTube and internet service providers (ISPs) like AT&T.

Safe harbor provisions also allow platforms—as private parties—to enforce their own restrictions on speech, disarmingly referred to as “community guidelines.” Thus, Facebook may not be liable for a hateful post from a user, but it can choose to take the post down because it violates those guidelines. An ISP may ban malicious code or spam from its network. The obvious challenge with this formulation is that implementation will be inconsistent.

Community guidelines and their implementation evolve—with time, media and regulatory scrutiny, and shifts in power—but that evolution is often too late and too unreliable.

It is clear that we cannot expect platforms to always act in the best interest of society. We also cannot risk giving governments too much power, nor risk regulation becoming a vehicle for ruling-party bias. There thus needs to be a rules-based and unambiguous approach to regulating speech on platforms. Not all platforms can be treated as equal: AT&T cannot be handled in the same manner as YouTube.

Regulators and activists therefore need to find a middle path and close the gap between the responsibility and the accountability of those platforms that have a direct impact on electoral integrity.

Platforms have the ability to take down content and ban accounts or—as was the case in India—allow these to remain. They could perhaps be held to a certain set of community guidelines, with a reasonable and defined expectation of action when these standards are violated. Such provisions might also require faster action during public emergencies, when citizens’ lives and liberty are at risk.

It’s important that these requirements are applicable to both content and digital advertising, given that the reach of advertising is often greater than that of unsponsored content on these platforms.

The same rules for free speech will not be applicable across the world: Political, social, and cultural context differs from country to country, and jurisdictional concerns will perhaps need to be addressed by conforming community guidelines to the free speech laws of each country.

It’s critical that there be transparency in how platforms decide to act: how their algorithms function, how their human moderators deal with complaints, and, most importantly, what the defined chain of command is for content decisions. There also needs to be accountability for enforcement of these guidelines.

However, two factors about the nature of speech and how it is regulated make even these actions tricky. First, incorrect information is often not illegal: despite their best efforts to determine the accuracy of what they post or share, users can be mistaken, and a platform may not be able to fact-check everything with certainty, or at the expected speed.

Second, dealing with hate speech can be extremely complicated. Messages can be context dependent, use coded language, or swap in symbols, which may be difficult to discern for both algorithmic and human moderators. It wouldn’t surprise me to see the development of an entirely new language, just as l33t once was, to evade online censorship. The Chinese government uses literally hundreds of thousands of people, both working for the government and inside private companies, to censor dozens of new terms every day, even with the threat of prison hanging over anti-government posters. Democracies, thankfully, can’t do the same.

Drip-fed political messaging is even more complicated: In India, jokes copied from websites were modified and sent to WhatsApp groups to delegitimize political opponents. An individual message by itself might come across as criticism of a particular community or religion rather than incitement to violence, but it can be part of a long-term campaign of polarization. On platforms, and especially in private groups, we’ve seen coordinated messaging of conspiracy theories highlighting alleged behaviors and activities of political actors and communities, slowly poisoning the minds of a population. We also have to contend with the fact that campaigns on platforms such as WhatsApp take place via private communications and thus cannot be held to the same standards as public speech.

These activities are not necessarily illegal and might not fall afoul of any community guidelines. However, their impact is unmistakable: They can lead to mobs gathering to target Muslims in a small town in India or Rohingyas in Myanmar, or cause a group to storm the Capitol in Washington. They can also lead to a gunman entering mosques in Christchurch, New Zealand, and killing 51 people.

This requires thinking about how to regulate the messenger, not just the medium, and how to deal with coordinated disinformation campaigns. There are local and global call center-like operations, staffed by hundreds of paid workers and volunteers that political actors turn to, managing and polarizing people across millions of groups on large platforms. These campaigns are designed to change the political discourse of a country through disingenuous, disinformation-driven communication strategies using bots, fake profiles, and groups. They are no different from those run by Russia before the 2016 U.S. presidential election and perhaps need to be treated differently from the genuine actions of true believers.

All these suggestions are imperfect, and there is no single solution that will solve such a momentous challenge. They need to be discussed and debated and fleshed out. They will have unintended consequences.

But there’s no doubt that democracies have gone beyond the point of no return. Doing nothing is no longer an option.
