In removing U.S. President Donald Trump from their services, Big Tech companies like Twitter and Facebook have firmly established the principle that they own their platforms and they will decide who gets to speak there. This brings them into uncharted territory, raising profound questions about freedom of speech and corporate responsibility. Social media platforms create the illusion of being public spaces, but they are, in the end, private property. They can set rules of behavior, and just as the owner of a bar can kick out an unruly patron, they have now shown that they can remove people who do not follow their rules.
That principle seems fine, but the question is how to apply it consistently. Trump’s Jan. 6 speech incited his supporters to march on the U.S. Capitol in an unprecedented attack while legislators went through the formalities of declaring Joe Biden the winner of the 2020 presidential election. But Trump’s call to action was hardly the first time he had said something incendiary on social media. So the companies’ explanations and justifications for removing him sounded self-servingly cynical; like a weather vane, they appeared to be responding to a shift in the political wind.
Trump has made many remarks in the past that either condoned or threatened violence, such as after the events in Charlottesville, Virginia, in 2017 and during the Black Lives Matter demonstrations last year. At the time, Big Tech leaders portrayed themselves as upholders of free speech and insisted that they ran platforms, not news organizations. Yet their record in upholding free speech in other countries has been spotty: They have blocked the accounts of politicians, human rights defenders, and journalists who are inconvenient to some governments, and they have allowed politically powerful actors to vilify and dehumanize minority groups, all while claiming to have zero tolerance for hate speech.
By removing Trump, they have created a precedent: When the political ground shifts, the companies will shift their position. Applying such rules consistently is difficult even within a country; applying them worldwide is impossible for companies that lack the mandate, expertise, capacity, and authority to safeguard a right as contested and fragile as free speech.
Big Tech companies do own their respective platforms, and they have the right to devise terms of service and standards. But those are decided unilaterally, even if some companies undertake elaborate consultations with affected stakeholders. Their actions do not follow due process, and they rarely explain their decisions clearly. Information about how policies are developed, how they are implemented, whether there is an appeal process, who judges those appeals, and what redress and remedies are available to the user is not easily accessible; nor do the companies have the resources to respond to each complainant.
Arbitrariness is the norm, not the exception. While social media companies have been prompt in banning right-wing figures in the United States, such as Milo Yiannopoulos and Alex Jones, elsewhere in the world it is liberals and human rights defenders who have been suspended while right-wing voices have had a field day. The companies appear to bow to power, not to ideology.
Facebook has been criticized for the way its platform was used to disseminate hate against minorities in Myanmar, and it has admitted that it was used to incite violence there; it later blocked accounts belonging to Myanmar’s military. Facebook has also apologized for its role in anti-Muslim riots in Sri Lanka in 2018, and its hate speech policies have come into conflict with its business priorities in India, its largest market by number of users. Twitter has been accused of complicity in the Indian crackdown on media reporting from Kashmir. As Russian opposition leader Alexei Navalny pointed out in a series of tweets criticizing Twitter’s ban on Trump, authoritarian leaders have used the platform without any restraint. Navalny has himself received threats, but Twitter has not acted (nor has he complained). Citing its policy against glorifying Nazi ideology, Twitter also blocked Sanjay Hegde, an Indian lawyer who had posted an image of a lone dissenting German refusing to salute Hitler. (The image actually celebrates the man who stood alone.)
I have experienced the Twitter suspension process firsthand: Early last month, after I posted a poem I had written a decade ago lamenting the 1992 destruction of the Babri Masjid in India, I was suspended for about two days. Several leading writers expressed outrage, and a Hindu nationalist user claimed credit for having me removed. Twitter later told me I had been suspended because the title of a list of accounts I had made violated its abuse policies (which were never explained to me). I could not change the title; I had to remove the list to get reinstated. Eventually, I transferred the accounts to another list under a different name and was able to return some 36 hours later.
What these experiences show is how unevenly social media companies apply their policies in different contexts. Their actions are unilateral, often arbitrary, and do not follow due process. It is no surprise that Parler is suing Amazon over the social platform’s removal from Amazon Web Services. Due process is the key, as the web services company Cloudflare explained when it terminated the account of the neo-Nazi Daily Stormer website in 2017.
Trump’s speech was certainly a call to action. Whether it was inciting violence is for the lawyers to debate, but those rejoicing in Trump’s departure from social media should ask themselves what happens if leaders they support are removed from these platforms without due process.
It seems simple: A shopping mall is within its rights to remove displays that might offend its customers, such as graphic anti-abortion posters or in-your-face abortion rights messages, in a society as divided as the United States is over a woman’s right to abortion. Like the mall, which wants footfall and wants shoppers to linger, social media companies want users to stay on their sites for long periods and have a wholesome, reassuring experience, so that they click on the advertisements and buy the products.
But Facebook and Twitter are near-monopolies. Sure, alternatives exist, but they simply do not have the critical mass to become viable competitors. Parler is now homeless, and this is not the first time Amazon Web Services has acted: In 2010, it removed WikiLeaks from its servers. A similar fate might befall Mastodon, a social network favored by many on the left, particularly in India.
In the United States, the First Amendment stops the government from restricting freedom of speech or of the press, but that restriction does not apply to private entities. Yet when a private entity is a near-monopoly, as is the case with most of Big Tech; when state-run or other private alternatives do not exist; and when the dominant entity appears to do the bidding of powerful interests at the behest of lawmakers, the line dividing the state and the private sector gets blurred.
Big Tech claims it acts to stop hate speech from spreading too widely. The academic Susan Benesch, who has thought deeply about the issue, distinguishes hate speech from dangerous speech: The former should be permissible; the latter may need regulation. Speech is dangerous, she says, if the speaker can influence a large following; if the audience is susceptible and lacks access to, or does not believe, accurate information; if the language dehumanizes a group; if the speech encourages the audience to nurse its grievances; if it invokes ethnic purity and condemns outsiders for polluting that purity; and if the language is coded, like a dog whistle, using imagery that carries special meaning for the audience. As the writer Seth Abramson noted in a 200-tweet-long thread, Trump’s speech was dangerous by that standard. Even the Wall Street Journal, no fan of the Democrats, believed Trump’s speech was impeachable.
But on what basis do companies decide that particular speech represents such a “clear and present” danger? Outraged by tech companies’ power, several Republican leaders have echoed Trump’s call to remove the protection social media platforms enjoy under Section 230 of the Communications Decency Act, which grants them immunity from liability for what their users post because they are deemed carriers of data, not publishers. And yet, by developing their standards and policies and implementing them, the companies do, in fact, exercise some editorial control over the content they carry.
They cannot have it both ways. The Trumpian solution of removing Section 230 is facile: Without the provision’s immunity, platforms would be liable for what their users post, would moderate far more aggressively, and might well have removed Trump from social media much earlier.
Companies need sound rules and due process. We live in a global village; the rules must be applied fairly, everywhere. As David Kaye, a former United Nations special rapporteur for the right to freedom of expression, argues in his book, Speech Police, well-intentioned companies need to work with rights-respecting governments and free speech and human rights experts in a coordinated, multi-stakeholder approach, to understand what free speech means, what hate speech means, what limits might apply, and how those are to be applied—consistently and without discrimination.
That’s an impossible task for a private company to perform on its own. Newspapers and magazines know how to get it right: They have internal controls to decide what appears in their pages, publishing all points of view, clearly expressed, and news that has been fact-checked and verified. Social media companies have more than enough resources to invest in an infrastructure that would allow them to act like what they de facto are: publishers and editors. They crave users, credibility, and trust.
They have to earn them.