In 2020, Facebook appointed what it refers to as its “Supreme Court”—a quasi-independent council called the Facebook Oversight Board—to review controversial content moderation decisions. This week, the Oversight Board handed down its most important judgment to date: support for the company’s move to ban former U.S. President Donald Trump from posting on Facebook and Instagram a day after the Jan. 6 Capitol insurrection. But the board also called on Facebook to review that decision within six months, in order to ensure the deplatforming was “necessary and proportionate,” and to consider whether Trump’s ban should instead be time-limited.
The board objected in part because Facebook’s current policies allow only for suspending account holders for a stipulated period of time or permanently banning them, not for the kind of open-ended, indefinite suspension imposed on Trump. Facebook routinely issues permanent bans to repeat offenders, and Trump’s misuse of the platform—which he used to spread falsehoods, dispute the results of the 2020 election, and ultimately egg on an insurrection—was arguably more egregious and more damaging than that of many users banned before him. But, as former Facebook executive Richard Allan has previously revealed, the U.S. president was always going to receive special treatment compared to, say, the president of Brazil. “The greater the politics, the more reluctance there is,” he said, adding that “if it’s a president of a … far-off country, the decision is more likely to come down in favor of removing their content.”
This helps explain why Facebook CEO Mark Zuckerberg threw the Trump-shaped hot potato to the Facebook Oversight Board—he thought he could outsource the difficult decision about the nature and length of the ban to others. But the board has thrown that potato right back into Zuckerberg’s lap. In its decision, the board deflected responsibility back onto the company, stating: “In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.”
For years, Facebook has abdicated responsibility for addressing viral disinformation and networked hate speech—even as their impacts spilled offline, leading to physical violence and preventable deaths—avoiding accountability by hiding behind the fig leaf of free speech absolutism. But then came the Jan. 6 insurrection, and finally the company acted against then-President Trump, deplatforming him indefinitely from Facebook and Instagram the following day.
It could be argued that the Oversight Board was itself avoiding a thorny decision. For example, it could have suggested a permanent ban—in line with existing Facebook policies—considering the serious risks to democracy and human life associated with Trump’s weaponization of the platform.
These are the sorts of editorial and curatorial decisions made by journalists and editors every single day, all around the world. But there is ongoing resistance within Facebook (and other internet communications companies) to assuming the editorial-style gatekeeping functions associated with journalism—the free and independent practice of which is protected under international human rights law because of its service to democracy and transparent, accountable governance. All this underscores the gulf between the values, ethics, and professional frameworks of Big Tech and journalism.
So, if Facebook isn’t a news organization, what is it? Early last decade, Zuckerberg had an idea: What if Facebook were a country? This led to countless utopian news stories and data visualizations depicting well-educated, digitally empowered, democratically engaged, and globally networked “citizens.” A World Economic Forum video from 2017 described Facebook as more populous than China, while comparing WhatsApp and Instagram—both also owned by Facebook—to India and the United States, respectively. And some commentators went so far as to portray Zuckerberg as a benevolent dictator of this digital “country.”
Since then, the platform’s design and business model flaws have helped COVID-19 disinformation go viral, enabling what some have termed the “disinfodemic”; United Nations investigators have accused the platform of playing a determining role in the suspected Rohingya genocide; celebrated Filipina American journalist Maria Ressa has accused the company of being complicit in her persecution, prosecution, and conviction in Manila; and a defeated U.S. president used the platform to fuel an insurrection at the U.S. Capitol. If Facebook were a “country,” it could be cast today as a rogue state. But—obviously—Facebook is not a country.
The Facebook Oversight Board has told the company to apply the tests of necessity, transparency, and proportionality in deciding whether Trump will be consigned to a Facebook memory. Such requirements are features of what are considered legitimate restrictions on freedom of expression in exceptional circumstances under international human rights law as it applies to states. They are designed to avoid infringement of Article 19 of the Universal Declaration of Human Rights, which says that “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”
But because Facebook is a company, not a country, it is not bound by the Universal Declaration of Human Rights, nor by the International Covenant on Civil and Political Rights, which deals with hate speech at the U.N. level. The company can therefore make decisions to act in the interests of human rights—individual and collective—without being bound by such laws, while still being guided by the principles underpinning them. In other words, while a state cannot legitimately mute a citizen permanently, a corporation can remove a user from its service—especially if that user is routinely abusing the terms of use, injuring others’ rights in the process, and threatening democracy.
As we have documented in Balancing Act, our book on global disinformation responses published by UNESCO and the U.N. Broadband Commission in 2020, Facebook’s policies and standards regarding content moderation are somewhat opaque and slippery. The company has consistently sought to evade responsibility for decisions on politically sensitive cases, relying, for example, on ill-defined exemptions that shield content categorized as political advertising or opinion from action on disinformation. These were points highlighted by the Oversight Board itself. “Facebook should publicly explain the rules that it uses when it imposes account-level sanctions against influential users,” the board’s decision on Trump’s ban reads.
Now the social network is in the difficult position of having to choose: ignore its own Oversight Board, ban Trump permanently, or reinstate the former president after a specified period under yet-to-be-developed protocols and policies governing the indefinite deplatforming of users.
As Zuckerberg tries to avoid burning his hands on that hot potato, he could find guidance in the U.N. Guiding Principles on Business and Human Rights (also known as the Ruggie principles), which are designed to prevent corporations from undermining human rights, and in the Rabat Plan of Action, which is intended to guide those seeking to balance freedom of expression rights against the need to curtail incitement to hatred. He might also find value in our 23-step protocol for assessing responses to disinformation while respecting freedom of expression, published by UNESCO in 2020. And the U.N.’s Office of the High Commissioner for Human Rights is facilitating the B-Tech Project, which is designed to synthesize such guidelines and tools for practical application to gnarly human rights questions at the intersection of business and technology.
It still feels like Facebook is making things up as it goes along, while continuing to actively resist even incremental changes to its highly profitable, but incredibly damaging, business model. This is what happens when a corporation grows too big too quickly, with a stated desire to “break things,” and without due regard for human rights or democracy. But at least there is now a shard of much-needed transparency in the process, along with a whiff of accountability. Facebook may not be a country, but it must curate safe, inclusive communities on its platforms that do not undermine human rights or threaten the very existence of democracy—unless it wants to be cast as a failed state.