Foreign Policy
Comment
Dipayan Ghosh

Taking Trump Down Has Exposed Social Media’s Inherent Contradictions

The suspended Twitter account of U.S. President Donald Trump appears on a laptop screen in San Anselmo, California, on Jan. 8. Justin Sullivan/Getty Images

Recent events have forced leading American tech firms into a precarious choice: side with the outgoing president whose self-serving, divisive, and false messages fueled a riot, or with the incoming one who has emphasized his desire to bring the country back together?

In the weeks preceding the alarming events of Jan. 6, when supporters of outgoing President Donald Trump stormed the U.S. Capitol, the president persistently and falsely suggested that the November 2020 presidential election had been rigged and stolen from him by Democratic opponent Joe Biden. And Trump had a ready-made audience for these falsehoods: nearly 90 million followers on Twitter, along with substantial engagement across other social media, including videos relating to the Capitol riot that were uploaded online.

As such, there has been tremendous pressure on leading internet companies to take a strong stand against Trump, who favored the internet above all other forms of communication. Stated in those terms, it feels obvious that companies should take down Trump’s content—which many have equated to “siding” with Biden and the liberal faction. This appears to be the only option that simultaneously protects the safety, security, and democratic interests of the public.

But there have still been stark differences among the firms. Most noticeably, Facebook is still holding off on a permanent ban. While Twitter and others have thrown him off their platforms for good, Facebook has only suspended him "indefinitely," a position that Chief Operating Officer Sheryl Sandberg took plenty of flak for. These differences, and the last-minute nature of the action, suggest the decision wasn't an easy one.

And it isn't easy. A ban of any prominent individual, let alone the single most powerful man in the world, creates internal staff consternation and public controversy for internet platforms, especially those with names recognizable to the everyday consumer. Such bans cut to the inherent nature of what these companies have become: entities that operate a basic service enabling social interaction, a general utility for global society.

The guiding principle was, and continues to be, promotion of the individual right to freedom of political expression. But while some in Silicon Valley do hold a genuine belief in freedom of speech, this most American of civil liberties was also the perfect shield for the aspects of social media platforms' business models that make them vulnerable to use as conduits for misinformation and disinformation. That model, broadly shared across all major platforms, emphasizes the uninhibited collection of personal data for behavioral profiling, alongside the development and implementation of opaque but highly sophisticated algorithms that organize social content and target ads at audience segments.

There is a lurking set of not just economic but also political incentives at play beneath the surface of every one of our interactions with digital media. Continuing to cling to the free speech argument, as Facebook chief executive Mark Zuckerberg did in late 2019, staves off immediate threats of regulation by a president who makes rash economic decisions on a whim and disseminates content that pushes the borders of free speech itself. Hiding behind the excuse of free speech helps avoid a slippery slope of regulatory inquiry around the world, inquiry that could demand that companies like Facebook clearly define the red lines they set for themselves, red lines that would then be discussed and debated by public intellectuals. And perhaps most importantly, by not moderating the president or, for that matter, taking him offline, platforms like Facebook are able to leave highly controversial and therefore engaging content online for continued consumption.

Over the past several years, it has served the business model of social media companies to avoid aggressive content moderation, doing only as much as necessary to preserve their political position and stay relatively safe from regulators. They may frame this as a commitment to free speech, but the real reasons have been more venal.

But the calculus for the industry shifted quite suddenly, and dramatically, in recent months. Trump publicly adopted increasingly extreme and verifiably false views, most notably about mail-in ballots, the state of the COVID-19 pandemic, the security of voting, and finally the result of the election itself. The mounting harm done by his speech put growing pressure on the likes of Twitter and Facebook, even in cases where he did not necessarily post the questionable views on his accounts on those platforms. Tech firms were pushed into the most difficult of corners: Leave Trump up in the name of free speech, or bring him down in the interests of democratic process in the United States?

As is often the case with progressive issues, Twitter was the first to "do the right thing": shortly after the Capitol assault, the company suspended Trump's account for 12 hours, and a 12-hour ban on Facebook closely followed. Facebook soon after announced a two-week ban of Trump, enough to cover the company through Biden's inauguration, noting also that the ban could be extended indefinitely thereafter; the next day, Twitter followed with a permanent ban of Trump's account.

Facebook has yet to take the same action, but in the meantime, a slew of other brand-name internet and technology firms have followed Twitter's lead and moderated Trump in one form or another, including YouTube, Instagram, Twitch, Snapchat, PayPal, Shopify, TikTok, Reddit, Discord, Apple, Google Play, Amazon, Stripe, and Airbnb. Corporations, especially those with public-facing, global brand names, have increasingly broken with Trump and the extreme faction of his extended establishment.

The clear signal this sends is that these companies’ actions are driven by their own commercial interests combined with their vulnerability to potential political blowback, not the democratic interest. This is not a surprising occurrence; the U.S. national economic design is one that favors profit-seeking firms, and the firms in question are acting exclusively in their corporate interests.

But this is not an ideal set of circumstances for anyone. For the companies in question, there is constant public scrutiny over content moderation decision-making unless and until responsibility for takedown adjudications can be pushed to external third parties that have the public's trust. Meanwhile, many users of these platforms are progressively subjected to hateful and conspiratorial content that, while highly engaging for many, drives irrational political polarization that can manifest in the real world as hatred, as we witnessed earlier this month.

The United States badly needs regulatory changes, particularly to Section 230 of the Communications Decency Act, in a manner that institutes the right incentives for companies to act in the public interest without forcing the government to get directly involved in deciding which kinds of content should be deemed socially unacceptable and taken down by the companies. Biden has in the past suggested that Section 230 should be revoked, and such a reform might indeed go a long way toward resolving the harms we have witnessed online in recent years; revocation would place the onus on internet platforms to minimize corporate liability over user-generated content by aggressively moderating perceived forms of illegal content. Nevertheless, many have pushed back against such proposals because of possible impediments to free expression online.

Should Biden see resistance to the idea of revocation, he will also be well positioned to consider a structured set of reforms that perhaps employ such schemes as a quid pro quo for companies: the liability shield of Section 230 in exchange for compliance with certain practices, such as transparency, capacity investment, reporting to government, and independent civil rights audits. He can also consider the carve-out approach used in the FOSTA-SESTA context and apply a similar technique to other forms of offending content, like disinformation and hate speech, so that they are exempted from the liability shield, though FOSTA-SESTA itself has not been well received by experts.

The options are numerous, and experts with good ideas are willing and able to assist the administration in tackling the challenge. The key, in the end, will be to do what's necessary to ensure that no single channel can be abused to the extent of threatening to topple a democracy.
