The Guardian - UK
Technology
Alex Hern

TechScape: I read Elon Musk’s ‘Twitter Files’ so you don’t have to

Elon Musk is pushing “the Twitter Files” to re-litigate the company’s role in the culture wars of years past. Photograph: Dado Ruvić/Reuters

The threat model of a social network is complex. Your security team has to deal with conventional hacking attacks, as hostile actors probe for technical errors in your apps and servers that they can use to extract valuable private data, inject malicious code, or simply wreak havoc for fun.

They also have to deal with people using the site’s own capabilities in destructive ways, from simple-minded spam bots through to nation states carrying out “coordinated inauthentic behaviour”. They have to protect users from account takeovers due to password theft, and they have to do it all while navigating the minefield that is content moderation.

And then the site gets bought on a whim by a capricious billionaire and the threat comes from inside the house.

What are the Twitter Files?

Elon Musk has been pushing “the Twitter Files”, a series of Twitter threads from friendly journalists using material provided by the company to re-litigate the company’s role in the culture wars of years past.

Typically, big news stories claiming to be the [something] “files” are based on enormous leaks, providing a hitherto impossible look at the inner workings of the organisation under the microscope. It is less typical for an enormous leak to have been ordered by the chief executive of the company, and executed by their subordinates openly working with the journalists reporting on the story. But little about Elon Musk’s Twitter is typical.

What of the files themselves? A week and a half in, there have been four releases, from three writers: Matt Taibbi, Bari Weiss and Michael Shellenberger, all broadly part of a wave of “post-liberal” Substack newsletter writers. It’s unclear how they were selected to receive the documents.

One requirement, Taibbi has said, was that everything they published be shared on Twitter itself. Beyond that, “we’ve been encouraged to look not just at historical Twitter, but the current iteration as well. I was told flat-out I could write anything I wanted, including anything about the current company and its new chief, Elon Musk.” At the same time, the reporting was done inside Twitter’s offices, with the assistance of Twitter staff.

And all three have focused on the areas that one might guess, given their previous statements about the social network. Taibbi’s first thread covered Twitter’s efforts to respond to the New York Post’s story about Hunter Biden’s laptop; his second thread, as well as Shellenberger’s, looked at the events around the suspension of Donald Trump and the January 6 attack on the US Capitol. Weiss, meanwhile, reported on what she described as “Twitter’s secret blacklists”.

Freedom of speech v ‘freedom of reach’

The bulk of the material so far, then, has been focused on what are effectively two extremely high-stakes individual moderation decisions, one widely regarded as an error in hindsight (hiding stories about Biden’s laptop), and the other exactly as divisive as anyone would have guessed beforehand (Trump’s ban).

The documents shared by Taibbi and Shellenberger largely support that reading. Shorn of the conspiratorial framing of the two writers, the excerpts of internal emails and chat messages that they have posted appear to show staff tackling the enormous burden that has been placed on them with a rough mixture of panic and resolve.

In the days following the publication of the Post’s story on Hunter Biden, that wasn’t enough. The executives quoted by Taibbi are clearly aware of the prospect of a repeat of 2016’s WikiLeaks dump of hacked Democratic Party documents, and move quickly to discuss enforcing a policy against sharing hacked materials. But the materials weren’t hacked, and while the chain of custody of Biden’s laptop remains murky, it rapidly became clear that the policy was wrongly applied. Twitter’s most senior staff were too slow to lift the ban once that became clear, and, in Taibbi’s words, “erred on the side of … continuing to err”.

Just two months later, the same group was convened to discuss Donald Trump. The president had used social media to egg on a protest in Washington DC that turned violent as his supporters decided to storm the US Capitol building. In the days since the election, Twitter had been aggressively applying its “newsworthiness” policy, slapping a label on posts that would be deleted were it not for the prominence of their author, but by 7 January, it was clear that was unsatisfactory in the case of Trump.

A series of Slack posts shared by Shellenberger show the team, led by former trust and safety head Yoel Roth, desperately trying to invent policy on the fly (all staff except Roth are anonymised in the posts Shellenberger shared). Trump had been given special treatment, left on the site for months after a typical user would have seen his account deleted: at what point does that approach cease to be viable? The answer was clearly January 6. But if you give someone special treatment without admitting as much, it just makes it all the more difficult to take it away.

Weiss’s part of the saga is different. Rather than focusing on the narrow world of US electoral politics, her thread takes a more systemic look at Twitter’s moderation practices. Working with Ella Irwin, a trust and safety staffer at Twitter, Weiss published screenshots of the moderation pages for some of the site’s most notorious users.

Jay Bhattacharya, a Covid sceptic, was placed on a “trends blacklist”; Dan Bongino, a rightwing media personality, on a “search blacklist”; Charlie Kirk, whose decision to attend a protest wearing a diaper caused such secondhand embarrassment it effectively destroyed the Republican youth movement he founded, was set to “do not amplify”.

The tags are various examples of what Twitter calls “visibility filtering”, a form of moderation that is intended to limit “freedom of reach” without affecting “freedom of speech”. Users who are filtered can post what they like, but their posts are partly or wholly excluded from the site’s algorithmic amplification. Some won’t show up in search results or trends; others won’t be recommended to users to follow. The most aggressive form of visibility filtering, which didn’t affect any of the prominent accounts Weiss highlighted, means that new posts won’t even show up for followers, and are only visible to people who navigate directly to the poster’s profile.
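To make that distinction concrete, here is a minimal, purely hypothetical sketch of how flags like these could gate different distribution surfaces. The flag names, surface names and the can_show function are my own illustrative assumptions, not Twitter’s actual schema or code.

# Hypothetical sketch of "visibility filtering": flags limit where posts
# are distributed, without ever blocking the act of posting itself.
from enum import Flag, auto

class Visibility(Flag):
    NONE = 0
    TRENDS_BLACKLIST = auto()   # excluded from trending topics
    SEARCH_BLACKLIST = auto()   # excluded from search results
    DO_NOT_AMPLIFY = auto()     # excluded from recommendations
    PROFILE_ONLY = auto()       # posts visible only on the author's profile

def can_show(flags: Visibility, surface: str) -> bool:
    """Return True if an account's posts may appear on a given surface."""
    if surface == "profile":
        return True  # posting is never blocked: "freedom of speech"
    if Visibility.PROFILE_ONLY in flags:
        return False  # the most aggressive filter hides posts everywhere else
    if surface == "trends":
        return Visibility.TRENDS_BLACKLIST not in flags
    if surface == "search":
        return Visibility.SEARCH_BLACKLIST not in flags
    if surface in ("recommendations", "who_to_follow"):
        return Visibility.DO_NOT_AMPLIFY not in flags
    return True  # e.g. the home timelines of existing followers

flags = Visibility.TRENDS_BLACKLIST | Visibility.DO_NOT_AMPLIFY
print(can_show(flags, "trends"))         # False: kept out of trending
print(can_show(flags, "home_timeline"))  # True: followers still see posts

The point of the design is in the last check: each flag removes one avenue of amplification, while the account keeps posting as normal.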

Not every user Weiss examined was hurt by the moderation. One, LibsofTikTok, was singled out for special treatment, with a notice warning moderators not to take action on the account without consulting the site’s senior team. Despite that, it had still received two strikes for abuse, and been placed on the trends blacklist.

What did we learn?

I think it’s important to distinguish between the Twitter files and the “Twitter Files”. The latter, a big, hyped, coordinated publication, has so far failed to achieve its apparent goals. The throughline of the whole exercise is that Twitter is a hotbed of leftwing bias, explicitly aligned with the US Democratic party, and taking unwarranted action to censor speech for politically motivated purposes.

The posts themselves show little of the sort. Some, like Weiss’s, don’t even attempt to: individual examples of rightwing users being on the end of light-touch moderation say little about overall bias. Were leftwing users also given visibility filters? Weiss doesn’t say. Were rightwing users filtered more often? Weiss doesn’t say.

Others show almost the opposite. There were plenty of easy reasons to remove Donald Trump from the social network in January 2021, but the posts seem to reveal Twitter staff methodically working through their actual rulebook, trying to understand how to react to unprecedented events in a way that doesn’t simply throw precedent out the window.

Like so much to do with American politics, the Files’ case falls flat if you accept that the American right is an outlier. If you have rules against election misinformation and only one party engages in a systematic campaign of election misinformation, it’s not an unreasonable outcome for one party to be the focus of moderation efforts.

But the lowercase files, the documents themselves, are an interesting historical artefact nonetheless. They show that, in periods of global crisis, the people making the decisions inside Twitter were acutely aware of, and uncomfortable with, the power they held. Even as a set of cherrypicked examples, they show that efforts to create and apply a consistent rulebook were driven as much by a desire to avoid criticism as by a belief that doing so was important for protecting users. They give us an insight into the sorts of discussions that were likely happening at Facebook and YouTube at the same time.

And they show us never to trust Elon Musk.

Insider threat

Musk has been promoting the series as an exercise in “transparency”, and, if you’re Weiss, Taibbi or Shellenberger, that’s what it is. But it’s the sort of transparency that companies get when their database is hacked and sold on the darknet. In this case, the database cost $44bn, and came with control of the site to boot.

Marcus Hutchins, the ethical hacker who stopped the WannaCry ransomware infection, posted on Mastodon about the docs. “As a security professional, not much scares me,” he said. “I’ve seen my personal data stolen numerous times, watched nationstate hackers spray zerodays across the internet, and I’m a shameless user of TikTok.

“But now you have someone sitting on top of the personal data of several billion users, someone who has a long track record of vindictive harassment, someone who has the ear of the far right, and someone who has just shown us his willingness to weaponise internal company data to score political points. That scares me a lot.”

The Shellenberger posts named only one person: Twitter’s former head of trust and safety Yoel Roth. When Musk bought the company, Roth was initially welcoming: one of the few staffers prepared to advocate for his new boss publicly, and a much-needed source of internal expertise after the immediate sacking of Vijaya Gadde, the longtime head of Twitter’s platform safety efforts.

But the relationship clearly soured. On 10 November, Roth quit, resurfacing a week later to write a New York Times opinion piece arguing that “even as he criticizes the capriciousness of platform policies, [Musk] perpetuates the same lack of legitimacy through his impulsive changes and tweet-length pronouncements about Twitter’s rules.”

In doing so, he seems to have become a bête noire for his brief boss, and so for the wider rightwing media ecosystem that Musk now conducts. The day before Shellenberger shared his part of the Twitter Files, Musk posted an out-of-context excerpt of Roth’s PhD thesis, which looked at whether services like Grindr were causing harm by forcing teenagers to pretend to be adults in order to access dating sites.

To an audience of hundreds of millions, Musk accused Roth of being “in favour of children being able to access adult internet services”, and indirectly accused him of personally deciding to make Twitter a safe space for paedophiles.

The accusation is nonsense, but the charge, in an atmosphere of rightwing panic over “groomers” in the media, is life-changing. On Monday, Roth and his partner were forced to flee their home after a sharp increase in credible threats against him. For insufficient loyalty, he will have to spend the rest of his life checking over his shoulder. What will happen to the next person who annoys Elon?

If you want to read the complete version of the newsletter, please subscribe to receive TechScape in your inbox every Tuesday.
