Fortune
Rachyl Jones

Elon Musk and Linda Yaccarino face first global crisis on X with the Israel-Hamas war. All signs point to it being a dumpster fire

(Credit: Nathan Howard/Getty Images)

Elon Musk argues that disturbing content on X, the platform formerly known as Twitter, should be seen, even if it is “difficult.” As for the rampant spread of disinformation on the social media service—well, he says, people can decide for themselves what is true. 

With the Israel-Hamas war, Musk and X CEO Linda Yaccarino are navigating the first major outbreak of global violence and disinformation on X since Musk bought the platform last year. With no head of trust and safety and relaxed rules on policing content, the two executives face intense criticism over how X is handling the crisis. 

Photos and videos posted on the platform related to Israel depict mass shootings, kidnappings, and anti-Semitic hate speech, according to an analysis by Politico published on Monday. Disturbing media has also surfaced on other social media platforms, but Politico’s analysis found terrorist-related media to be more prevalent on X than on its rivals. 

Fortune observed similar content on Wednesday, including images and videos from Israel appearing to show hostages, individuals being beaten, and dead bodies. X allows users to change their settings to display or restrict graphic content, but when tested by Fortune on multiple accounts, toggling this feature was only available on the desktop version of X, not on the iOS mobile app. 

X has long been criticized for loosening its rules on what can be posted, but this is the first major world event under Musk’s ownership to put those policies to the test. Musk has already received a warning from the EU to clean up the service. In a letter posted online on Tuesday, EU Commissioner Thierry Breton told Musk about “potentially illegal content” regarding the Israel-Hamas war circulating on X “despite flags from relevant authorities.” Breton gave Musk 24 hours—until Wednesday afternoon—to ensure X’s content moderation systems are effective and to respond with details about the measures the company is taking. 

But Musk’s reply didn’t show much contrition. He responded that he’s not in favor of “back room deals” and asked Breton to share a list of the alleged violations on the platform. On Wednesday, Breton issued a similar warning to Mark Zuckerberg regarding Meta, owner of Facebook and Instagram.

Since Musk’s acquisition, X has been widely panned for its policing of content, including anti-Semitic posts. The Center for Countering Digital Hate, for example, published a report last month finding that X often failed to remove content that violated its guidelines even after receiving notice, including posts featuring Holocaust denial, extreme hate speech, racist caricatures, and conspiracy theories about Jews.

X didn’t respond to Fortune’s request for comment.  

In the year since buying the social media company, Musk drove away two heads of trust and safety—Yoel Roth in November and Ella Irwin in June. While the company is reportedly looking for a new trust and safety leader, the current efforts are largely being managed by Musk and Yaccarino—neither of whom has extensive experience in content moderation. 

On Monday, Yaccarino pulled out of speaking at the WSJ Tech Live conference next week, citing the Israel-Hamas war. “With the global crisis unfolding, Linda and her team must remain fully focused on X platform safety,” X said about the withdrawal. Two weeks prior, Yaccarino appeared at another conference and gave a much-derided performance.

X’s safety team has suffered additional personnel losses. The company has run through two heads of brand safety and ad quality, the executives responsible for ensuring ads don’t appear next to, say, terrorist content. The department has also experienced multiple rounds of layoffs in the past year, most recently last month, Business Insider reported. 

Meanwhile, pre-Musk Twitter had also maintained a Trust and Safety Council, a group of 100 volunteer civil and human rights organizations that advised the company on online safety. Musk dissolved the group shortly after taking over.

X’s content moderation plan

In recent days, after the attack by Hamas on Israel and under fire for the flood of violent posts, Musk and Yaccarino reiterated their loose approach to graphic content. “We know that it's sometimes incredibly difficult to see certain content, especially in moments like the one unfolding,” the company posted on Monday. “In these situations, X believes that, while difficult, it's in the public's interest to understand what's happening in real time.” 

Jewish groups, schools, and other organizations have recoiled at X’s policies since the outbreak of the war. Many have suggested that parents disable social media apps, including X, on their children’s phones over fears they will be exposed to troubling images and videos.

“Graphic and misleading information is flowing freely [on social media], augmenting the fears of our students,” said a private Jewish day school in New Jersey, the Wall Street Journal reported.

The extent to which X’s algorithm is promoting this content remains an open question. Ian Bremmer, a prominent political scientist who leads the geopolitical research organization Eurasia Group, said the level of algorithmically promoted disinformation on X is “unlike anything I’ve ever been exposed to in my career.” 

X has limited the spread of some posts, but critics say it’s not enough. The Anti-Defamation League, the nonprofit fighting anti-Semitism that Musk threatened to sue last month for allegedly defaming his company and driving away advertisers, criticized X on Monday for its response to a post from Ayatollah Khamenei, the supreme leader of Iran. Khamenei’s post, in which he expresses a desire for the eradication of Israel, had received 9.9 million views as of Wednesday.

“X acknowledged that the Supreme Leader of Iran posted something that violated its policies, and subsequently limited the visibility of the post but kept it accessible, arguing it was in the public interest,” the ADL wrote in a blog post. “We believe in this case that Ayatollah Khamenei’s post only serves to inflame the potential for further violence and that there is no legitimate reason to keep it accessible.” 

Musk's company has taken some action by removing newly created Hamas-affiliated accounts and “tens of thousands of posts,” X said. It is also enrolling new Community Notes moderators, or users who can add context and fact-checks to existing posts. On the question of disinformation, Musk posted that fact-checkers aren’t always trustworthy, adding, “Let the public hear exactly what this disinformation consists of and decide for themselves.” 
