The Atlantic
Technology
Rumman Chowdhury

I Watched Elon Musk Kill Twitter’s Culture From the Inside

Everyone has an opinion about Elon Musk’s takeover of Twitter. I lived it. I saw firsthand the harms that can flow from unchecked power in tech. But it’s not too late to turn things around.

I joined Twitter in 2021 from Parity AI, a company I founded to identify and fix biases in algorithms used in a range of industries, including banking, education, and pharmaceuticals. It was hard to leave my company behind, but I believed in the mission: Twitter offered an opportunity to improve how millions of people around the world are seen and heard. I would lead the company’s efforts to develop more ethical and transparent approaches to artificial intelligence as the engineering director of the Machine Learning Ethics, Transparency, and Accountability (META) team.

In retrospect, it’s notable that the team existed at all. It was focused on community, public engagement, and accountability. We pushed the company to be better, providing ways for our leaders to prioritize more than revenue. Unsurprisingly, we were wiped out when Musk arrived.

He might not have seen the value in the type of work that META did. Take our investigation into Twitter’s automated image-crop feature. The tool was designed to automatically identify the most relevant subjects in an image when only a portion is visible in a user’s feed. If you posted a group photograph of your friends at the lake, it would zero in on faces rather than feet or shrubbery. It was a simple premise, but flawed: Users noticed that the tool seemed to favor white people over people of color in its crops. We decided to conduct a full audit, and there was indeed a small but statistically significant bias. When Twitter used AI to determine which portion of a large image to show on a user’s feed, it had a slight tendency to favor white people (and, additionally, to favor women). Our solution was straightforward: Image cropping wasn’t a function that needed to be automated, so Twitter disabled the algorithm.
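For readers curious what such an audit looks like in practice, the core of it is ordinary statistics rather than anything exotic. The sketch below is a simplified, hypothetical illustration, not the META team's actual code or data: run the cropper on paired images that each contain one face from each demographic group, count how often the crop centers on one group's face, and test whether that rate differs meaningfully from the 50 percent you would expect if the algorithm were indifferent.

```python
import math
from typing import Iterable, Tuple


def exact_binomial_p(successes: int, trials: int, null_rate: float = 0.5) -> float:
    """Two-sided exact binomial test: probability of an outcome at least as
    extreme as the observed one, assuming crops favor each group equally."""
    def pmf(k: int) -> float:
        return math.comb(trials, k) * null_rate ** k * (1 - null_rate) ** (trials - k)

    observed = pmf(successes)
    # Sum the probability of every outcome no more likely than the observed one.
    return min(1.0, sum(pmf(k) for k in range(trials + 1) if pmf(k) <= observed + 1e-12))


def crop_bias_test(crop_choices: Iterable[str], group: str = "group_a") -> Tuple[float, float]:
    """Return (rate at which crops centered on `group`, two-sided p-value)."""
    choices = list(crop_choices)
    favored = sum(1 for c in choices if c == group)
    return favored / len(choices), exact_binomial_p(favored, len(choices))


if __name__ == "__main__":
    # Hypothetical results: in 400 paired photos, the cropper centered on the
    # group_a face 232 times (58 percent) instead of the roughly 200 expected by chance.
    simulated = ["group_a"] * 232 + ["group_b"] * 168
    rate, p_value = crop_bias_test(simulated)
    print(f"cropper favored group_a in {rate:.1%} of images, p = {p_value:.4f}")
```

In the real audit the effect was small, but with enough images even a small skew becomes statistically unmistakable, which is exactly what a test like this is designed to surface.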

I felt good about joining Twitter to help protect users, particularly people who already face broader discrimination, from algorithmic harms. But months into Musk’s takeover—a new era defined by feverish cost-cutting, lax content moderation, the abandonment of important features such as block lists, and a proliferation of technical problems that have meant the site couldn’t even stay online for the entire Super Bowl—it seems no one is keeping watch. A year and a half after our audit, Musk laid off employees dedicated to protecting users. (Many employees, including me, are pursuing arbitration in response.) He has installed a new head of trust and safety, Ella Irwin, who has a reputation for appeasing him. I worry that by ignoring the nuanced issue of algorithmic oversight—to such an extent that Musk reportedly demanded an overhaul of Twitter’s systems to display his tweets above all others—Twitter will perpetuate and augment issues of real-world biases, misinformation, and disinformation, and contribute to a volatile global political and social climate.

Irwin did not respond to a series of questions about layoffs, algorithmic oversight, and content moderation. A request to the company’s press email also went unanswered.

Granted, Twitter has never been perfect. Jack Dorsey’s distracted leadership across multiple companies kept him from defining a clear strategic direction for the platform. His short-tenured successor, Parag Agrawal, was well intentioned but ineffectual. Constant chaos and endless structuring and restructuring were ongoing internal jokes. Competing imperatives sometimes manifested in disagreements between those of us charged with protecting users and the team leading algorithmic personalization. Our mandate was to seek outcomes that kept people safe. Theirs was to drive up engagement and therefore revenue. The big takeaway: Ethics don’t always scale with short-term engagement.

A mentor once told me that my role was to be a truth teller. Sometimes that meant confronting leadership with uncomfortable realities. At Twitter, it meant pointing to revenue-enhancing methods (such as increased personalization) that would lead to ideological filter bubbles, open up avenues for algorithmic bot manipulation, or inadvertently popularize misinformation. We worked on ways to improve our toxic-speech-identification algorithms so they would not discriminate against African-American Vernacular English or against forms of reclaimed speech. All of this depended on rank-and-file employees. Messy as it was, Twitter sometimes seemed to function mostly on goodwill and the dedication of its staff. But it functioned.

Those days are over. From the announcement of Musk’s bid to the day he walked into the office holding a sink, I watched, horrified, as he slowly killed Twitter’s culture. Debate and constructive dissent were stifled on Slack, leaders accepted their fate or quietly resigned, and Twitter gradually shifted from being a company that cared about the people on the platform to one that cares about people only as monetizable units. The few days I spent at Musk’s Twitter could best be described as a Lord of the Flies–like test of character as existing leadership crumbled, Musk’s cronies moved in, and his haphazard management—if it could be called that—instilled a sense of fear and confusion.

Unfortunately, Musk cannot simply be ignored. He has purchased a globally influential and politically powerful seat. We certainly don’t need to speculate on his thoughts about algorithmic ethics. He reportedly fired a top engineer earlier this month for suggesting that his engagement was waning because people were losing interest in him, rather than because of some kind of algorithmic interference. (Musk initially responded to the reporting about how his tweets are prioritized by posting an off-color meme, and today called the coverage “false.”) And his track record is far from inclusive: He has embraced far-right talking points, complained about the “woke mind virus,” and explicitly thrown in his lot with Donald Trump and Ye (formerly Kanye West).

Devaluing work on algorithmic biases could have disastrous consequences, especially because of how perniciously invisible yet pervasive these biases can become. As the arbiters of the so-called digital town square, algorithmic systems play a significant role in democratic discourse. In 2021, my team published a study showing that Twitter’s content-recommendation system amplified right-leaning posts in Canada, France, Japan, Spain, the United Kingdom, and the United States. Our analysis covered data from the period just before the 2020 U.S. presidential election, a moment when social media was a crucial touch point of political information for millions. Today, right-wing hate speech flows on Twitter in places such as India and Brazil, where radicalized Jair Bolsonaro supporters staged a January 6–style coup attempt.

Musk’s Twitter is simply a further manifestation of how self-regulation by tech companies will never work, and it highlights the need for genuine oversight. We must equip a broad range of people with the tools to pressure companies into acknowledging and addressing uncomfortable truths about the AI they’re building. Things have to change.

My experience at Twitter left me with a clear sense of what can help. AI is often thought of as a black box or some otherworldly force, but it is code, like much else in tech. People can review it and change it. My team did it at Twitter for systems that we didn’t create; others could too, if they were allowed. The Algorithmic Accountability Act, the Platform Accountability and Transparency Act, and New York City’s Local Law 144—as well as the European Union’s Digital Services and AI Acts—all demonstrate how legislation could create a pathway for external parties to access source code and data to ensure compliance with antibias requirements. Companies would have to statistically prove that their algorithms are not harmful, in some cases granting outside reviewers an unprecedented level of access to conduct source-code audits, similar to the work my team was doing at Twitter.
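What would “statistically prove” look like? One common starting point, and only a starting point, is a group-fairness metric such as demographic parity: compare how often a system produces its favorable outcome for different groups, and check that the gap stays within an agreed-upon bound. The sketch below is a hypothetical illustration of that calculation; the group labels, sample counts, and any threshold are my own stand-ins, not anything prescribed by the laws mentioned above.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple


def demographic_parity(records: Iterable[Tuple[str, int]]) -> Tuple[Dict[str, float], float]:
    """Rate of favorable decisions per group, plus the largest gap between groups.

    Each record is (group label, decision), where decision 1 is the favorable
    outcome -- say, a post being recommended or an applicant being approved."""
    totals: Dict[str, int] = defaultdict(int)
    favorable: Dict[str, int] = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        favorable[group] += decision
    rates = {g: favorable[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    # Hypothetical audit sample an outside reviewer might pull from a model's logs.
    sample = ([("group_a", 1)] * 480 + [("group_a", 0)] * 520
              + [("group_b", 1)] * 395 + [("group_b", 0)] * 605)
    rates, gap = demographic_parity(sample)
    print(rates, f"gap = {gap:.3f}")  # here: group_a 48.0%, group_b 39.5%, gap = 0.085
```

The hard part is not the arithmetic. It is deciding which metric and which threshold count as “not harmful,” and that is a policy question as much as a technical one.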

After my team’s audit of the image-crop feature was published, Twitter recognized the need for constructive public feedback, so we hosted our first algorithmic-bias bounty. We made our code available and let outside data scientists dig in—they could earn cash for identifying biases that we’d missed. We had unique and creative responses from around the world and inspired similar programs at other organizations, including Stanford University.

Public bias bounties could be a standard part of algorithmic risk-assessment programs in companies. The National Institute of Standards and Technology, the U.S.-government entity that develops algorithmic-risk standards, has included validation exercises, such as bounties, as a part of its recommended algorithmic-ethics program in its latest AI Risk Management Framework. Bounty programs can be an informative way to incorporate structured public feedback into real-time algorithmic monitoring.

If we are to address radicalization at the speed at which technology moves, our approaches need to evolve as well. We need well-staffed and well-resourced teams working inside tech companies to ensure that algorithmic harms do not occur, but we also need legal protections and investment in external auditing methods. Tech companies will not police themselves, especially not with people like Musk in charge. We cannot assume—nor should we ever have assumed—that those in power aren’t also part of the problem.
