Just six months ago, OpenAI was a model of unity as hundreds of employees and managers threatened to quit the startup en masse unless cofounder Sam Altman was rehired as CEO. The gambit led to Altman’s return just days after he was fired by OpenAI’s board—a dramatic display of the power of an organization marching in lockstep, aligned around a common mission.
Today, that unity looks like a distant memory. A series of scandals has tarnished OpenAI's image, and several prominent insiders have quit, accusing the company of abandoning its principles.
The chaotic recent events reveal a company that suddenly appears to be on the brink of civil war as it grapples with a crisis of leadership and direction. And as that crisis continues to unfold, the company that for years has set the pace of generative AI is now engulfed in uncertainty about its own future.
“There is something endemic inside of OpenAI that is coming to the surface through one significant issue after another,” said Daniel Newman, principal analyst at the Futurum Group. “There’s this culture that has risen up out of super-hyper-growth, out of being the most important company in one of the biggest trends in history, and now there are little signs of stress fractures.”
The future of OpenAI is being closely watched not just because of the company's high-profile stature as the creator of ChatGPT. OpenAI's technology is at the heart of Microsoft's AI products and is the foundation on which countless startups have built their AI strategies. Disruptions, and even mere distractions, within OpenAI could have big ripple effects.
Some of OpenAI's current woes may be the result of a clash of Silicon Valley cultures. Since the end of 2022, when OpenAI launched ChatGPT and became a household name, hundreds of new employees have joined the San Francisco startup. The new hires often come from product, sales, and marketing backgrounds, brought in to help OpenAI ramp up its business. That's a stark contrast to the earlier employees, who hail from the AI research and safety communities and who joined the company when it was a noncommercial, open-source research lab focused squarely on a mission of reaching what it defines as artificial general intelligence (AGI).
Tech historians may well place the blame at the feet of the company's high-profile leader. Altman reclaimed his OpenAI throne last November, but in just the past couple of weeks he has hardly been handled with kid gloves. Instead, he has come under scrutiny for a laundry list of PR problems, including accusations from actor Scarlett Johansson that GPT-4o's "Sky" voice sounded like her (and Her); leaks regarding OpenAI's aggressive tactics against former employees; and news that the company failed to provide its Superalignment team with promised compute.
Altman's direct involvement in the recent string of departures from OpenAI is not clear. But given the nature of the exits, particularly those involving high-ranking members of the AI safety, policy, and governance teams, it's difficult to imagine that the top-down tone set by the CEO is not connected in some way. Among the recent departures is OpenAI chief scientist and former board member Ilya Sutskever, who had led the board's push last year to fire Altman for not being "consistently candid." Then there was the resignation of Jan Leike, a longtime OpenAI researcher who, with Sutskever, co-led a team called Superalignment that focused on ensuring that future superhuman AI systems could be safely controlled. On his way out, Leike declared that safety culture and processes at OpenAI have taken a back seat to "shiny products."
And in announcing her departure on Wednesday, OpenAI policy researcher Gretchen Krueger suggested the company was sowing divisions among the various OpenAI teams concerned with ethics, safety, and governance. "One of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power. I care deeply about preventing this," she wrote.
More shoes to drop?
Is there anyone left to push for safety at OpenAI in the wake of all that turnover? In her exit note, Krueger made clear that many people at OpenAI are still working on policy and ethics issues, including democratic inputs, expanding access, development of the preparedness framework, and confidence-building measures. And many dozens of OpenAI employees are still working on AI safety. The question is whether they share the concerns of their departed colleagues, and if so, whether they feel empowered to speak out.
For Altman, who has spent the past year and a half as the biggest star in Silicon Valley's generative AI show, there are other important constituencies to consider as well. Public opinion is certain to play a big role in the future of AI and of industry leaders like OpenAI. Whether by creating a negative impression among consumers or by drawing the scrutiny of regulators, OpenAI has a lot riding on its reputation. And then there's OpenAI's relationship with its biggest investor, Microsoft, and with Microsoft CEO Satya Nadella. The two CEOs have been described as frenemies, and Microsoft has taken steps to lessen its dependence on OpenAI by investing in other AI startups, including Mistral and G42.
OpenAI has contended with internal disagreements in the past, of course. Anthropic, now one of its main rivals, was founded by former OpenAI employees who struck out on their own over concerns about safety.
Newman, the Futurum Group analyst, notes that Altman has been battle-tested. “He’s still seen as probably the steadiest hand to run the ship,” he explained, adding that some of the issues plaguing OpenAI are common in any hyper-growth startup. Still, with Microsoft showing signs of hedging its relationship with OpenAI, Newman says that the startup’s leading role in the industry is hardly assured: “As an outsider, you are starting to watch this, and going, ‘Could this thing implode?’”