Fortune
Jeremy Kahn, Sharon Goldman, Kali Hays

Mira Murati’s surprise exit from OpenAI caps a chaotic year of staff infighting, rushed product launches, and executive upheaval

(Credit: Stefano Guidi/Getty Images)

When OpenAI debuted its latest AI model, GPT-4o, in a slick live webcast this past May, it was Mira Murati, the company’s chief technology officer, not its better-known CEO, Sam Altman, who emceed the event.

Murati, dressed in a gray collared T-shirt and jeans, helped show off the software’s ability to interact through voice and image prompts, as well as its skill at simultaneous translation, math, and coding. It was an impressive live demo, timed to steal the thunder from OpenAI’s rival Google, which was set to unveil new features for its own AI chatbot, Gemini, later that week.

But behind the scenes, things were far from smooth, according to sources familiar with the company’s internal workings. Relentless pressure to introduce products such as GPT-4o, and a newer model called o1, which debuted last month, was straining the ability of OpenAI’s research and safety teams to keep pace. There was friction between teams dedicated to ensuring OpenAI’s products did not pose undue risks, such as the ability to assist with producing biological weapons, and commercial teams dedicated to getting new products into the market and making money.

Many OpenAI staff thought that o1 was not ready to be unveiled, but Altman pushed to launch it anyway to burnish OpenAI’s reputation as a leader in AI. This is the first time details of this debate over o1 have been reported publicly.

Murati was often stuck in the middle of arguments between the research and technical teams she oversaw and the commercial teams that were eager to push products out. She was also having to deal with complaints about the leadership style of OpenAI cofounder and president Greg Brockman, and lingering resentment over the role she played in the brief ouster of Altman in a boardroom coup in November 2023. 

These pressures seem to have taken a toll on Murati, who on Wednesday announced she was leaving OpenAI after six years, setting off shock waves across the tech industry. She had played such a critical role in OpenAI’s meteoric rise that many speculated about the potential damage of her exit to the company’s future. A spokesperson for Murati disputed this account, saying burnout or issues related to management struggles did not factor in her decision to leave.

The same day Murati said she would be going, two other senior staffers, Bob McGrew, chief research officer, and Barret Zoph, vice president of research, who had appeared alongside Murati in the GPT-4o launch webcast, also announced they were stepping down. From the outside, it looked as though the company was once again at risk of imploding at a time when it wanted to sell investors on the idea that it was maturing into a more stable, if still fast-growing, organization.

Altman, in a note to staff that he later posted to social media platform X, tried to quell the firestorm by saying the three had reached their decisions to leave “independently and amicably.” The company had decided to announce the departures on the same day, he said, to allow for “a smooth handover to the next generation of leadership.” He acknowledged that such transitions are not normally “so abrupt,” but noted that “we are not a normal company.”

The three senior resignations add to a growing list of long-serving researchers and staff members who have left OpenAI in the past six months. Some of them have openly expressed concern at how the company’s culture and priorities have changed amid its rapid growth from a small nonprofit lab dedicated to ensuring superpowerful AI is developed for “the benefit of all humanity,” to a for-profit company expanding at a breakneck pace. 

In just nine years since its founding, OpenAI has pushed to the forefront of the AI revolution, striking fear into Big Tech companies like Google, Facebook parent Meta, and Amazon that are racing to dominate the latest technology. But OpenAI has also been convulsed by infighting and executive upheaval that threaten to slow the company just as its rivals gain momentum.

The latest chaos comes as OpenAI pushes to achieve three critically important business milestones. The company, which collects billions of dollars annually from the paid versions of its AI models, is trying to reduce the reported billions of dollars it’s also losing due to high staff costs and the cost of the computing power needed to train and run its AI products.

At the same time, OpenAI is trying to complete a new venture funding round, raising as much as $7 billion in a deal that would value the company at $150 billion, making it among the most highly valued tech startups in history. The internal drama may frighten away investors, potentially leading to a lower valuation.

In part to make itself more attractive to investors, Altman has told staff that the company plans to revamp its current convoluted corporate structure, which has a nonprofit foundation controlling its for-profit arm. In the new arrangement, the for-profit would no longer be controlled by the nonprofit.

The tensions among OpenAI’s staff flared most publicly in November 2023, when OpenAI’s nonprofit board abruptly fired Altman, saying he had “not been completely candid” with its members. Altman’s firing precipitated five days of spectacle, ending with his rehiring and the resignation of several board members. An outside law firm later engaged by a newly constituted board to investigate the firing concluded that there had been a fundamental breakdown in trust between the board and Altman, but that none of his actions mandated his removal. OpenAI staffers internally refer to the episode as “the Blip” and have sought to assure the outside world that the drama is behind them.

But Murati’s resignation, alongside those of McGrew and Zoph, indicates that the tensions that have roiled OpenAI may be far from fully quieted.

The reporting for this article is based on interviews with current and former OpenAI employees and people familiar with its operations. They all requested anonymity for fear of violating nondisclosure clauses in employment or exit agreements.

An OpenAI spokesperson said the company disagreed with many of the characterizations in this article but did “recognize that evolving from an unknown research lab into a global company that delivers advanced AI research to hundreds of millions of people in just two years requires growth and adaptation.”

Rushed development and safety testing

Amid intense competition with rival AI companies and pressure to prove its technological leadership to potential investors, OpenAI has rushed the testing and rollout of its latest and most powerful AI models, according to sources familiar with the process behind both launches.

The GPT-4o launch was “unusually chaotic even by OpenAI standards,” one source said, adding that “everyone hated the process.” Teams were given only nine days to conduct safety assessments of the model, forcing them to work 20-hour-plus days to meet the launch date, according to another source familiar with the rollout. “It was definitely rushed,” this source said.

Safety teams pleaded with Murati for more time, the source said. In the past Murati had sometimes intervened, over the objections of OpenAI’s commercial teams, to delay model debuts when safety teams said they needed more time for testing. But in this case, Murati said the timing could not be changed.

That deadline was largely dictated by another company: Google. OpenAI, the sources said, knew Google was planning to introduce a host of new AI features, and tease a powerful new AI model with similar capabilities to GPT-4o, at its Google I/O developer conference on May 14. OpenAI desperately wanted to steal attention away from those announcements and derail any narrative that Google was pulling ahead in the race to develop ever more capable AI models. So it scheduled its live debut of GPT-4o for May 13.

In the end, OpenAI’s safety testing was not entirely concluded or verified by the time of the launch, but the safety team’s preliminary assessment was that GPT-4o was safe to launch. Later, however, the source said, a safety researcher discovered that GPT-4o seemed to exceed OpenAI’s own safety threshold for “persuasion”—how effective the model is at convincing people to change their views or perform some task. This is important because persuasive models could be used for nefarious purposes, such as convincing someone to believe in a conspiracy theory or vote a certain way.

The company has said it will not release any model that presents a “high” or “critical” risk on any safety metric. In this case, OpenAI had initially said GPT-4o presented a “medium” risk. Now the researcher told others internally that GPT-4o might actually present a “high” risk on the persuasion metric, according to the source.

 OpenAI denies this characterization and says its release timelines are determined only by its own internal safety and readiness processes. An OpenAI spokesperson said that the company “followed a deliberate and empirical safety process” for the release of GPT-4o and that the model “was determined safe to deploy” with a “medium” rating for persuasion. The company said higher ratings for the model found after it rolled out were the result of a methodological flaw in those tests and did not reflect GPT-4o’s actual risk. It noted that since the model’s debut in May, “it has been safely used by hundreds of millions of people,” giving the company “confidence in its risk assessment.”

Aspects of this timeline were reported previously by the Wall Street Journal, but were confirmed independently to Fortune by multiple sources.

In addition, Fortune has learned that similar issues surrounded the debut of OpenAI’s latest model, o1, which was unveiled on September 12, generating considerable buzz. The model is more capable than prior AI models at tasks that require reasoning, logic, and math, and is seen by some as another significant step toward AGI, or artificial general intelligence. That milestone, long the Holy Grail of computer scientists, refers to an AI system matching typical human cognitive abilities. AGI is OpenAI’s stated goal.

But the launch of the model—which so far is only publicly available in a less capable “preview” version as well as a “mini” version tailored to math and coding—caused friction among the company’s senior executives, one source familiar with the matter said. Some OpenAI staffers felt the model was both too inconsistent in its performance and not safe enough to release, the source said.

In addition, Wojciech Zaremba, an OpenAI researcher and cofounder, alluded to clashes among staff earlier in the development of o1 in a post on X responding to the departures of Murati, McGrew, and Zoph. He acknowledged that he and Zoph had “a fierce conflict about compute for what later became o1,” but he did not elaborate on the specifics of the argument.

After o1 was trained, Altman pushed for the model to be released as a product almost immediately. The CEO insisted on a speedy rollout, the source who spoke to Fortune said, largely because he was eager to prove to potential investors in the company’s latest funding round that OpenAI remains at the forefront of AI development, with AI models more capable than those of its competitors.

Many teams who reported to Murati were convinced o1 was not ready to be released and had not really been shaped into a product. Their objections, however, were overruled, the source said.

Even after the launch, a team known as “post-training”—which takes a model once it has been trained and finds ways to fine-tune its responses to be both more helpful and safer—was continuing to refine o1, the source said. Normally, this step is completed before a model’s premiere. Zoph, who announced he was leaving on the same day as Murati, had been in charge of the post-training team.

In response to Fortune’s questions about o1, OpenAI provided a statement from Zico Kolter, a Carnegie Mellon University computer scientist, and Paul Nakasone, a former director of the U.S. National Security Agency, who serve on the Safety and Security Oversight Committee of OpenAI’s board. Kolter and Nakasone said the o1 model release “demonstrated rigorous evaluations and safety mitigations the company implements at every step of model development” and that o1 had been delivered safely.

A spokesperson for Murati denied that stress related to the headlong rush to get models released or management headaches played any role in her decision to resign. “She worked with her teams closely and, coming off big wins, felt it was the right time to step away,” the spokesperson said.

Lingering questions about Murati’s loyalty

Conflicts over the release of AI models may not have been the only source of tension between Altman and Murati. Sources familiar with the company said there was speculation among OpenAI employees that Altman viewed her as disloyal because of her role in his brief ouster during “the Blip.”

Among OpenAI staff, suspicions about Murati’s loyalty surfaced almost immediately after Altman’s firing in November, one source said. For one thing, the old board had appointed Murati as interim CEO. Then, at an all-hands meeting the day of Altman’s dismissal, shocked employees quizzed senior staff on when they first learned of the board’s decision. All of the executives said they had only found out when the decision was announced—except Murati. She said the board had informed her 12 hours before the decision was announced. Stunned, some Altman loyalists thought she should have warned Altman, who was surprised by his ouster.

At the meeting, Murati told employees she would work with the rest of the company’s executives to push for Altman’s reinstatement. Days later, the board replaced her as interim CEO with veteran tech executive Emmett Shear. Murati resumed her role as CTO when Altman returned as CEO, five days after his firing.

Murati may have played a larger role in Altman’s brief ouster, however, than she has let on. The New York Times reported in March that Murati had written a memo to Altman expressing concerns about his leadership style and that she later brought these concerns to OpenAI’s board. It also reported that her conversations with the board had factored into their decision to fire Altman.

Murati, through a lawyer, issued a statement to the newspaper denying that she had reached out to OpenAI’s board in an effort to get Altman fired. But she later told staff that when individual board members had contacted her, she provided “feedback” about Altman, adding that he was already aware of her views since “I have not been shy about sharing feedback with him directly.” She also said at the time that she and Altman “had a strong and productive partnership.”

Despite her assurances, within OpenAI some questioned whether Murati’s relationship with Altman had been irrevocably damaged, sources said. Others speculated that investors who are pouring billions into OpenAI did not like the idea of a senior executive whose loyalty to Altman was less than total.

Brockman ruffled feathers, Murati had to clean up the mess

Over the past year, Murati also had to deal with clashes between Brockman, the OpenAI cofounder and president, and some of the teams that reported to her. Brockman, a notorious workaholic, had no specific portfolio, sources said. Instead, he tended to throw himself into a variety of projects, sometimes uninvited and at the last minute. In other cases, he pushed staff to work at his own brutal pace, sometimes messaging them at all hours of the day and night.

Brockman’s behavior riled staffers on those teams, with Murati often brought in to either try to rein in Brockman or soothe bruised egos, sources said.

Eventually, Brockman was “voluntold” to take a sabbatical, one source said. The Wall Street Journal reported that Altman met with Brockman and told him to take time off. On August 6, Brockman announced on social media that he was “taking a sabbatical” and that it was his “first time to relax since cofounding OpenAI 9 years ago.”

An OpenAI spokesperson said Brockman’s decision was entirely his own and reiterated that he would be returning to the company “at the end of the year.” 

Brockman may not have been the only personnel headache that weighed on Murati and those who reported to her. In his post on X, Zaremba also wrote that he had been reprimanded by McGrew “for doing a jacuzzi with a coworker,” setting off speculation on social media about a corporate culture with few limits. Zaremba did not provide additional details.

A spokesperson for Murati emphasized that managing staff conflicts is a typical part of a senior executive’s job and that burnout or stress in no way factored in Murati’s decision to resign. “Her rationale for leaving is, after a successful transition, to turn her full attention and energies toward her exploration and whatever comes next,” the spokesperson said.

A shifting culture

Brockman’s sabbatical and Murati’s departure are part of a widespread transformation of OpenAI in the past 10 months since Altman’s brief ouster. In addition to the senior executives who announced their departures last week, Ilya Sutskever, an OpenAI cofounder and its former chief scientist, left the company in May. 

Sutskever had been on OpenAI’s board and had voted to fire Altman. Following Altman’s rehiring, Sutskever—who had been leading the company’s efforts to research ways to control future powerful AI systems that might be smarter than all humans combined—never returned to work at the company. He has since founded his own AI startup, Safe Superintelligence.

After Sutskever’s departure, Jan Leike, another senior AI researcher who had coheaded the so-called “superalignment” team with Sutskever, also announced he was leaving to join OpenAI rival Anthropic. In a post on X, Leike took a parting shot at OpenAI for, in his view, increasingly prioritizing “shiny products” over AI safety. 

John Schulman, another OpenAI cofounder, also left in August to join Anthropic, saying he wanted to focus on AI safety research and “hands-on technical work.” In the past six months, nearly half of OpenAI’s safety researchers have also resigned, raising questions about the company’s commitment to safety.

Meanwhile, OpenAI has been hiring at a frenetic pace. During Altman’s brief ouster in November, OpenAI staff who wanted to signal their support for his return took to X to post the phrase “OpenAI is nothing without its people.” But increasingly, OpenAI’s people aren’t the same ones who were at the company at the time.

OpenAI has more than doubled in size since “the Blip,” going from fewer than 800 employees to close to 1,800. Many of those who have been hired have come from big technology companies or conventional fast-growing startups as opposed to the niche fields of AI research from which OpenAI traditionally drew many of its employees.

The company now has many more employees from commercial fields such as product management, sales, risk, and developer relations. These people may be motivated less by the quest to develop safe AGI than by the chance to create products people use today and the prospect of a giant payday.

The influx of new hires has changed the atmosphere at OpenAI, one source said. There are fewer “conversations about research, more conversations about product or deployment into society.”

At least some former employees seem distressed at Altman’s plans to revamp OpenAI’s corporate structure so that the company’s for-profit arm is no longer controlled by its nonprofit foundation. The changes would also remove the existing limits, or cap, on how much OpenAI’s investors can earn.

Gretchen Krueger, a former OpenAI policy researcher who resigned in May, posted on X on September 29 that she was concerned about the proposed changes to OpenAI’s structure.

“OpenAI’s nonprofit governance and profit cap are part of why I joined in 2019,” Krueger wrote, adding that removing the caps and freeing the for-profit from oversight by the nonprofit “feels like a step in the wrong direction, when what we need is multiple steps in the right direction.”

Some current employees, however, say they are not unhappy about the changes. They hope internal conflict will become less frequent now that colleagues whom they see as too academic or overly worried about AI safety, and therefore reluctant to ever release products, are increasingly choosing to leave.

Altman told staff in the memo that he would, for the time being, oversee OpenAI’s technical staff personally. In the past, Altman had been primarily focused on public speaking, meeting with government officials, key partnerships, and fundraising while leaving technical developments to other executives.

Of OpenAI’s 11 original cofounders, only Altman and Zaremba remain at the company full-time. Zaremba, in his post on the departure of Murati, McGrew, and Zoph, noted that Altman, “for all his shortcomings and mistakes, has created an incredible organization.” Now Altman will have to prove that organization has the depth of talent to continue to succeed without many of the senior leaders who helped him get to this point—and without further drama.
