It’s been another one of those weeks when the flow of AI news is at water cannon levels.
Arguably one of the most important relationships in AI at the moment is the partnership between Microsoft and OpenAI. Microsoft has made OpenAI’s technology central to its AI offerings, including all of its “Copilot”-branded AI features—in Office, in its GitHub Copilot coding assistant, in its consumer-facing Copilot personal assistant, and in a new on-PC Copilot announced yesterday on the eve of its Build developer conference. Microsoft also offers OpenAI’s models on its Azure cloud computing platform, and by most accounts it is through these offerings that most of the AI applications companies have put into full production have been built. And it’s through Microsoft’s products and services that most of us will interact with generative AI at work, at school, and at home.
Of course, Microsoft is probably unhappy with OpenAI right now. That’s because the San Francisco startup is once again engulfed in drama that has eclipsed coverage of all of Microsoft’s big AI announcements at Build.
The drama, as you’ve probably heard by now, involves whether OpenAI intentionally used a voice similar to actress Scarlett Johansson’s for its new voice-based AI chatbot, GPT-4o. OpenAI had claimed any similarity between the voice, which it calls Sky, and Johansson’s was purely coincidental.
Johansson has publicly questioned OpenAI’s account, saying that OpenAI CEO Sam Altman had reached out to her in September, shortly before the company debuted Sky, and asked for permission to use her voice. Johansson declined. Then, just before the public demo of GPT-4o that used the Sky voice, Altman contacted her again to see if she would change her mind. Before Johansson could respond, OpenAI had debuted the new model.
The timing of those contacts certainly sounds suspicious. (Altman is known to be a fan of the movie Her, in which Johansson provided the voice of Samantha, an AI assistant. Additionally, during the GPT-4o demo, Altman posted the word “her” on X in a clear reference to the movie.) Johansson says she has asked her lawyers to try to get to the bottom of whether OpenAI used her voice in any way without her permission. Meanwhile, Altman said in a statement that OpenAI has “paused” access to the Sky voice “out of respect for Ms. Johansson.”
Even before OpenAI’s contacts with Johansson came to light, another drama was swirling around the company that raised doubts about its trustworthiness. Jan Leike, who had been co-leader of the company’s team working on ways to control future superpowerful AI systems, resigned Thursday. A day later, he published a series of posts on X accusing OpenAI of prioritizing “shiny products” ahead of “safety culture” and warning that the company was on a dangerous trajectory.
Leike’s resignation followed the announcement earlier in the week that Ilya Sutskever, an OpenAI cofounder and its chief scientist, was departing the company. Sutskever had been part of the failed November effort by OpenAI’s nonprofit board, which controls its for-profit arm, to oust Altman. He subsequently said publicly that he “deeply regretted” taking part in that abortive coup, but he left the board as part of the deal that brought Altman back and had not returned to work since.
With both Sutskever and Leike gone, OpenAI disbanded the entire “Superalignment” team it had established less than a year earlier under their leadership, assigning its 25 remaining researchers to other departments. News site Vox reported that other researchers who had recently left OpenAI shared Leike’s concerns but were afraid to speak on the record because they had been made to sign severance agreements containing draconian non-disparagement clauses, which allowed the company to claw back a former employee’s vested equity if they criticized it.
Altman later apologized for this clause in OpenAI’s exit paperwork, saying he had been unaware it existed and was “embarrassed” to learn about it. He said all employees should be able to keep their vested equity, that the company had never actually tried to enforce the provision, and that OpenAI was updating its severance agreements to remove the clause. Greg Brockman, OpenAI’s president, also published a statement, jointly signed by Altman, arguing that the best way to work on AI safety was rigorous testing of AI systems currently under development, rather than theoretical research into how to make future, more powerful AI software safe.
It was a swift reversal for a team OpenAI had unveiled less than a year ago with great fanfare. Back in July 2023, OpenAI said that figuring out how to control a future “superintelligence”—an AI smarter than all humans—was so important that it was publicly committing 20% of the computing power it had secured at that point to the new Superalignment team. Well, as I reported today in a Fortune story, OpenAI never even honored that pledge. A half dozen people familiar with the Superalignment team’s work told me its requests for access to graphics processing units, the chips used for AI workloads, were repeatedly turned down, even though the team never came anywhere close to using the promised 20% allocation.
No matter what one thinks about the prospect of superintelligence, for a company to make a big public commitment to something and then simply fail to honor that promise should make people think twice about any other promises the company makes. And this brings us back to Microsoft.
Yesterday, Fortune also published a cover story I wrote about Microsoft. To report it, I followed Microsoft CEO Satya Nadella on a tour around Southeast Asia as he met with government officials, customers, and developers. Nadella has successfully steered the company through two major platform shifts—the first from on-premises servers and PCs to the cloud, and the second to the current generative AI boom. The CEO told me he worries about two things that could imperil Microsoft’s success. The first is missing the next big technological wave. That’s why he remains hypervigilant about what developers are working on and asking for from the software giant, and stays attuned to what’s happening in startups and research labs. The second is that Microsoft could “lose its license to operate”—a social contract, as Nadella described it, that depends on trust and on Microsoft helping to deliver broad-based economic growth in the countries where it operates.
Nadella sees Microsoft’s moves in AI, and its investments to bring AI and AI skills training to every corner of the globe, as a way to guarantee it keeps that license to operate. But OpenAI’s growing public trust deficit could wind up undermining the other part of Nadella’s social contract. If the public comes to see OpenAI as an unethical actor, Microsoft’s reputation could sustain collateral damage too. The company is already facing lawsuits from artists, authors, creators, and publishers alleging that it worked alongside OpenAI to use copyrighted material without permission to train AI models.
In the story, I look at how Nadella has made a series of other AI bets beyond OpenAI. Those include a $1.5 billion investment and strategic partnership with Abu Dhabi-based tech company G42, a small investment in France’s Mistral AI, and the hiring of DeepMind cofounder Mustafa Suleyman and much of his team from Inflection AI to form a new consumer AI division. Suleyman was among those showcasing the new AI Copilot offerings at Build. These other bets are in some ways a hedge against technological risk: AI is still a protean technology. It may turn out that developers will prefer smaller, less capable, but cheaper models, or that open-source technology will overtake the massive proprietary models OpenAI is building.
But it’s also a hedge against business and reputational risk. All the OpenAI and Microsoft executives I interviewed for the story expressed happiness with the current state of the two companies’ partnership. Nadella said OpenAI had given Microsoft the highest-performing AI model available in GPT-4, a model that was still ahead of the competition—although just barely, on some benchmarks—15 months after its launch. GPT-5 is by all accounts close to being introduced. Altman may feel that as long as OpenAI keeps delivering these models to Microsoft, it’s golden. But I interviewed those execs before the current news cycle. If it turns out OpenAI’s models keep coming with big helpings of drama and a huge amount of ethical unease, Nadella may decide a decoupling is in order.
With that, here's more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
The Eye on AI News and Eye on AI Research sections of this newsletter were curated and written by Fortune's Sharon Goldman.