Less than two weeks after dismantling the OpenAI team focused on AI safety, CEO Sam Altman says he will lead a new team with the same charge.
The company, in a blog post Monday, announced the formation of the Safety and Security Committee, which it says will be responsible for making recommendations on critical safety and security decisions for all OpenAI projects.
The announcement follows the departure earlier this month of several key figures in the company’s safety efforts, including cofounder Ilya Sutskever and Jan Leike. Leike was especially critical of OpenAI on his way out, accusing the company of neglecting “safety culture and processes” in favor of “shiny products.” He said he left because he had “been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point.”
Given those criticisms, Altman’s oversight of the safety committee is likely to draw scrutiny. Joining him will be board directors Bret Taylor (who chairs the board), Adam D’Angelo, and Nicole Seligman.
“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days,” the company wrote. “At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
The blog post also revealed that OpenAI is training its “next frontier model,” a successor to the one that currently powers ChatGPT, saying “we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI.”
News of the new safety committee comes less than 10 days after OpenAI dissolved its “Superalignment” team, folding the remaining members into broader research efforts at the company. Leike and Sutskever were the leads of that team.