Jan Leike, OpenAI’s head of alignment whose team focused on AI safety, has resigned from the company, saying that over the past years, “safety culture and processes have taken a backseat to shiny products.”
In a post on X (formerly Twitter), Leike added that he had been “disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we reached a breaking point.”
OpenAI is “shouldering an enormous responsibility on behalf of humanity,” he continued. “We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence].”
Leike’s resignation comes just a couple of days after his co-lead on OpenAI’s ‘Superalignment’ team, chief scientist Ilya Sutskever, announced he was leaving the company. In his post announcing his departure, Sutskever wrote that he was “confident that OpenAI will build AGI that is both safe and beneficial.”
The departures of both Leike and Sutskever come after months of speculation about what happened in November 2023, when OpenAI’s nonprofit board fired CEO Sam Altman and removed president Greg Brockman as chairman. Even after Altman was reinstated as CEO and returned to a seat on the board, it was clear that the safety of the AI OpenAI is building remained a point of contention among members of the board and others within the company focused on AI safety. After Altman’s reinstatement, Sutskever seemed to disappear from public view, with many wondering whether he had been ousted.
Today, Bloomberg reported that OpenAI has dissolved Leike and Sutskever’s ‘Superalignment’ team, folding its work into the company’s broader research efforts.
At the end of his thread on X, Leike spoke directly to OpenAI employees: “To all OpenAI employees, I want to say: Learn to feel the AGI. Act with the gravitas appropriate for what you’re doing. I believe you can ‘ship’ the cultural change that’s needed. I am counting on you.”