An AI researcher and safety officer at ChatGPT creator OpenAI has quit the company, saying he is “pretty terrified” by the current pace of artificial intelligence development.
Steven Adler, who had worked at the California-based company since March 2022 – eight months before the launch of ChatGPT – revealed that he was stepping down amid concerns about the trajectory of AI development.
“Honestly I’m pretty terrified by the pace of AI development these days,” he said.
“When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?”
In a series of posts on X (formerly Twitter), Mr Adler referenced the race towards creating artificial intelligence that meets or exceeds human-level intelligence, known as artificial general intelligence (AGI).
OpenAI chief executive Sam Altman has consistently stated that achieving AGI is his objective, but in a way that “benefits all of humanity”.
Some leading AI researchers have warned that once AGI or superintelligence is achieved, humans will no longer be able to control it.
A 2022 survey of AI researchers found that a majority believed the chance of AGI leading to an existential catastrophe for humanity was at least 10 per cent.
News of Mr Adler’s departure from OpenAI comes just days after Chinese startup DeepSeek released an AI model that rivals ChatGPT and other systems produced by US tech firms, a development that has redefined the global AI race.
“An AGI race is a very risky gamble, with huge downside,” Mr Adler said.
“No lab has a solution to AI alignment [ensuring AI’s objectives match those of humans]. And the faster we race, the less likely that anyone finds one in time.
“Today, it seems like we’re stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up. I hope labs can be candid about real safety regs needed to stop this.”