Windows Central
Kevin Okemwa

OpenAI is reportedly prioritizing shiny products over safety processes (again) — yet there's a 99.999999% probability AI will spell doom for humanity

OpenAI and Microsoft logos.

Last year, after OpenAI CEO Sam Altman was briefly ousted from the company for not being “consistently candid” with the board, a handful of high-profile executives, including former Head of Alignment Jan Leike, left the firm.

Leike revealed that he had disagreed with OpenAI's leadership over its safety strategy, indicating that safety processes and culture had taken a back seat as the company prioritized shiny products and its pursuit of AGI.

While OpenAI seemingly dismissed the claims, a new report by the Financial Times appears to corroborate that the ChatGPT maker's safety processes have taken a back seat as shiny products take precedence.

The report suggests that the company has significantly slashed the time allocated to safety procedures and the testing of its flagship AI models, with the safety division and third-party groups recently given "just days" to conduct evaluations on OpenAI's latest models.

According to the source, testing has become less thorough than it was in the past, leaving staff with less time and fewer resources to identify and mitigate potential dangers.

The outlet further disclosed that OpenAI's change in strategy is a blatant attempt to maintain its lead in the AI landscape as competition intensifies. New players like China's DeepSeek are emerging with AI models that rival OpenAI's latest reasoning models across a wide range of benchmarks at a fraction of the development cost.

“We had more thorough safety testing when [the technology] was less important,” said a person familiar with the development and testing of OpenAI's yet-to-launch o3 model. The same source noted that as these AI models scale and grow more capable, the potential threat to humanity increases, too.

“But because there is more demand for it, they want it out faster. I hope it is not a catastrophic misstep, but it is reckless. This is a recipe for disaster.”

Speculation suggests that OpenAI could ship its o3 model as soon as next week, giving testers less than a week to evaluate it and perform safety checks.

This isn't OpenAI's first rodeo with safety processes

(Image credit: Getty Images | NurPhoto)

This isn't the first time OpenAI has been put on the spot for rushing its safety processes. In 2024, a separate report suggested that the company rushed GPT-4o's launch, leaving the safety team with little time to test the model.

Perhaps more concerning, the company reportedly sent out invites for the launch celebration party before the safety team had run its tests. "They planned the launch after-party before knowing if it was safe to launch," the source added. "We basically failed at the process."

In comparison, testers had up to six months to evaluate GPT-4 before it shipped. A person well-versed in the situation revealed that evaluation and safety tests unearthed dangerous capabilities two months into the testing phase.

According to the source:

“They are just not prioritising public safety at all."

To that point, multiple reports suggest that unchecked AI advances could spell doom for humanity. AI safety researcher Roman Yampolskiy has put his p(doom), the probability that AI ends humanity, at 99.999999%.

However, OpenAI claims it has improved its safety processes by automating some of the tests, which is what has allowed it to shorten the evaluation window. Additionally, the ChatGPT maker indicated that its models have been tested and mitigated for catastrophic risks.
