Windows Central
Kevin Okemwa

A former OpenAI employee left after claiming it felt like the 'Titanic of AI' with top execs prioritizing shiny products over safety

Microsoft Designer AI-generated image of a ship captain looking out at an iceberg.

What you need to know

  • OpenAI previously came under fire for prioritizing shiny products over safety.
  • A former employee has now echoed those sentiments, referring to the company as the "Titanic of AI."
  • The ex-employee says OpenAI's safety measures and guardrails won't keep AI from spiraling out of control unless the company adopts more rigorous safeguards.

OpenAI has been hitting the headlines for the past few months (debatably for all the wrong reasons). It started when most of its safety team departed from the company, including the former head of alignment, Jan Leike, who indicated that the company is focused on shiny products while safety culture and processes take a back seat.

As it turns out, a former OpenAI employee, William Saunders, has seemingly echoed similar sentiments. While speaking on Alex Kantrowitz's podcast on YouTube earlier this month, Saunders indicated:

"I really didn't want to end up working for the Titanic of AI, and so that's why I resigned. During my three years at OpenAI, I would sometimes ask myself a question. Was the path that OpenAI was on more like the Apollo program or more like the Titanic? They're on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling."

OpenAI CEO Sam Altman hasn't been shy about his ambitions and goals for the company, including achieving AGI and superintelligence. In a separate interview, Altman disclosed that these milestones won't necessarily bring dramatic change overnight. He added that interest in tech advancements is short-lived and may only cause a two-week "freakout."

The former superalignment lead at OpenAI revealed he disagreed with top executives over the firm's decision-making process and its core priorities on next-gen models, security, monitoring, preparedness, safety, adversarial robustness, and more. This ultimately prompted his departure from the company as well.

In the grand scheme of things, it's highly concerning if the ChatGPT maker is prioritizing shiny products over safety despite Altman openly admitting there's no big red button to stop the progression of AI.

Safety is OpenAI's biggest issue

Advancements in the AI landscape are riddled with safety and privacy concerns. Remember Windows Recall? Microsoft's privacy nightmare and a hacker's paradise. The Redmond giant recently unveiled next-gen AI features shipping exclusively to Windows 11 Copilot+ PCs, including Live Captions, Windows Studio Effects, and the show-stopper, Windows Recall.

On paper, Windows Recall seemed cool and useful (debatably). However, it was riddled with so many privacy issues that it even attracted the attention of the UK's data watchdog. The AI-powered feature received enough backlash that Microsoft recalled it before it ever shipped.

OpenAI is in a similar boat, but on a larger scale. Saunders compares OpenAI's safeguards to those of the infamous Titanic and says he'd prefer the company embrace the "Apollo space program approach." For context, the Apollo program was a NASA project in which American astronauts made 11 crewed spaceflights and walked on the moon.

He added that the firm is over-reliant on its current measures and seemingly tone-deaf to the rapid pace of its own advancements. He says OpenAI would be better off if it embraced the Apollo program approach.

"Even when big problems happened, like Apollo 13, they had enough sort of like redundancy, and were able to adapt to the situation in order to bring everyone back safely."

William Saunders, former OpenAI employee

Saunders says the team behind the Titanic's development was so focused on making the ship unsinkable that it failed to install enough lifeboats in case disaster struck. As a result, many people lost their lives because important safety measures were overlooked and preparedness was lacking.

While speaking to Business Insider, Saunders admitted the Apollo space program faced several challenges of its own. In the same breath, he added: "It is not possible to develop AGI or any new technology with zero risk. What I would like to see is the company taking all possible reasonable steps to prevent these risks."

Saunders predicts a forthcoming "Titanic disaster" that could lead to large-scale cyberattacks and the development of biological weapons. He says OpenAI should consider investing in more "lifeboats" to prevent such outcomes, including delaying the launch of new LLMs to give research firms ample time to assess the potential danger and harm that could stem from releasing the models prematurely.
