Windows Central
Kevin Okemwa

Sam Altman says OpenAI is no longer "compute-constrained" — after Microsoft lost its exclusive cloud provider status

OpenAI CEO Sam Altman is seen on a mobile device screen.

Sam Altman recently indicated that OpenAI is no longer compute-constrained, freeing the company to develop sophisticated and advanced AI models without the restrictions that previously held it back.

As part of its multi-billion dollar partnership with Microsoft, OpenAI had exclusive access to the tech giant's vast computing power. However, multiple reports suggested that Microsoft failed to deliver the computing capacity OpenAI required.

The AI lab expressed fears that rival firms would hit the coveted AGI benchmark before it did, and indicated that Microsoft would be to blame. Consequently, OpenAI found some wiggle room in its agreement with Microsoft following the announcement of its $500 billion Stargate project, which aims to build data centers across the United States to support its AI efforts.

Microsoft ended up losing its status as OpenAI's exclusive cloud provider and largest investor as OpenAI seemingly tightened its partnership with SoftBank, which recently led its latest funding round, raising $40 billion from key investors and pushing the startup's valuation to $300 billion.

It'd only take 5 to 10 people to build GPT-4 from scratch

GPT-4 would, apparently, no longer require an army of people to build. (Image credit: Getty Images | SOPA Images)

As you may know, OpenAI is getting ready to discontinue GPT-4 and replace it with GPT-4o in ChatGPT. However, the company indicated that users can continue accessing the model via its API.

OpenAI CEO Sam Altman has openly expressed his disappointment with the model, saying it "kind of sucks" and is mildly embarrassing at best. He also indicated that GPT-4 will be the dumbest model users ever have to interact with again, suggesting an upward trajectory in capabilities for OpenAI's flagship models.

Perhaps more interestingly, while speaking with the engineers behind GPT-4.5, the executive asked them what it would take to build GPT-4 again.

For context, the CEO revealed that developing GPT-4 took "hundreds of people, almost all of OpenAI's effort." Things have seemingly gotten easier as generative AI has scaled and advanced.

Interestingly, Alex Paino, who led pretraining machine learning for GPT-4.5, indicated that rebuilding GPT-4 would probably only need 5 to 10 people (via Business Insider).

According to the engineer:

"We trained GPT-4o, which was a GPT-4-caliber model that we retrained using a lot of the same stuff coming out of the GPT-4.5 research program. Doing that run itself actually took a much smaller number of people."

OpenAI researcher Daniel Selsam seemingly reiterated Paino's sentiments, indicating that building GPT-4 from scratch would be much easier now. "Just finding out someone else did something — it becomes immensely easier," the researcher added. "I feel like just the fact that something is possible is a huge cheat code."
