
- A $30 billion valuation would make Ilya Sutskever’s Safe Superintelligence (SSI) one of the most valuable private AI companies.
Ilya Sutskever is raising more than $1 billion for his post-OpenAI startup at a valuation of over $30 billion, according to a report from Bloomberg.
The OpenAI co-founder's AI startup, Safe Superintelligence (SSI), is focused on safely developing AI that outsmarts humans, but the company currently has no revenue.
The company was co-founded by Sutskever, Daniel Gross, and Daniel Levy in June last year, a month after Sutskever parted ways with OpenAI.
San Francisco-based VC firm Greenoaks Capital Partners is leading the deal and plans to invest $500 million, Bloomberg reported, citing a person familiar with the deal.
The new valuation would be a significant increase from the company's previous funding round in September, when it raised $1 billion at a $5 billion valuation from investors including Sequoia Capital and Andreessen Horowitz.
Representatives for SSI did not immediately respond to a request for comment from Fortune, made outside normal working hours.
One of the most valuable private AI companies
A $30 billion valuation would make Safe Superintelligence one of the most valuable private AI companies.
Other private AI companies, such as Anthropic and Perplexity, have valuations of around $60 billion and $9 billion, respectively.
Elon Musk's xAI was last valued at about $51 billion but is reportedly in talks for a $10 billion funding round that would value the company at about $75 billion.
Unlike these companies, however, SSI does not have a product ready for market. In fact, not much is known about the company aside from its stated aim of building "a safe superintelligence."
"We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," the company's website reads. "We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence."
Sutskever has said his new venture was born when he identified "a mountain that's a bit different from what I was working on."
He was previously OpenAI's chief scientist and co-led the company's "superalignment" team, which was focused on ensuring AI stays aligned with human values. OpenAI disbanded the team after Sutskever and its other co-lead, head of alignment Jan Leike, parted ways with the company.
Leike directly attributed his exit to safety concerns at OpenAI, saying he had "gradually lost trust" in the company's leadership and accusing executives of letting safety processes take a "backseat to shiny products."
Sutskever has not publicly criticized OpenAI since leaving the AI lab. However, he was a major player in the brief removal of CEO Sam Altman in November 2023.
At the time, Sutskever, who was one of the six board members of the nonprofit entity that controls OpenAI, said that firing Altman was "the board doing its duty."
But the next week, he expressed regret at having participated in Altman's ouster, and after Altman returned, Sutskever was removed from the board.