Fortune
Christiaan Hetzner

Ilya Sutskever left OpenAI after mutinying against Sam Altman—now he’s launching his own startup for safe AI

Former OpenAI chief scientist Ilya Sutskever (Credit: Jack Guez—AFP/Getty Images)

The OpenAI chief scientist who nearly brought down CEO Sam Altman in a failed November mutiny, one as brief as it was spectacular, is launching an AI company of his own.

Ilya Sutskever revealed on Wednesday he was teaming up with OpenAI colleague Daniel Levy and Daniel Gross, a former AI executive at Apple, to found Safe Superintelligence Inc., a moniker chosen to reflect its purpose.

“SSI is our mission, our name and our entire product roadmap, because it is our sole focus,” the three wrote in a statement on the U.S. startup’s barebones website. Building safe superintelligence, they went on to argue, was “the most important technical problem of our time.”

Artificial superintelligence, or ASI, is believed to be the ultimate breakthrough in AI, since experts predict machines will not stop developing once they reach the kind of general-purpose intelligence known as AGI that is comparable to humans.

Luminaries in the field such as the computer scientist Geoffrey Hinton believe ASI poses an existential danger to mankind. Building safeguards that align with our interests as a species was one of Sutskever’s top missions at OpenAI.

His high-profile departure in May came almost six months to the day after he joined independent board directors Helen Toner, Tasha McCauley and Adam D’Angelo in removing Altman as CEO against the will of chair Greg Brockman, who immediately resigned. 

Sutskever came to regret his role in briefly ousting Altman

The spectacular coup, which Toner recently blamed on a pattern of deception by Altman, threatened to tear the company apart. Sutskever quickly expressed his regret and reversed his position, demanding Altman be reinstated to prevent the potential downfall of OpenAI. 

In the aftermath, Toner and McCauley left the non-profit board, while Sutskever seemingly vanished from the public eye until he announced his departure last month.

In his resignation announcement, he said he planned to pursue a project “very personally meaningful to me” and promised to share details at an unspecified later date.

His departure nonetheless set in motion events that quickly revealed deep governance issues that appeared to confirm the board’s initial suspicions. 

First, Jan Leike, Sutskever’s co-lead on the safety team, resigned, accusing the company of breaking its promise to give the AI safety team 20% of its compute resources. Later it emerged that departing OpenAI employees were bound by watertight gag orders forbidding them from criticizing the company after they left, on penalty of losing their vested shares.

Finally, actress Scarlett Johansson—who portrayed an AI chatbot in Spike Jonze’s 2013 sci-fi film Her—accused the company of effectively stealing her voice for its latest AI product. OpenAI denied the claim but pledged to change the voice anyway out of respect for her wishes.

These instances suggested OpenAI had abandoned its original purpose of developing AI that would benefit all of humanity in favor of pursuing commercial success.

“The people interested in safety like Ilya Sutskever wanted significant resources to be spent on safety, people interested in profits like Sam Altman didn’t,” Hinton told Bloomberg last week. 

A leader in the field since AI's Big Bang Moment

Sutskever has long been one of the brightest minds in the field of AI, researching artificial neural networks that conceptually mimic the human brain in order to train computers to learn and abstract based on data. 

In 2012, he teamed up with Hinton and Alex Krizhevsky on the landmark development of the deep neural network AlexNet, commonly considered AI’s Big Bang moment. It was the first machine learning algorithm that could accurately label images fed to it, revolutionizing the field of computer vision.

When OpenAI was founded in December 2015, Sutskever received top billing over co-chairs Altman and Elon Musk even though he was only research director. That made sense at the time, as the organization was formed as a non-profit that would create value for everyone rather than shareholders, prioritizing “a good outcome for all over its own self-interest.”

Since then, however, OpenAI has effectively become a commercial enterprise, in Altman’s words “to pay the bills” for its compute-heavy operations. In the process, it adopted a complicated structure with a new for-profit entity in which returns were capped for investors like Microsoft and Khosla Ventures, but control remained with the non-profit board.

Altman called this convoluted governance necessary at the time in order to keep everyone on board. Recently The Information reported he sought to change OpenAI's legal structure, opening the door for a controversial IPO.

Sutskever’s new commercial enterprise dedicated to safe superintelligence will be located in Silicon Valley’s Palo Alto and Tel Aviv, Israel, in order to best recruit top talent. 

“Our team, investors and business model are all aligned to achieve SSI,” they wrote, pledging there would be “no distraction by management overhead or product cycles.”

How he and his two co-founders aim to create ASI endowed with robust guardrails while also paying the bills and earning a return for their investors was not immediately clear from the statement, however. Whether it, too, has a capped for-profit structure, for example, was not revealed.

They said only that the business model of Safe Superintelligence was designed from the outset to be “insulated from short-term commercial pressures.”
