Zachy Hennessey

OpenAI Co-Founder Launches Safe Superintelligence Venture Amid Safety Concerns

Ilya Sutskever, cofounder and former chief scientist of OpenAI, has launched his own firm, SSI, based in Palo Alto and Tel Aviv. OPENAI.

After leaving the AI giant over safety concerns, Ilya Sutskever, one of the founders of OpenAI, is striking out on a new artificial intelligence venture: Safe Superintelligence (SSI).

As the name might suggest, the new firm will focus on the development of artificial intelligence technology that is “safe” — which could mean anything from “not going to leak secure information” to “not going to be racist” to the unlikely-but-still-possible “not going to worm its way into one of those Boston Dynamics robots and do a Terminator.”

“We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” said Sutskever in his announcement of the Palo Alto- and Tel Aviv-based company.

Joining Sutskever in cofounding SSI are tech whizzes Daniel Gross and Daniel Levy.

The former Daniel was selected for TIME magazine’s 100 most influential people in the field of artificial intelligence last year; the latter Daniel was most recently a senior developer at OpenAI. Each Daniel boasts a resume that would allow him to confidently start a new AI venture with an OpenAI cofounder.

‘Fine, Sam, I’ll make my own AI company’

Sutskever, an Israeli citizen, was OpenAI’s chief scientist and led the push to oust OpenAI’s CEO, Sam Altman, from the company last November. The move came as a shock, as the two had seemingly worked closely together in the months prior, including delivering a joint lecture at Tel Aviv University that summer.

The board of directors, including Sutskever, disapproved of Altman’s approach to AI safety and initially succeeded in removing him from the top job, but Altman was reinstated five days later under heavy pressure from investors and employees.

Sutskever resigned from the company six months later, alongside fellow executive Jan Leike. The two had led OpenAI’s AI risk team, which was disbanded following their departure.

This background lends important context to the new company’s identity as laid out in its announcement.

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors and business model are all aligned to achieve SSI,” it reads.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security and progress are all insulated from short-term commercial pressures,” the announcement continues.

“Building safe superintelligence is the most important technical problem of our time.”

Produced in association with ISRAEL21c
