Evening Standard
Business
Simon Hunt

What the Sam Altman-OpenAI debacle tells us about the AI industry

When Geoffrey Hinton, the so-called ‘Godfather of AI’, abruptly quit Google in May, among his motivations was his view that the senior team at Google had ceased to be a “proper steward” of AI technologies and an apparent fear that they were being led astray by commercial motivations.

Last week, a board member of UK-based Stability AI quit the firm, furious at his colleagues’ view that it was acceptable to use copyrighted work without permission to train its products.

We don’t know for certain why Sam Altman was sacked by the OpenAI board – his replacement, Emmett Shear, insists it was not over safety concerns – but it could well be a similar kind of worry.

There is an obvious tension in many AI firms right now between their ambitions as a business and the potential risks that meeting those ambitions carries, both to the companies themselves and to wider society – risks which Rishi Sunak was keen to point out ahead of his AI safety summit earlier this month.


Some inside these businesses are desperate to take the moral high ground and ensure that models are developed one step at a time, with risk kept to an absolute minimum in the process. Others see the AI industry’s advances as a race to the top, and perceive the biggest risk as losing ground to their big tech rivals – as OpenAI may well have done over the last few days.

Faced with this dilemma, different people within these organisations want to progress at different speeds, and internal tensions can rapidly build up.

In October, Emmett Shear tweeted: “I specifically say I’m in favor of slowing down, which is sort of like pausing except it’s slowing down. If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”

So we can infer from his hire that the OpenAI board concurs with this approach. Its members – who, thanks to OpenAI’s rather complex corporate structure, sit on the board of a non-profit – may feel more comfortable with the dial turned down to two, and appear to have thought Sam Altman had it turned up to eleven.

But these choices are fundamentally a function of the AI industry being so nascent. Execs at a biotech business, for example, don’t face a dilemma over how much human harm they’re prepared to tolerate in a clinical trial, because strict rules on this are already laid out for them.

AI firms are of course subject to the same laws as everyone else – but quite how to measure harm or risk when developing a complex technology whose outputs cannot be fully anticipated is a little tricky.

Until there is thorough cross-border regulation, though, these kinds of fracas will continue to erupt within different AI businesses. The danger is that the most foolhardy firms end up dominating the industry – and the ones that give a damn are left behind.
