The Guardian - US
Comment
Robert Reich

The frantic battle over OpenAI shows that money triumphs in the end

‘How would you build an enterprise designed to gain as many of the benefits of AI as possible while avoiding these risks?’ Photograph: Kirill Kudryavtsev/AFP/Getty Images

How do we gain access to artificial intelligence’s huge potential benefits – such as devising new life-saving drugs or finding new ways to teach children – without opening a box of horrors?

If we’re not careful, AI could be a Frankenstein monster. It might eliminate nearly all jobs. It could lead to autonomous warfare.

Even such a mundane goal as making as many paper clips as possible, critics of AI argue, could push an all-powerful AI to end all life on Earth in pursuit of more clips.

So, how would you build an enterprise designed to gain as many of the benefits of AI as possible while avoiding these risks?

You might start with a non-profit board stacked with ethicists and specialists in the potential downsides of AI.

That non-profit would need vast amounts of expensive computing power to test its models, so the non-profit board would need to oversee a for-profit commercial arm that attracted investors.

How to prevent investors from taking over the enterprise? You’d have to limit how much profit could flow to the investors (through a so-called “capped profit” structure), and you wouldn’t put investors on the board.

But how would you prevent greed from corrupting the enterprise, as board members and employees are lured by the prospect of making billions?

Well, you can’t. Which is the flaw in the whole idea of private enterprise developing AI.

The non-profit I described was the governing structure that OpenAI began with in 2015, when it was formed as a research-oriented non-profit to build safe AI technology.

But ever since OpenAI’s ChatGPT looked to be on its way to achieving the holy grail of tech – an at-scale consumer platform that would generate billions of dollars in profits – its non-profit safety mission has been endangered by big money.

Now, big money is on the way to devouring safety.

In 2019, OpenAI shifted to a capped profit structure so it could attract investors to pay for computing power and AI talent.

OpenAI’s biggest outside investor is Microsoft, which obviously wants to make as much as possible for its executives and shareholders. Since 2019, Microsoft has invested $13bn in OpenAI, with the expectation of making a huge return on that investment.

But OpenAI’s capped profit structure and non-profit board limited how much Microsoft could make. What to do?

Sam Altman, OpenAI’s CEO, apparently tried to have it both ways – giving Microsoft some of what it wanted without abandoning the humanitarian goals and safeguards of the non-profit.

It didn’t work. Last week, OpenAI’s non-profit board pushed Altman out, presumably over fears that he was bending too far toward Microsoft’s goal of making money, while giving inadequate attention to the threats posed by AI.

Where did Altman go after being fired? To Microsoft, of course.

And what of OpenAI’s more than 700 employees – its precious talent pool?

Even if we assume they’re concerned about safety, they own stock in the company and will make a boatload of money if OpenAI prioritizes growth over safety. It’s estimated that OpenAI could be worth between $80bn and $90bn in a tender offer – making it one of the most valuable tech startups of all time.

So it came as no surprise that almost all of OpenAI’s employees signed a letter earlier this week, telling the board they would follow Altman to Microsoft if it didn’t reinstate him as CEO.

Everyone involved – including Altman, OpenAI’s employees, and even Microsoft – will make much more money if OpenAI survives and they can sell their shares in the tender offer.

Presto.

On Tuesday, OpenAI’s board reinstated Altman as chief executive and agreed to overhaul itself – jettisoning board members who had opposed him and adding two who seem happy to do Microsoft’s bidding – Bret Taylor, an early Facebook officer and former co-chief executive of Salesforce, and Lawrence Summers, the former Treasury secretary.

Satya Nadella, Microsoft’s chief executive, said he was “encouraged by the changes to OpenAI board”, calling it a “first essential step on a path to more stable, well-informed, and effective governance”.

Effective governance … for making gobs of money.

The business press – for which “success” is automatically defined as making as much money as possible – is delighted.

It had repeatedly described the non-profit board as a “convoluted” governance structure that prevented Altman from moving “even faster”, and predicted that if OpenAI fell apart over the contest between growth and safety, “people will blame the board for … destroying billions of dollars in shareholder value.”

Which all goes to show that the real Frankenstein monster of AI is human greed.

Private enterprise, motivated by the lure of ever-greater profits, cannot be relied on to police itself against the horrors that an unfettered AI will create.

Last week’s frantic battle over OpenAI shows that not even a non-profit board with a capped profit structure for investors can match the power of big tech and Wall Street.

Money triumphs in the end.

The question for the future is whether the government – also susceptible to the corruption of big money – can do a better job weighing the potential benefits of AI against its potential horrors, and regulate the monster.
