Alan Kohler

Alan Kohler: ChatGPT is in the world, and defying expectations and regulations


It was 30 years ago this month that I was first introduced to the internet.

I was editor of The Age, and I was looking at a coffee pot on a stove in San Francisco, doing what we now call streaming … to the world, or at least anyone who was connected to this new-fangled World Wide Web.

After a day or two of thinking about it and talking to the geek who showed me the coffee pot, I realised this was a big deal, and that among other things, classified ads were going to shift out of the newspaper and to this internet thing.

A revolution was coming, not just to my business but to the world.

Now it is happening again, this time with artificial intelligence (AI), or more specifically artificial general intelligence (AGI), and this time the ‘coffee pot’ is ChatGPT.

On Monday the Labor MP for Bruce, Julian Hill, gave a speech in federal Parliament that was partly written by ChatGPT, in which he argued that there needs to be a global effort to control and regulate AI and AGI, a bit like what has been happening with climate change.

That’s not a bad analogy actually, because, as I’ll argue, progress will be just as unlikely, although I don’t think Hill meant it that way.

In 26 years of meetings since the Kyoto Protocol, little has been achieved with the United Nations Framework Convention on Climate Change beyond the release into the atmosphere of tonnes of extra carbon dioxide exhaust from the lungs and larynxes of those attending.

AI is going to be the same.

ChatGPT made AI accessible, but the technology is making some uncomfortable. Photo: Getty

Canary in the coal mine

Everybody is having a lot of fun at the moment playing with ChatGPT – asking it to explain itself and say whether it’s going to take all of our jobs, as well as asking it to write columns and essays.

The website had 590 million visits from 100 million users in January, two months after it launched, and a UBS analyst said it was the fastest ramp-up he’d seen in 20 years of following the internet.

A judge in Colombia used it to check whether his ruling was correct, and a Nick Cave fan asked it to write a song in the style of Nick Cave, and sent the result to Nick Cave, who responded that “the song sucks”, and added that it was “a grotesque mockery of what it is to be human”.

Julian Hill raised the question of whether students could use it to cheat on their exams, and told Parliament: “AI technology, such as smart software that can write essays and generate answers, is becoming more accessible to students, allowing them to complete assignments and tests without actually understanding the material, causing concern for teachers, who are worried about the impact on the integrity of the education system.”

He then confessed that ChatGPT wrote that, not him, and concluded that AGI, which is the evolved, supersonic level of AI, posed risks that could be “disruptive, catastrophic and existential”.

That’s partly because of the application of AI to warfare, which he said could render “our current defensive capabilities obsolete”.

There’s a new version of ChatGPT coming soon, expected to be vastly superior to this one.

Hill told me this week: “ChatGPT is like the canary in the coal mine; the real issue to grapple with is the potential for AGI.”

And then on Tuesday Google announced its own conversational AI service to compete with ChatGPT, called Bard.

There is actually quite a lot of work going on in regulating AI and AGI, but it’s tentative and does bear a passing resemblance to the largely ineffective work on climate change.

Australia has joined something called the Global Partnership on Artificial Intelligence (GPAI), which came out of the G7 and has 15 members.

But the GPAI doesn’t have the air of Sturm und Drang that Julian Hill was trying to convey on Monday – its four work streams are Responsible AI, Data Governance, Future of Work, and Innovation and Commercialisation.

Julian Hill spoke this week in Parliament about AI. Photo: AAP

So, earnest, but neither alert nor alarmed.

But what regulation, exactly, could and should be applied to AI? And could there ever be true global co-operation?

Especially, but not only, because of its military applications, AI, like all technology, is a contest. That is most evident in the competition between China and the United States, but it applies to some extent to all nations.

Will there suddenly be global co-operation in controlling this technology, but not others? Hardly. And even if there appeared to be some, would you believe it?

Nations have been turning up at climate change conferences for years piously saying they’ll cut emissions and then going home to do nothing, or worse.

And as with climate change, domestic action alone is next to useless.

Cat’s out of the bag

Like the internet, ChatGPT is everywhere and nowhere – it can’t be modified by a national government to refrain, say, from writing school essays; it can only be turned off and on at the border, and even then that’s very difficult unless you’re the Chinese Communist Party.

I don’t think artificial intelligence will be controlled or regulated, and we need to prepare for the consequences, just as I don’t think global warming will be kept to 2 degrees Celsius, so we need to prepare for at least some amount of climate disaster, if not a lot.

A pioneer of the generative AI behind ChatGPT, Queensland-born Stanford University professor Christopher Manning, told the Financial Review this week that people will need to adapt to a world in which misinformation and false images are rife.

“We’re getting to a point where there should be regulation … but it’s difficult to do when things are moving fast,” he said.

This cat is out of the bag, and won’t be going back in. No one really wants it to anyway. Most governments are either entirely enthusiastic about the potential for AI, or at best divided: one ministry very excited, another a bit worried.

Julian Hill is right that it should be regulated, but it won’t be. It’s already too late.

Alan Kohler writes for The New Daily twice a week. He is also founder of Eureka Report and finance presenter on ABC News.
