The Guardian - UK
Comment
Chris Stokel-Walker

ChatGPT on your iPhone? The four reasons why this is happening far too early

‘The technology needs to be better prepared for prime time.’ Photograph: Angga Budhiyanto/ZUMA Press Wire/REX/Shutterstock

Tech watchers and nerds like me get excited by tools such as ChatGPT. They look set to improve our lives in many ways – and hopefully augment our jobs rather than replace them.

But in general, the public hasn’t been so enamoured of the AI “revolution”. Make no mistake: artificial intelligence will have a transformative effect on how we live and work – it is already being used to draft legal letters and analyse lung-cancer scans. ChatGPT was also the fastest-growing app in history after it was released. That said, four in 10 Britons haven’t heard of ChatGPT, according to a recent survey by the University of Oxford, and only 9% use it weekly or more frequently.

Until now, perhaps. Earlier this week, at its Worldwide Developers Conference – another thing tech watchers and nerds like me get excited by – Apple announced that it had brokered a deal to bring ChatGPT to iPhones. This is huge because, in the UK, Apple has nearly as many iPhones in people’s pockets as all its competitors put together. What it decides shapes society.

However, I think it is, at the very least, far too early to be deploying this kind of technology at scale. Here’s why.

The technology still isn’t quite right

OpenAI’s demonstration of its GPT-4o model last month was a glimpse into the potential future that multimodal (read: voice, video and text-activated) AI chatbots could bring us. The company showed it translating from one language to another, and wittily chatting with a human participant. But since it was ultimately an advert, it’s important to be sceptical: the software was unnaturally verbose, with the OpenAI representative seeming to have to interrupt to get a word in. If, as the companies suggest, this technology is going to replace human interaction in some cases, it needs to be better prepared for prime time.

It’s not genuine ‘intelligence’ – it’s a fallible tool

Users of generative AI tools often struggle to identify what exactly they’re interacting with. People tend to anthropomorphise ChatGPT and other similar tools. They are, at heart, pattern-matching machines designed to please. They don’t know right from wrong. Frankly, they don’t know … well, anything, as that’s a flesh-and-blood capability.

Yet we’re still convinced of these tools’ mental supremacy when compared with humans, even when they bumble through incorrect answers and make catastrophic errors. (A ChatGPT blunder about countries in Africa beginning with the letter K – it claimed there were none – has even poisoned Google search results.) The worry is that when everyone with an iPhone starts using AI, they’ll start relying on and trusting its often shaky judgments. Pattern-matching is often wrong, and AIs can “hallucinate” – ie just make stuff up.

The technology still reflects our biases

The big secret of generative AI is that it’s only as good as its training data – and its training data is frequently biased. Most AI models are trained on data scraped from the internet, which has large gaps and is unevenly distributed when it comes to language, race and gender. What efforts have been made to counteract this are often crowbarred in with unreliable results – for instance, Google Gemini, a similar AI tool, for a time generated images of Black second world war-era German soldiers. Bias and ahistorical ignorance are a problem at the best of times – when they’re baked into a technology that many people think of as impartial because it’s computerised, they become an even bigger one.

Is this really wanted?

No one asked for the generative AI wave to wash over us in November 2022 with the release of ChatGPT, and most people have not been clamouring for Apple to join the AI race, either. ChatGPT reported 100 million monthly active users just two months after its launch, but its communications from this year suggest that figure hasn’t changed substantially. The revolutionary new technology is making Silicon Valley businesses plenty of money, but there’s a growing recognition that, more than ever, we are the product, and it’s our data that is being used.

It’s little wonder, then, that Apple went to such pains to explain how it will secure user data as it adopts AI, in contrast to its competitors. Its private cloud compute strategy means that no one, not even Apple itself, can snoop on conversations you have with generative AI tools. That’s a positive and convincing way to head off concerns. But it’s also worth pointing out that this isn’t the reason people haven’t been signing up to AI services in their droves. It’s because appetite has been low.
