The Independent UK
Technology
Anthony Cuthbertson

AI model passes Turing Test ‘better than a human’

[Image: 3D shapes representing speech bubbles in a sequence, with fragments of text within them — Wes Cockx & Google DeepMind / Better Images of AI]

A leading AI chatbot has passed a Turing Test more convincingly than a human, according to a new study.

Participants in a blind test judged OpenAI’s GPT-4.5 model, which powers the latest version of ChatGPT, to be a human “significantly more often than actual humans”.

The Turing Test, first proposed by the British computer scientist Alan Turing in 1950, is meant to be a barometer of whether artificial intelligence can match human intelligence.

The test involves a text-based conversation with a human interrogator, who has to assess whether the interaction is with another human or a machine.

Nearly 300 participants took part in the latest study, which ran tests for various chatbots and large language models (LLMs).

OpenAI’s GPT-4.5 was judged to be a human 73 per cent of the time when instructed to adopt a persona.

“We think this is pretty strong evidence that [AI chatbots] do [pass the Turing Test],” Dr Cameron Jones, a postdoctoral researcher at UC San Diego who led the study, wrote in a post to X. “And 4.5 was even judged to be human significantly more often than actual humans.”

It is not the first time that an AI programme has beaten the Turing Test, though the researchers from UC San Diego who conducted the study claim this to be the most comprehensive proof that the benchmark has been passed.

Other models tested in the latest research included Meta’s Llama-3.1, which passed less convincingly, and an early chatbot called ELIZA, which failed.

Despite the result, the researchers noted that passing the Turing Test does not mean the AI bots have human-level intelligence, also known as artificial general intelligence (AGI). This is because LLMs are trained on vast datasets to predict what a correct answer might be, making them essentially an advanced form of pattern recognition.

“Does this mean LLMs are intelligent? I think that's a very complicated question that's hard to address in a paper (or a tweet),” Dr Jones said.

“Broadly I think this should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display.

“More pressingly, I think the results provide more evidence that LLMs could substitute for people in short interactions without anyone being able to tell. This could potentially lead to automation of jobs, improved social engineering attacks, and more general societal disruption.”

The research is detailed in a preprint study, titled ‘Large language models pass the Turing Test’.
