Google says one of its quantum computers has been able to solve a problem that would be practically impossible to do on a conventional computer, becoming the first to achieve so-called "quantum supremacy".
Using a processor with programmable superconducting qubits, the Google team was able to run a computation in 200 seconds that they estimated the fastest supercomputer in the world would take 10,000 years to complete.
The news was first reported last Friday by the Financial Times, after a paper about the research was uploaded to a NASA website and then taken down.
"To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor," the Google AI Quantum team and their collaborators wrote in the paper, which the ABC has seen.
It's a milestone, said quantum physicist Steven Flammia of the University of Sydney, who was not involved in the study.
"Prior to this experiment, there was no convincing demonstration of a quantum computation that someone had done on a programmable quantum device that couldn't be done on a conventional computer," Professor Flammia said.
And it's not the problem that the team solved that's important, but rather what it represents, according to quantum physicist Andrew White of the University of Queensland.
"The problem they've done here is a fairly artificial problem," Professor White said.
"It's not going to cure cancer or solve global warming, but it's very interesting."
Conceptually, it's interesting, he said, because it's the biggest indication yet that our current classical model of computing, built on the work of mathematicians like Alonzo Church and Alan Turing, isn't the full picture.
As the researchers describe in the paper, the Extended Church-Turing Thesis states that "any 'reasonable' model of computation can be efficiently simulated by a Turing machine".
It's the basis of all the computing we do, from the computer I'm writing this article on to the smartphone you might be reading it on.
Quantum computing, in effect, says that the Extended Church-Turing Thesis is wrong.
"It's [been] the foundation of theoretical computer science for decades, so that's big news," Professor White said.
How they did it
The research team set up a randomised benchmarking problem on a quantum circuit with a system of 53 qubits.
The most informative measurement you can do to verify that a circuit works is something called process tomography.
"The problem is you need an exponentially growing number of measurements to use this method," Professor White said.
For a two-qubit circuit you need to do at least 256 measurements, but by the time you get to a 53-qubit circuit, it's at least six thousand trillion trillion trillion trillion trillion measurements — and you can't do it.
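The scaling behind those figures can be checked with a few lines of arithmetic. This is a hedged sketch assuming full process tomography needs on the order of 16ⁿ measurement settings for n qubits — consistent with the 16² = 256 two-qubit figure above, though the exact constant varies by protocol:

```python
# Process tomography of an n-qubit circuit needs on the order of
# 16**n measurement settings (assumption: d**4 settings, with d = 2**n).
def tomography_measurements(n_qubits: int) -> int:
    return 16 ** n_qubits

print(tomography_measurements(2))   # 256
print(tomography_measurements(53))  # ~6.6e63 — utterly infeasible
```

Each extra qubit multiplies the workload by 16, which is why full tomography stops being an option long before 53 qubits.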
Instead, you use randomised benchmarking: you pick random logical states to send into the circuit — and since we're talking about a quantum circuit, there are infinitely many possibilities — and then measure the output.
While you can do that with the quantum computer itself very quickly, modelling it on a classical computer takes a long time. That means testing the measurements you've done on the quantum circuit against the predictions made by a classical computer is extremely difficult.
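The gap between the two machines comes down to memory as much as time. A brute-force classical simulation must track one complex amplitude per basis state — 2ⁿ of them for n qubits. This is a minimal illustration of that scaling, not a description of the simulation methods Google's team actually benchmarked against:

```python
# A naive statevector simulation stores 2**n complex amplitudes,
# at 16 bytes each (two 64-bit floats per amplitude).
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(30) / 2**30)  # 16.0 GiB — near a workstation's limit
print(statevector_bytes(53) / 2**50)  # 128.0 PiB for 53 qubits
```

Every added qubit doubles the memory needed, so 53 qubits sits far beyond what any single machine can hold — which is why supercomputer estimates for this task run to years, not seconds.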
"This randomised benchmarking is a way of proving the principle that these quantum machines can do something more efficiently than conventional computing," Professor White said.
"But it doesn't yet solve any problem that we are aware of."
Lost in the noise
That's because we currently think that, in order for us to do something useful with a quantum computer, we need to have 'logical qubits' or 'error-corrected qubits', Professor Flammia said.
"At the moment all of our qubits inside these devices are very noisy [error-prone]," he said.
"If you run the computation for a long enough amount of time the noise eventually increases, increases, increases and there's no counter-process that's removing that noise.
"So eventually you're swamped by noise, and then you get nothing out."
Professor Flammia compares it to trying to bail out a boat that's got a small leak in it.
If the leak is letting in water at a faster rate than you can bail it out, you will sink. But if you can bail faster than the leak is letting in water, you're in good shape.
"Currently we're on the bad side of that threshold," he said.
"We don't yet have a way to translate those computations into useful computations that we can sustain as long as we'd like to get the answers that we want, or grow the machine to be as big as we'd like to do practical useful computation."
Such computations could include better understanding nitrogen fixation, modelling superconductivity, quantum chemistry calculations and any number of material science questions.
And no one yet has a great idea about what problems we could solve on a noisy device.
"That's the extent to which this is so tantalising but also kind of frustrating because like, wow, we know now that this is a powerful technology, but really this is just the start," Professor Flammia said.
Google did not respond to the ABC's request for comment.