A team of IBM researchers, working with UC Berkeley and Purdue University, has managed to extract useful quantum computing out of one of today’s NISQ (Noisy Intermediate-Scale Quantum) computers. The team used one of IBM’s latest Quantum Processing Units (QPUs), Eagle, to perform calculations that were expected to fail amid qubit noise. However, using a clever feedback mechanism between IBM’s 127-qubit Eagle QPU and supercomputers at UC Berkeley and Purdue University, IBM managed to prove it could derive useful results from a noisy QPU. The door to quantum utility is open – and it has opened much earlier than expected.
Our NISQ-era quantum computers are roped to our standard supercomputers – the most powerful machines known to mankind, capable of trillions of operations per second. Powerful as they are, it’s a universal truth that when two climbers are roped together, they only move as fast as the slower of them allows. And the supercomputer was already stretched thin for this experiment, relying on advanced techniques to keep up with the simulation’s complexity.
When the qubit simulation became too complex for the supercomputer to simply “brute force” the results, the researchers at UC Berkeley turned to compression algorithms – tensor network states. These tensor network states (tensors) are essentially data cubes, where the numbers that make up the calculation are represented in a three-dimensional space (x, y, z) capable of handling more complex information relationships and volumes than the usual 2D approach – think of a simple Excel table (x, y) and how many more rows you’d have to search through if you had to fold in another plane of information (z).
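To make that analogy a little more concrete, here is a toy sketch in Python – not the researchers’ actual code, and with made-up sizes – of the compression idea underneath tensor networks: a truncated SVD, the basic building block of matrix product state compression, keeps only the largest singular values of a reshaped state, which works well precisely when the state isn’t too entangled across the cut.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a "weakly entangled" 16-qubit toy state: rows index the first 8 qubits,
# columns the last 8. A sum of a few product pieces has low rank across that cut.
pieces = [np.outer(rng.normal(size=256), rng.normal(size=256)) for _ in range(4)]
state = sum(pieces)
state /= np.linalg.norm(state)

# Truncated SVD: keep only the `chi` largest singular values (the "bond dimension").
chi = 8
u, s, vt = np.linalg.svd(state, full_matrices=False)
approx = (u[:, :chi] * s[:chi]) @ vt[:chi, :]

stored_full = state.size
stored_compressed = u[:, :chi].size + chi + vt[:chi, :].size
error = np.linalg.norm(state - approx)

print(f"numbers stored: {stored_full} -> {stored_compressed}")
print(f"truncation error: {error:.2e}")
```

The weaker the entanglement across the cut, the fewer numbers the truncation needs to keep – which is also why these classical methods start to struggle as the quantum circuits get deeper and more entangling.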
This means there’s already some utility to be extracted from NISQ quantum computers – there are problems where they can produce results that would be beyond the reach – at least in terms of time and money – of standard supercomputers, or where the hoops required to obtain those results would make the effort bigger than the gain.
There’s now a back and forth between our NISQ-era quantum computers, which feature a few hundred qubits (at best), and our standard supercomputers, which feature trillions of transistors. As the number of available, useful qubits increases, circuits deeper than the depth-60 circuits used in the paper will be explored. And as the number and quality of qubits increase, standard supercomputers too will have to keep up, crunching the numbers and verifying as deep a queue of quantum computing results as they possibly can.
“It immediately points out the need for new classical methods,” said Anand. And they’re already looking into those methods. “Now, we’re asking if we can take the same error mitigation concept and apply it to classical tensor network simulations to see if we can get better classical results.”
Essentially, the more accurately you can predict how noise evolves in your quantum system, the better you know how that noise poisons the correct results. The way you learn how to predict something is simply to prod at it and observe what happens enough times that you can identify the levers that make it tick.
Some of these levers have to do with how and when you activate your qubits (some circuits use more qubits; others require those qubits to be arranged into more or fewer quantum gates, with more complex entanglements between certain qubits…). IBM’s researchers had to learn precisely how much noise, and what kind, resulted from moving each of these levers within the 127-qubit Quantum Eagle – because if you know how to introduce noise, you can begin to control it. If you understand how it appears in the first place, you can account for it, which in turn lets you try to prevent it – or take advantage of it.
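As a rough illustration of why controlled noise is so valuable, here is a minimal Python sketch with made-up numbers, using a textbook-style global depolarizing assumption rather than IBM’s actual learned noise model, of how deliberately amplifying the noise by a known factor damps a measured expectation value:

```python
ideal_value = 1.0        # expectation value the noiseless circuit would give
error_per_layer = 0.02   # assumed effective error rate per layer of gates
layers = 60              # depth comparable to the circuits in the paper

def noisy_expectation(noise_scale: float) -> float:
    """Measured value when the hardware noise is amplified by `noise_scale`."""
    survival = (1.0 - noise_scale * error_per_layer) ** layers
    return ideal_value * survival

for scale in (1.0, 1.5, 2.0):
    print(f"noise x{scale}: measured value ~ {noisy_expectation(scale):.3f}")
```

The point is that once the damping follows a predictable curve, measurements taken at amplified noise levels carry information about what the measurement would have been with no noise at all – which is exactly the step described below.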
But if you’re only running calculations on your noisy computer, how can you know those calculations are, well, correct? That’s where standard supercomputers – and the search for a ground truth – come in.
The IBM team got access to two supercomputing resources – Lawrence Berkeley National Lab’s National Energy Research Scientific Computing Center (NERSC) and the NSF-funded Anvil supercomputer at Purdue University. These machines would run the same quantum simulations that IBM ran on its 127-qubit Eagle QPU – divvied up between them as needed, and in ways that allowed the supercomputers’ results to be compared against one another. That gives you a ground truth – the solution you know to be correct, achieved and verified by standard supercomputers. Now the light is green to compare your noisy results with the correct ones.
“IBM asked our group if we would be interested in taking the project on, knowing that our group specialized in the computational tools necessary for this kind of experiment,” said Sajant Anand, a graduate researcher at UC Berkeley. “I thought it was an interesting project, at first, but I didn’t expect the results to turn out the way they did.”
Then it’s “just” a matter of solving a “find the differences” puzzle: once you understand exactly how the presence of noise skewed the results, you can compensate for it and recover the same “ground truth” that the standard supercomputers produced. IBM calls this technique Zero Noise Extrapolation (ZNE).
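In its most basic form, the extrapolation can be sketched in a few lines of Python. The measured values below are invented for illustration, and the bare exponential fit is a simplification – in the real experiment the noise is amplified in a carefully controlled way and the extrapolation leans on a learned noise model – but the shape of the idea is the same: measure at several amplified noise levels, then extrapolate back to zero noise.

```python
import numpy as np

# Hypothetical measurements of one expectation value at amplified noise levels.
noise_scales = np.array([1.0, 1.5, 2.0])
measured = np.array([0.42, 0.31, 0.23])

# Assume the signal decays roughly exponentially with the noise scale:
#   measured ~ A * exp(b * scale), so log(measured) is linear in scale.
slope, intercept = np.polyfit(noise_scales, np.log(measured), 1)
zero_noise_estimate = np.exp(intercept)   # extrapolate back to scale = 0

print(f"zero-noise estimate: {zero_noise_estimate:.3f}")
```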
It’s a symbiotic process: the IBM team behind the paper is also looking to bring its error mitigation techniques – and equivalents to Zero Noise Extrapolation – to standard supercomputers. Between the raw power increases of the latest hardware and optimizations in algorithms and techniques (such as smart compression), supercomputing power will keep growing, allowing us to verify our quantum computing work just that little bit further into the era of post-NISQ quantum computers and their deployment of quantum error correction.
That’s the moment where the rope breaks, and quantum will be relatively free of the need to verify its results with classical techniques. That need for classical verification is what’s slowing quantum computing down (beyond the absence of the error correction that would let qubits handle the calculations reliably themselves, of course).
In an interview with Tom’s Hardware for this article, Dr. Abhinav Kandala, manager of Quantum Capabilities and Demonstrations at IBM Quantum, put it beautifully:
Except with quantum, you can then increase the problem’s complexity beyond what supercomputers can handle – and because you have correctly modeled how noise impacts the system, you can still perform the cleanup steps on your noisy results… with some degree of confidence. The farther you stray from the “conclusively truthful” results provided by standard supercomputers, the more likely you are to introduce fatal errors into the calculations – errors that weren’t (and couldn’t be) accounted for in your noise model.
But as long as you can trust your results, you’ve delivered quantum processing capabilities that are useful, and beyond what can be achieved with current-gen, classical Turing machines like the supercomputer at Berkeley. It’s also beyond what was thought possible in our current NISQ (Noisy Intermediate-Scale Quantum)-era computers. And it just so happens that many algorithms designed for near-term quantum devices would fit within the 127 qubits of IBM’s Eagle QPU, which can deliver circuit depths in excess of 60 steps’ worth of quantum gates.
Dr. Kandala then added: “What we’re doing with error mitigation – that is, running short-depth quantum circuits and measuring what are called expectation values, measuring properties of the state – this is not the only thing that people want to do with quantum computers, right? I mean, to unlock the full potential, one does need quantum error correction, and the prevailing feeling was that for anything useful to be done, one can only access that once you have an error-corrected quantum computer.”
“The critical piece was being able to manipulate the noise beyond pulse stretching,” said Dr. Kandala. “Once that began to work, we could do more complicated extrapolations that could suppress the bias from the noise in a way we weren’t able to do previously.”
ZNE is likely to become a staple of any quantum computing approach – error mitigation is an essential requirement for the error-prone NISQ computers we currently have, and will likely still be required even when we arrive at the doorstep of error correction – an approach that tasks certain qubits with correcting errors in other qubits’ calculations.
The work done by IBM here has already had an impact on the company’s roadmap – ZNE has the appealing quality of making better qubits out of those we can already control within a Quantum Processing Unit (QPU). It’s almost as if we got a megahertz boost – more performance (less noise) without any additional logic. We can be sure these lessons are being considered and implemented wherever possible on the road to a “million-plus qubits”.
It's also difficult to ignore how this work showcases that there isn't really a race between quantum and classical: the future is indeed Fusion, to play a little with AMD's motto of old. That Fusion will see specific computing elements addressing specific processing needs. Each problem, no matter how complex, has its tool, from classical to quantum; and human ingenuity demands that we excel at using all of them.
That proverbial rope between standard supercomputers and quantum computers only stretches so far – but IBM is finding cleverer and cleverer ways to extend its length. Thanks to this research, quantum computers are beginning to see that little bit ahead already. Perhaps Dr. Kandala will get to see what he hopes sooner than even he expects: the playground to quantum utility is now open ahead of schedule. Let's see what humans can do within it, shall we?