
What is quantum utility?

For the first time in history, quantum computers are demonstrating the ability to solve problems at a scale beyond brute force classical simulation — where the only alternatives are carefully crafted, problem-specific classical approximation methods. Those who learn to harness these capabilities today could be among the first to achieve quantum advantage.


14 Nov 2023

Robert Davis

This June, the scientific journal Nature published a landmark paper from researchers at IBM Quantum and UC Berkeley titled “Evidence for the utility of quantum computing before fault tolerance.”1 Since then, the IBM Quantum team has spent a lot of time talking about quantum utility. But what does this term really mean? How is it related to the concept of quantum advantage? And why do IBM researchers say we’re now entering a new era of quantum utility?

Simply put, quantum utility is what we get when a quantum computer is able to perform reliable computations at a scale beyond brute force classical computing methods that provide exact solutions to computational problems. Previously, these problems were accessible only to classical approximation methods — usually methods carefully crafted to exploit the unique structure of a given problem.

Now, computational scientists and other researchers can use quantum computers to tackle these large-scale problems as well. That’s an enormous milestone in the history of the field because, until relatively recently, all quantum computers were small, experimental devices primarily used for advancing the study of quantum computing itself. Entering the era of quantum utility means the quantum computers we have today are valuable, useful tools researchers can use to explore meaningful scientific problems.


“We’re finally moving past the days when quantum computers were only useful for learning more about quantum computing,” said Katie Pizzolato, IBM’s director of Quantum Theory and Computational Science.

“These are scientific tools that are like nothing scientists have ever had access to before. We have an idea of the kinds of problems we want to start exploring with these tools, but there is a lot of exploring to do.”

Quantum utility vs. quantum advantage

In general, IBM researchers think of quantum utility as quantum computation that provides reliable, accurate solutions to problems that are beyond the reach of brute force classical computing methods, and which are otherwise only accessible to classical approximation methods.

Because quantum computing now offers a viable alternative to classical approximation, researchers say it is a “useful” tool for scientific exploration, or that it has “utility.” Quantum utility does not constitute a claim that quantum methods have achieved a proven speed-up over all known classical methods. This is a key difference from the concept of quantum advantage.


IBM researchers think of quantum advantage as quantum computation that delivers a significant, practical benefit beyond either brute force or approximate classical computing methods, calculating solutions in a way that is cheaper, faster or more accurate than all known classical alternatives.

Researchers believe that quantum advantage will not occur as a single moment in time, but rather as an incremental journey — a growing collection of problems for which researchers first demonstrate practical relevance, and then quantum advantage.

“This is why we’re so excited to see what will happen once users start exploring more utility-scale problems with these devices,” said Sarah Sheldon, IBM’s senior manager of Quantum Theory and Capabilities.

“We haven’t yet found a practical problem for which quantum computers offer a meaningful speedup over classical methods, but the more users experiment with these systems, the more optimistic we are that it will happen.”

Why utility matters

Entering the era of quantum utility means that quantum computers have now reached a level of scale and reliability such that researchers who use them as a tool for scientific exploration may uncover groundbreaking new scientific insights. At the moment, this is especially relevant for researchers working on simulations of quantum systems.

When someone wants to model a quantum system of 20 qubits or so, they can get very good results using brute force classical methods. But once simulations grow to the 50-qubit range and beyond, it seems no amount of classical resources — whether CPUs, GPUs or TPUs — can overcome the need for clever, time-consuming, problem-specific approximations.
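To make that cliff concrete: brute force statevector simulation stores 2**n complex amplitudes for an n-qubit system, so every added qubit doubles the memory requirement. A quick back-of-the-envelope calculation in Python shows why 20 qubits is routine while 50 is out of reach:

```python
# Memory needed for brute force statevector simulation of n qubits:
# 2**n complex amplitudes at 16 bytes (complex128) each.
for n in (20, 50, 100):
    gib = (2**n * 16) / 2**30
    print(f"{n:>3} qubits: {gib:.3g} GiB")

# 20 qubits:  ~0.016 GiB (fits on a laptop)
# 50 qubits:  ~1.7e7 GiB (about 16 pebibytes)
# 100 qubits: ~1.9e22 GiB (far beyond any conceivable machine)
```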

Powered by the growing scale and reliability of IBM Quantum hardware, and advances in runtime compilation that allow researchers to more efficiently execute collections of circuits with shared structure, IBM’s utility experiment demonstrated that quantum computers can deliver reliable results for simulation problems at a scale beyond 100 qubits. This marks a fundamental shift in the history of quantum computing.
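The reliability in that experiment came from error mitigation, specifically zero-noise extrapolation (ZNE): the same circuit is run at several deliberately amplified noise levels, and the measured observable is extrapolated back to the zero-noise limit. Here is a minimal sketch of the extrapolation step, with synthetic values standing in for hardware measurements:

```python
import numpy as np

# Zero-noise extrapolation in miniature. The "measured" values below are
# synthetic stand-ins; on hardware they would come from running the same
# circuit with its noise deliberately amplified by each factor.
noise_factors = np.array([1.0, 1.5, 2.0, 3.0])

rng = np.random.default_rng(7)
ideal = 0.85  # the unknown noise-free value we want to recover
measured = ideal * np.exp(-0.2 * noise_factors) + rng.normal(0, 0.003, 4)

# Fit an exponential decay a * exp(b * x) to the data, evaluate at x = 0.
slope, intercept = np.polyfit(noise_factors, np.log(measured), 1)
zne_estimate = np.exp(intercept)

print(f"raw value at noise factor 1.0: {measured[0]:.4f}")   # biased low
print(f"extrapolated zero-noise value: {zne_estimate:.4f}")  # close to 0.85
```

The actual experiment amplified noise in a controlled way on hardware (probabilistic error amplification) and used more careful fits, but the extrapolation works in the same spirit.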

“Getting reliable results at this scale is something many people doubted would ever be possible on current devices,” said Kristan Temme, a principal research staff member in Quantum Theory and Capabilities at IBM. “Quantum computers will continue advancing, and so will classical approximation methods. Our hope is that we will start to see a back-and-forth between the two sides, which in time the quantum device will end up winning.”

To model utility-scale quantum systems with classical approximation methods, researchers must find ways to exploit the unique circuit structures of each individual simulation problem so they can come up with an approximation that works for that specific problem. Because utility-scale problems are too large to be verified with brute force classical methods, classical approximations do not offer strong guarantees of accuracy.
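As a toy illustration of what exploiting structure means: tensor-network methods such as matrix product states compress a state by splitting it across a cut and keeping only the largest singular values. That works when entanglement across the cut is low, and fails for a generic state, which is why each approximation has to be tailored to its problem. A sketch of the core move in NumPy:

```python
import numpy as np

# The core move of matrix-product-state compression: reshape the state
# across a cut, take the SVD, keep only the chi largest singular values.
rng = np.random.default_rng(0)
n, chi = 20, 64

# Worst case: a random, structureless (highly entangled) state.
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

_, s, _ = np.linalg.svd(psi.reshape(2**(n // 2), 2**(n // 2)),
                        full_matrices=False)

# Fidelity of the truncated state equals the weight of the kept values.
fidelity = np.sum(s[:chi] ** 2)
print(f"kept {chi} of {len(s)} singular values, fidelity = {fidelity:.2f}")
# Well below 1 here; a weakly entangled state would give fidelity near 1
# with the same truncation, because its weight concentrates in a few values.
```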

This verification issue is another arena where quantum computers will soon prove their value. Classical approximations are always based on some simplifying assumption, so researchers cannot guarantee their accuracy without some form of verification. Now that IBM has shown quantum computers can deliver reliable results for utility-scale problems, researchers can begin using quantum devices to help verify classical approximations.


Charting the course to quantum advantage

In many ways, IBM’s recent emphasis on quantum utility is an effort to introduce a more nuanced language for talking about how quantum computers are advancing, and to embrace the value quantum computers can offer even before demonstrating quantum advantage. However, it’s also about establishing a more pragmatic framework for planning the journey to quantum advantage.

“Utility is basically the first key step towards a demonstration of advantage,” said Abhinav Kandala, IBM’s manager of Quantum Capabilities and Demonstrations.

“You want to show that quantum machines can provide reliable results for problems at a scale beyond what we can do with brute force classical simulation. Once you show that, the next step is to find hard problems that are valuable to researchers and solvable with quantum computation. Being able to do both gets you quantum advantage.”


IBM’s path to quantum advantage

Step 1: Run quantum circuits faster on quantum hardware

Chart a path to develop quantum hardware and software that together run noise-free estimators of quantum circuits faster than would be possible using classical hardware alone (see the sketch after these steps).

Step 2: Map interesting problems to quantum circuits

Find applications that can be solved only with quantum circuits, and which are known to be difficult to simulate using classical techniques. This can only be done in partnership with the wider quantum community.
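As a concrete illustration of Step 1, here is a minimal sketch of requesting a mitigated expectation value through the Qiskit Runtime Estimator primitive. The circuit and observable are arbitrary placeholders, and the API shown reflects the interface as of late 2023, which has since evolved:

```python
# NOTE: illustrative sketch only; the Qiskit Runtime API has changed
# in releases after late 2023, so treat names and options as indicative.
from qiskit import QuantumCircuit
from qiskit.quantum_info import SparsePauliOp
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Estimator, Options

n = 100  # a utility-scale width

# Placeholder circuit: a GHZ-style chain of entangling gates.
qc = QuantumCircuit(n)
qc.h(0)
for i in range(n - 1):
    qc.cx(i, i + 1)

observable = SparsePauliOp("Z" * n)  # placeholder global observable

service = QiskitRuntimeService()
backend = service.least_busy(min_num_qubits=n, operational=True,
                             simulator=False)

options = Options()
options.resilience_level = 2  # request zero-noise extrapolation

with Session(service=service, backend=backend) as session:
    estimator = Estimator(session=session, options=options)
    value = estimator.run(qc, observable).result().values[0]
    print(f"mitigated expectation value: {value:.4f}")
```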


IBM researchers say the journey to quantum advantage will be one in which utility-scale experiments empower researchers to first find applications with practical relevance, and then quantum advantage — one problem at a time. To begin this journey, organizations exploring quantum computing must move beyond classical simulations of quantum hardware and small experiments on devices with fewer than 100 qubits.

You need 100 qubits to accelerate discovery with quantum computing. With the ability to run quantum circuits consisting of over 100 qubits and thousands of entangling gates, research organizations have a unique opportunity to make groundbreaking, fundamental advances in science.


Utility-scale experiments are crucial to helping the quantum community better understand which algorithms will scale up effectively to larger quantum systems, and in helping determine which applications have the greatest potential for advantage. Even today, research groups in fields like condensed-matter physics can use the current generation of 100+ qubit systems to explore new problem scales.

“We’re enabling enterprise and research organizations to use the capabilities that power utility-scale demonstrations so they can explore use cases at a non-trivial scale,” said Tushar Mittal, IBM’s head of product for Quantum Services.

“We’re entering a time where, now that we have the capabilities to take on large-scale problems, we need help from our partners to figure out which of their use cases will actually benefit. Ultimately, it will be our clients and partners who claim the first instances of quantum advantage, not IBM.”

These research partnerships and the search for useful quantum computing applications also reveal valuable information about the near-term hardware and software capabilities needed to make goals like error correction, fault tolerance, and ultimately quantum advantage a reality.

“Everything we’re doing to further the reach of the error mitigation techniques we use for utility-scale quantum computing today — lowering error rates, building faster processors, etc. All of that serves to reduce the overhead for error correction in the future,” Kandala said.

“So much of what we know about error correction today came out of IBM, and we continue to be leaders in quantum error correction research,” Pizzolato said. “But there isn’t a quantum system anywhere in the world that can actually implement error correction at scale yet, so we also need to put effort into continuously understanding what the systems we do have available are really capable of, because that is the only path to discovery.”

Reaching this understanding will require more quantum experiments and more benchmarking against ever-improving classical methods — a back-and-forth between quantum and classical methods. Kandala says we’ve already seen the beginnings of how this will play out in real life, pointing to the fact that the IBM-UC Berkeley utility experiment was quickly followed by numerous papers demonstrating new classical methods designed to match or exceed the IBM result.

In early July, the authors of the utility paper published a follow-up paper2 on arXiv to propose new classical methods for benchmarking their original experiment, and to compare their new methods with the classical approximation methods released by other groups. They found that the results of these classical approximation methods disagreed with one another by roughly 20%, with the original IBM-UC Berkeley results falling well within that spread.

“Quantum computing is finally proving itself as a computational tool for scientific exploration,” Kandala said. “I’m excited to see what we can do with the next set of improvements to quantum hardware, and with input from the community on where to look for challenging circuits.”



References

  1. Kim, Y., Eddins, A., Anand, S. et al. Evidence for the utility of quantum computing before fault tolerance. Nature 618, 500–505 (2023). https://doi.org/10.1038/s41586-023-06096-3

  2. Anand, S., Temme, K., Kandala, A., Zaletel, M. Classical benchmarking of zero noise extrapolation beyond the exactly-verifiable regime. arXiv:2306.17839. https://doi.org/10.48550/arXiv.2306.17839
