Quantum Advantage with NISQ Devices
- QCR by GQI

- Oct 31, 2021
- 4 min read
By Dr. Brajesh Gupt
Today, after over four decades of research, quantum computing hardware is no longer a distant dream. While still limited by noise and lacking fault tolerance and error correction, current quantum hardware, commonly known as noisy intermediate-scale quantum (NISQ) devices, has propelled efforts to achieve the first instance of quantum advantage. This has been made possible by recent developments in near-term quantum algorithms, such as variational quantum algorithms, which can potentially harness the power of NISQ devices despite the noise. Quantum advantage with NISQ devices is expected to have many scientific and industrial applications.
The first applications of quantum advantage with NISQ algorithms are expected in quantum chemistry, machine learning, combinatorial optimization, and quantum linear algebra, which are poised to directly affect the industries of drug discovery, robotics and automation, logistics, finance, and quantum foundations. For example, the variational quantum eigensolver (VQE) has seen promising advances in studying molecular quantum chemistry on quantum computers; it can potentially find the ground state of molecular Hamiltonians faster than classical computers and eventually lead to an improved understanding of protein-folding dynamics and drug discovery. Furthermore, in supervised machine learning, quantum kernel estimation and variational quantum classifiers are two leading methods that can use NISQ devices. In unsupervised machine learning, NISQ devices can be used to build quantum Boltzmann machines, quantum generative adversarial networks, and quantum circuit Born machines. In combinatorial optimization, the quantum approximate optimization algorithm (QAOA) lends itself naturally to problems like Max-Cut and Max-Sat, to which many optimization problems in fields like logistics, electronic circuit layout design, statistical physics, and automotive configuration can be mapped. In a recent preprint, the Google quantum team proposed a new paradigm of near-term quantum algorithms in which a 16-qubit quantum computer was used to perform ground-state many-electron calculations. This provides yet another example of how quantum advantage can be achieved using NISQ devices in the near future.
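To make the Max-Cut mapping concrete, here is a minimal classical sketch (the graph and values are hypothetical, not from the article): it evaluates the cut-value cost function over every bitstring of a small graph, which is exactly the objective whose optimum QAOA tries to approximate with a shallow parametrized circuit.

```python
import itertools

# Hypothetical 4-node graph: a square with one diagonal. Each qubit/bit
# assigns its node to one side of the partition; edge (i, j) contributes
# 1 to the cut value when its endpoints land on opposite sides.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cut_value(bits, edges):
    """Number of edges crossing the partition described by the bitstring."""
    return sum(bits[i] != bits[j] for i, j in edges)

# Brute force over all 2^n partitions -- the search space QAOA explores
# with a shallow variational circuit instead of exhaustive enumeration.
n = 4
best = max(itertools.product([0, 1], repeat=n), key=lambda b: cut_value(b, edges))
print(best, cut_value(best, edges))  # alternating partition cuts 4 of 5 edges
```

Any problem phrased this way (logistics routing, circuit layout, spin glasses) inherits the same diagonal cost function, which is why Max-Cut serves as the canonical QAOA target.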
With continuous innovation in quantum nanoelectronics, the size and quality of quantum devices have evolved significantly over the last five years. IBM, Google, Rigetti, IonQ, ColdQuanta, and Honeywell are among the major industry players building quantum hardware. As of today, IBM offers 65-qubit hardware, Google 54 qubits, Rigetti 31 qubits, and IonQ 32 qubits, all accessible through their respective cloud services. However, these devices are not yet enough to show quantum advantage, which may require around 200-300 qubits to beat current classical computers at a useful task at hand. Scaling to such sizes, though not easy, is at the forefront of quantum hardware research. For example, IBM has announced plans to deliver 1000-qubit hardware by 2023, with an unclear timeline for a million-qubit device. Google, on the other hand, has a 10-year timeline for building million-qubit quantum hardware, with no clear intermediate milestones for thousand-qubit systems similar to IBM’s. Furthermore, companies like Atom Computing and Pasqal are expected to launch quantum devices with approximately 100-200 qubits. Then there are companies like PsiQuantum that are focusing solely on the final goal of building error-corrected, fault-tolerant quantum devices with over a million qubits.

Figure 1: The quantum advantage from NISQ devices is expected to find first applications to various sectors including pharmaceuticals, machine learning, quantum chemistry, logistics and finance among others.
There are two main reasons why NISQ applications don’t require fault-tolerant, fully error-corrected hardware: (i) NISQ algorithms are inherently noise resilient, so the global minimum of the cost landscape remains unaffected by various hardware noise sources; (ii) NISQ algorithms are approximate, so one doesn’t need an exact computation, unlike, for instance, Shor’s factoring algorithm, which requires exact numbers and is extremely sensitive to errors. Consider VQE, for example: one begins with an ansatz to approximate the quantum state and then uses the difference in energy values between subsequent optimization steps, rather than the energy values themselves, to guide the optimizer towards the ground state. Hence, even with errors, the optimizer can converge to a value representing a potential ground state. This noise-resilience feature of NISQ algorithms provides an appreciable level of error mitigation in variational quantum circuits, avoiding the need for full error correction. As a result, one can directly use physical qubits to encode the computation, keeping the qubit count low and avoiding logical encoding altogether. Furthermore, low circuit depths are sufficient for variational algorithms, which also keeps the total qubit count as low as possible. These features make NISQ algorithms the best hope for achieving quantum advantage in the near term.
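The optimization loop described above can be sketched classically. The following toy example (the 2x2 Hamiltonian, the single-parameter ansatz, and the learning rate are illustrative assumptions, not the article's setup) mimics a VQE iteration: a parametrized trial state whose energy is minimized by an optimizer driven by energy differences rather than absolute energies.

```python
import numpy as np

# Toy 2x2 "molecular" Hamiltonian (hypothetical values for illustration).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    """Expectation value <psi(theta)|H|psi(theta)> for a one-parameter ansatz."""
    psi = np.array([np.cos(theta), np.sin(theta)])
    return psi @ H @ psi

# Gradient descent using finite differences: only *differences* of measured
# energies steer the parameter, as in the VQE loop described above.
theta, lr, eps = 0.3, 0.2, 1e-4
for _ in range(200):
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy, for comparison
print(energy(theta), exact)       # converged variational energy vs. exact value
```

Because only energy differences enter the update, a constant energy offset introduced by noise shifts the whole landscape without moving its minimum, which is the noise-resilience argument made above.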
Despite offering a strong possibility of quantum advantage in the near term, the NISQ architecture faces various challenges, including hardware noise and the efficiency and trainability of the algorithms. For example, the existence of barren plateaus (BPs) may destroy all hope of achieving a quantum advantage. Furthermore, even though these algorithms are inherently noise resilient, hardware noise can slow down the convergence rate and sometimes even prevent the optimizer from reaching the noise-free global minimum altogether. Noise resilience is not well understood beyond roughly 100-qubit systems and a small set of problems. A possible strategy to strengthen noise resilience would be to use deeper quantum circuits, but that would increase the qubit requirements and at the same time may increase the chances of encountering BPs. Certain classes of problems, though implementable on NISQ architecture, might require an exponentially large number of samples, which defeats the purpose of using a quantum device in the first place. Errors also grow with the number of qubits and, for hundreds of qubits, might become more than error mitigation techniques can handle.
We will need to find a sweet spot in the trade-off among errors, noise, efficiency, and qubit counts where quantum advantage can be achieved in the near term. Current developments on both the quantum software and hardware fronts to mitigate errors and improve noise resilience provide promising avenues toward realizing quantum advantage with near-term devices. As 200-300 qubit systems become available and better noise-resilience strategies are developed, it is expected that a case of quantum advantage can be achieved in the next 2-5 years.