Quantum Algorithms Outlook 2023 - Long Summary for QCR Paid Subscribers
- QCR by GQI

- Jan 14, 2023
- 10 min read
The full report is available for purchase on the GQI website.
We do now have real quantum applications, though these are currently modest compared to our expectations. As early adopter interest grows, how are the prospects for commercially relevant algorithms shaping up? We examine both powerful challenges and powerful new opportunities.
The strategic algorithms playing field
Quantum computers are set to be much slower than conventional computers when measured on just their clock speed. The real advantage they offer comes from the unique quantum algorithms they promise to run. Much hype has surrounded the claims and expectations for what such algorithms can achieve.
Some quantum algorithms offer remarkable (exponential) speedups, others more modest (quadratic) speedups. Some are heuristics whose benefits must be evaluated by trial and error.
It is not possible to divorce algorithms from the specs of the hardware on which they are intended to run. Academics have been developing quantum algorithms for 30 years. Early quantum hardware varied from non-existent to very poor. However, it was known that beyond a certain fidelity threshold gate-model architectures could in principle use quantum error correction to turn noisy physical qubits into better logical qubits. It was therefore natural for algorithm theorists to focus on the abstraction of fault tolerant quantum computation (FTQC).
The most famous gate-model quantum primitives all date from this era: the quantum Fourier transform (QFT), used, for example, in Shor’s algorithm for factoring; quantum phase estimation (QPE), used, for example, in the HHL algorithm for linear algebra and in quantum simulation; and amplitude amplification (AA), used, for example, in Grover’s algorithm for unstructured search and in discrete-time quantum walks for Monte Carlo.
QFT and QPE typically offer an exponential advantage, though sometimes with important caveats. AA typically offers a quadratic advantage, also typically with caveats. These primitives typically require the high-depth quantum circuits implicitly provided by FTQC.
More recently it has been shown that all of these primitives can be understood as arising from one unified framework, a grand unified quantum primitive - the quantum singular value transformation (QSVT).
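To make the first of these primitives concrete, here is a minimal sketch of ours (not taken from the report) of the n-qubit QFT written as a plain NumPy matrix. It is simply the normalised discrete Fourier transform on the 2^n amplitudes, which is why period-finding tasks such as factoring and phase estimation map onto it so naturally.

```python
# Minimal illustrative sketch: the n-qubit QFT as a 2^n x 2^n unitary matrix.
import numpy as np

def qft_matrix(n_qubits: int) -> np.ndarray:
    """Return the unitary of the quantum Fourier transform on n_qubits."""
    dim = 2 ** n_qubits
    omega = np.exp(2j * np.pi / dim)
    j, k = np.meshgrid(np.arange(dim), np.arange(dim))
    return omega ** (j * k) / np.sqrt(dim)

U = qft_matrix(3)
# Sanity check: the QFT is unitary, so U U^dagger should be the identity.
assert np.allclose(U @ U.conj().T, np.eye(8))
```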
When physical devices improved to the point of demonstrating some ‘beyond classical’ capabilities, a new focus naturally developed: what might we be able to do directly with these noisy intermediate-scale quantum (NISQ) devices?
For gate-model approaches, limited fidelities mean that high-depth quantum circuits are not feasible. Variational quantum algorithms (VQAs) have emerged as the most promising body of techniques aimed at NISQ devices, combining low-depth quantum circuits with classical processing in an iterative loop.
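As a toy illustration of that iterative loop (ours, not a production VQA), the sketch below optimises a one-parameter ansatz Ry(theta)|0> against a Pauli-Z cost function. On real hardware the energy would be estimated by sampling a shallow circuit rather than by the exact simulation used here.

```python
# Minimal illustrative sketch of the hybrid quantum-classical loop in a VQA.
import numpy as np
from scipy.optimize import minimize_scalar

def energy(theta: float) -> float:
    """Expectation value of Pauli-Z in the state Ry(theta)|0>."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    z = np.array([[1, 0], [0, -1]])
    return float(state @ z @ state)

# The classical optimiser repeatedly queries the (simulated) quantum circuit.
result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print(f"optimal theta = {result.x:.3f}, energy = {result.fun:.3f}")  # ~pi, -1
```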
Quantum annealing (QA) can be seen as a NISQ era implementation of adiabatic quantum computing, an alternative to gate-model computation.
These techniques are heuristics. In contrast to the full scale techniques described above, we have no guarantee of their accuracy, performance or the speedup they may offer.
As developer roadmaps increasingly project forward to the introduction of fault tolerant techniques, it is becoming clear that early FTQC devices will still not offer the specs to run all full-scale FTQC algorithms. Algorithms suited to this new middle ground are therefore an increasingly important focus.
For a review of recent hardware developments read Quantum Hardware Outlook 2023.
The NISQ era
Some look for early NISQ devices to step up and support commercially useful applications in the next 2-5 years.
Opinions differ on requirements. Some believe quantum annealing is already starting to do this. GQI believes gate-model protagonists are typically targeting:
100-200+ physical qubits
99.99%+ fidelities, especially 2Q gate fidelity
High qubit connectivity
Advanced error mitigation techniques are often envisaged to squeeze more from these circuits. However, we need to be careful not to double-count benefits: those built into the top end of the hardware vendor’s stack, the optimisations provided by the software platform’s compiler, and the clever tricks an algorithm developer may be planning to exploit.
It is important to realize that there is no guarantee that quantum algorithms can deliver on their part of this bargain. Classical algorithms are a fierce competitor. Niche applications certainly look possible but broad quantum adoption looks much less likely without a further breakthrough.
A low-depth algorithm breakthrough could still yield broad NISQ quantum advantage. However, without further progress on algorithms the alternative could be a hard quantum winter. Making quantum software startup financial plans robust to both these scenarios is a challenge.
Early FTQC
Others focus on the potential that will be realized by larger machines that deliver FTQC. However, early FTQC machines will still be limited in their capabilities.
With standard error correction techniques, overheads are very high. A system with 1M physical qubits might offer only 200-300 truly high-performance logical qubits, though better fidelities or better connectivity could radically improve this.
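To give a rough sense of where figures like this come from, here is a back-of-envelope sketch of ours using textbook surface-code rules of thumb. The threshold, target logical error rate and routing/magic-state overhead factor are loose assumptions for illustration, not vendor data.

```python
# Back-of-envelope estimate of logical qubit count from 1M physical qubits,
# using rule-of-thumb surface-code scaling (all figures are assumptions).
p_phys = 1e-3            # assumed physical error rate
p_threshold = 1e-2       # rough surface-code threshold
target_logical = 1e-14   # assumed target logical error rate per qubit per cycle

# Grow the code distance d until the rule-of-thumb logical error rate is met:
# p_L ~ 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2)
d = 3
while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > target_logical:
    d += 2  # surface-code distances are odd

physical_per_logical = 2 * d ** 2   # data + syndrome qubits per logical qubit
overhead_factor = 3                 # assumed routing / magic-state overhead
logical_qubits = 1_000_000 // (physical_per_logical * overhead_factor)
print(f"distance {d}: ~{logical_qubits} logical qubits")
# Under these assumptions: a couple of hundred 'good' logical qubits from ~1M physical.
```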
If we believe that multiple of today’s vendors can deliver on their announced roadmaps, we will see broad early FTQC in 5-10 years. Many do not see things playing out so smoothly. Scaling up is hard, and some believe only their approach has what it takes to deliver eye-of-the-needle FTQC.
Early FTQC devices will still have:
A limited number of logical qubits
Difficulty with certain controlled operations (e.g. multi-qubit control gates)
A need to limit maximum logical circuit depth
Slow (perhaps very slow) logical clock speeds
Significant innovation is ongoing to adapt established FTQC algorithms to fit within these constraints. Developers need to minimize the requirement for ancilla qubits. They need to use classical processing and multiple runs to reduce circuit depth (even at the expense of higher notional runtime). A focus on the precision actually required by a particular application (and on which parts of the algorithm need that full precision) will likely be important.
High overheads and slow logical clock speeds compared to classical computers will mean that algorithms will need to offer a significant speedup in order to offer a practical advantage in real world applications.
High-polynomial, or preferably superpolynomial or exponential, speedups will be essential.
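To see why, consider a crossover estimate of our own (the clock rates and per-step gate cost below are purely illustrative assumptions, not GQI resource estimates): a fault-tolerant machine with a slow logical clock and a large per-step circuit cost must catch up with a fast classical processor before any speedup pays off.

```python
# Illustrative crossover estimate for polynomial quantum speedups.
f_c = 1e9   # assumed classical op rate (ops/s)
f_q = 1e4   # assumed effective logical op rate after error correction (ops/s)
c_q = 1e4   # assumed logical gates per algorithmic step (oracle/arithmetic cost)

def crossover(k: float) -> float:
    """Problem size N at which N / f_c equals c_q * N**(1/k) / f_q."""
    return (c_q * f_c / f_q) ** (k / (k - 1))

for k, label in [(2, "quadratic"), (4, "quartic")]:
    n = crossover(k)
    print(f"{label} speedup: crossover at N ~ {n:.0e}, "
          f"i.e. ~{n / f_c / 3600:.1e} classical hours")
# Under these assumptions a quadratic speedup only pays off on problems that
# already take years classically, while a quartic one pays off within hours.
```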
The longer term
The longer term landscape will be shaped by developments that are even more uncertain.
Goliath FTQC - the most direct future path would simply be brute-force scaling of current hardware architectures. In the end we want 10,000+ logical qubits. Large device footprints could lead to truly immense machines, putting pressure on affordability.
Distributed FTQC - the emergence of quantum-enabled networks provides new niche opportunities. In the end, an entanglement-based Quantum Internet could potentially allow a literally exponential multiplication of quantum resources.
Turbo FTQC - to allow a fuller range of quantum speedups to become viable we would ideally like a new architecture with faster logical clock speeds, low overheads and capabilities such as a hardware-efficient implementation of QRAM. No one is yet building such an architecture, but it is reasonable to believe that someday someone will.
FQQC opportunities
Not all applications require lots of qubits. ‘Few qubit’ quantum computing (FQQC) devices may find a variety of uses, particularly in network-related applications.
Cryptographic applications may provide an early opportunity in quantum enabled networks. Some see helping to join up a future entanglement based Quantum Internet as the biggest opportunity of all.
For some quantum algorithm players, a presence in FQQC may be a stepping stone to wider network-enabled applications.
Key challenges
Today’s quantum hardware does not yet close the gap with the specs that quantum algorithm developers want to target. Even with currently anticipated hardware improvements, progress on algorithms is still needed to bring them into alignment with real world use cases on the timescales the sector, and its investors, commonly assume. Such work is being very actively pursued.
Looking across recent progress, GQI believes the key questions fall along important time horizons:
2-5 years - what is realistically achievable with NISQ era hardware? Optimism is reasonable for specific use cases; but broad quantum advantage looks challenging.
5-10 years - what might be addressed with early FTQC devices? Proven applications are more modest than we might hope; but this is a relatively new segment. We can expect further innovation. Quantum resource estimation is key.
In the long term - is the hype around quantum computing justified? GQI continues to believe that the long-term prognosis is bright.
Early applications
Early applications do now exist, though genuine business deployments remain rare. Quantum annealing has made progress, but as a heuristic must still face stiff classical competition. Verifiable random numbers are a modest application in their current form but interest is growing and there is potential for such services to develop additional unique features in the long term.
Feynman’s dream is already being realized - early devices are simulating novel physics. Such ‘analog’ quantum computing has the potential to be adapted for more general business problems if the right business model can be found. For gate-model devices, variational quantum algorithms should not be discounted. However, these remain heuristic approaches, with the challenges that implies.
Quantum advantage revisited
We now know that early ‘quantum advantage’ demonstrations weren’t quite what they seemed. Recent breakthrough results for classical algorithms show that noisy ‘random circuit sampling’ experiments do not really demonstrate a scalable route to ‘beyond classical’ calculations. This may combine with the unwinding of some of the excesses of quantum hype to dampen enthusiasm, but the true implications are more nuanced.
The concept of exponential quantum advantage remains important. We do still have evidence of ‘beyond classical’ results from ‘Gaussian boson sampling’. Importantly, early quantum devices across multiple qubit hardware types do genuinely seem to be accessing the uniquely large computational space promised by quantum computing’s proponents.
Complexity theory continues to offer far-reaching support to guide the development of the sector. The formal definition of a NISQ complexity class is bringing new theoretical ideas to guide activity in this segment. The recent proof of the NLTS conjecture, a step towards the quantum PCP conjecture, means we still have no reason to doubt that quantum problems can be as hard to solve ‘approximately’ as they are to solve ‘exactly’.
Cryptanalysis
Devices large enough to run the full-scale quantum algorithms that will one day break current common asymmetric cryptographic protocols are still some way off. However, innovation on quantum algorithms remains a potential disrupter that could bring that day closer.
Recent demonstrations of heuristic approaches to factoring are still well short of posing a threat. However, assessments of the reasonable worst case date for ‘Q-day’ (or Y2Q) should not neglect the possibility that such algorithms might work on intermediate scale devices, or that new hardware or error correcting code architectures could disrupt scaling assumptions.
The new Yamakawa-Zhandry construction has demonstrated that our previous understanding of the limits on quantum speedups was too sweeping: exponential speedups are possible in a wider set of problems than we have often assumed. The new technique does not threaten any current practical protocol, but many will find it disconcerting to see it demonstrated in the context of breaking a symmetric cryptographic primitive (even if a hypothetical one).
Quantum simulation
For many this remains a major killer app for quantum computing.
For early materials science applications, estimated quantum system requirements for highly tailored variational quantum algorithms are in some select cases now close to what near-term NISQ hardware may provide, though what benefit they will deliver at this level of approximation is of course not clear.
New approaches are emerging that seek to take advantage of more capable future devices. These feature middle-ground heuristics that draw inspiration from conventional quantum chemistry.
An exciting development has been the adaptation of full scale quantum algorithms (following the ideas of Lin-Tong) to better fit within the likely constraints of early FTQC while maintaining performance guarantees. This promises to bring a wider range of use cases within reach.
However, even for quantum chemistry we need to be careful not to overstate the power of known techniques. The ability to efficiently prepare a sufficiently good input state at the start of the calculation is an important but often neglected assumption. The existence of an end-to-end exponential advantage for generic quantum chemistry problems is not proven.
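To illustrate this state-preparation caveat with a toy example of our own (not an estimate from the report): phase-estimation-style energy algorithms return the ground-state energy with probability governed by the squared overlap between the prepared input state and the true ground state.

```python
# Illustrative sketch: overlap of a cheap trial state with the true ground state
# of a toy two-site Hamiltonian sets the success probability of QPE-style methods.
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)
# Toy transverse-field-Ising-like Hamiltonian, purely for illustration.
H = -np.kron(Z, Z) - 0.5 * (np.kron(X, I2) + np.kron(I2, X))

eigvals, eigvecs = np.linalg.eigh(H)
ground_state = eigvecs[:, 0]

# A cheap mean-field-style trial state |++> as the "prepared" input.
plus = np.array([1, 1]) / np.sqrt(2)
trial = np.kron(plus, plus)

overlap_sq = abs(trial @ ground_state) ** 2
print(f"ground energy {eigvals[0]:.3f}, success probability ~ {overlap_sq:.2f}")
```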
Optimisation
Optimisation problems are a particularly appealing target because of their ubiquitous nature in business.
Quantum heuristic techniques are still working to establish where they can systematically and robustly add value. The story of D-Wave’s multi-year push to get a QUBO problem into deployment illustrates the challenges of driving change. Only large demonstrated benefits typically drive business adoption.
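To give a feel for the problem shape that annealers and related heuristics target, here is a toy example of ours (not D-Wave’s deployed problem): max-cut on a triangle written as a QUBO and solved by brute force.

```python
# Illustrative QUBO: max-cut on a 3-node triangle, minimised as x^T Q x over binary x.
# Each edge (i, j) contributes -1 to Q_ii and Q_jj and +2 to Q_ij (upper triangle).
import itertools
import numpy as np

Q = np.array([[-2,  2,  2],
              [ 0, -2,  2],
              [ 0,  0, -2]])

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # a 2-vs-1 split of the triangle, e.g. (0, 0, 1), cutting two edges
```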
Frustratingly, the best-known full-scale algorithms offer wide applicability but only a quadratic advantage (and often require QRAM). Better speedups than this are necessary to make such algorithms viable on currently envisaged architectures.
New work has also demonstrated that an exponential advantage is theoretically possible in approximating solutions of combinatorial optimization problems with the right underlying data structure. New hybrid techniques promise to bring improved (cubic or quartic) speedups to Monte Carlo simulations. Whether quantum walks may bring an exponential advantage in other areas remains an open area of study.
Linear algebra & quantum machine learning
The unified QSVT framework has allowed a more systematic analysis of where quantum algorithms can be ‘dequantized’. Many generic speedups for common conventional machine learning techniques have been shown to fall into this category. It is important to note that many do still offer speedups, though high polynomial rather than exponential. Here the absence of proposals for efficient QRAM hardware may still be the most important roadblock for many applications.
In parallel, progress in quantum machine learning is targeting areas that promise to avoid the data input bottleneck. Quantum generative models avoid many of the pitfalls of other early approaches and are a leading contender for early quantum machine learning advantage. Quantum neural networks have shown promise in image classification applications. Topological data analysis is an interesting area that resists standard dequantization techniques. Quantum natural language processing (QNLP) stands out as an attractive long-term opportunity area. Still at an early stage of development, QNLP has a growing community of supporters.
Exponential advantage is certainly possible in quantum machine learning where we can exploit suitable structure in the data. New work is throwing light on how group-theoretic concepts can be used to analyze where this may be found.
Insights from communication complexity theory are also shedding new light on the nature of quantum supremacy. The convergence of quantum computing, quantum communication and quantum sensing represents a very interesting future for the field.
Like many new technologies, quantum computing has experienced hyped expectations. Some of that is now bound to unwind as timelines become more realistic. How badly that affects the sector will depend crucially on the next steps in the story.
To watch in 2023
Quantum annealing - where is the quantum niche in a hybrid optimisation landscape?
Quantum randomness - what will be the competitive metrics for this early application?
Feynman’s dream - can someone find a way to make money out of physics simulation?
Quantum supremacy - watch out for bad press, but don’t be distracted.
NISQable - can we find an algorithm that solidly beats classical & is efficiently verifiable?
100x100 challenge - who will seek to tailor algorithms to IBM’s new NISQ spec?
VQE - will resource estimates start to address circuit iteration requirements?
QAOA - will progress on target problem classes identify a concrete application target?
Generative models - will we see movement beyond reference to specific applications?
Krylov QDS - what can this new approach bring within reach?
Lin-Tong - will we see more quantum resource estimates for CDF-QPE techniques?
Monte Carlo - can improved polynomial speedups be firmed up for early FTQC?
QNLP - how will the roadmap of potential applications shape up?
Communication complexity - what applications might exponential speedups here address?
Yamakawa-Zhandry - what wider applications might this unlock?
Group-theory - can this throw light on where exponential speedups can be found?
The role of structure - will we see a new conjecture to explain quantum speedup limits?