Abstract

Quantum computing holds the potential not only to solve long-standing problems in quantum physics, but also to offer speed-ups across a broad spectrum of other fields. Access to a computational space that incorporates quantum effects, such as superposition and entanglement, enables the derivation of promising quantum algorithms for important tasks, including preparing the ground state of a quantum system or predicting its evolution over time. Successfully tackling these tasks promises insights into significant theoretical and technological questions, such as superconductivity and the design of new materials. The aim of quantum algorithms is to use a series of quantum operations in a quantum circuit to solve a problem beyond the reach of classical computers. However, the noise and limited scale of current quantum computers restrict these circuits to moderate sizes and depths. As a result, many prominent algorithms are currently infeasible to run for problem sizes of practical interest. In response, recent research has focused on variational quantum algorithms, which allow the selection of circuits that act within a quantum device's capabilities. Yet these algorithms can require the execution of a large number of circuits, leading to prohibitively long computation times. This doctoral thesis develops two main techniques to reduce these quantum computational resource requirements, with the goal of scaling up application sizes on current quantum processors. The first approach is based on stochastic approximations of computationally costly quantities, such as quantum circuit gradients or the quantum geometric tensor (QGT). The second method takes a different perspective on the QGT, leading to a potentially more efficient description of time evolution on current quantum computers. Both techniques rely on maintaining available information and computing only the necessary corrections, instead of re-computing possibly redundant data. The main application focus of our algorithms is the simulation of quantum systems, broadly defined to include the preparation of ground and thermal states, as well as real- and imaginary-time propagation. The developed subroutines, however, can also be applied in optimization and machine learning. Our algorithms are benchmarked on a range of representative models, such as Ising or Heisenberg spin models, both in numerical simulations and in experiments on quantum hardware. In combination with error mitigation techniques, the latter are scaled up to 27 qubits, a regime that variational quantum algorithms struggle to reach on noisy quantum computers without our techniques.
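
To give a flavor of the first technique, the Python sketch below shows a simultaneous-perturbation (SPSA-style) stochastic approximation of a gradient, which estimates all parameter derivatives from only two function evaluations regardless of the number of parameters. The loss function, step sizes, and random seed are illustrative placeholders rather than the thesis' actual implementation, and a cheap classical surrogate stands in for the quantum circuit expectation value so the snippet runs without a quantum SDK.

    import numpy as np

    rng = np.random.default_rng(seed=7)

    def loss(theta):
        # Stand-in for an expectation value <psi(theta)|H|psi(theta)> that would
        # otherwise be estimated from quantum circuit measurements.
        return float(np.sum(np.sin(theta) ** 2))

    def spsa_gradient(theta, eps=0.1):
        # A single simultaneous random perturbation yields a stochastic estimate
        # of the full gradient from two loss evaluations, independent of the
        # number of parameters (entries of delta are +/-1, so 1/delta_i == delta_i).
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        diff = loss(theta + eps * delta) - loss(theta - eps * delta)
        return diff / (2.0 * eps) * delta

    theta = rng.uniform(0.0, 2.0 * np.pi, size=8)
    for _ in range(200):
        # Plain gradient descent driven by the stochastic gradient estimate.
        theta = theta - 0.1 * spsa_gradient(theta)
    print("final loss:", loss(theta))

The cost of each update is two circuit evaluations instead of one per parameter, which is the kind of saving that makes such stochastic approximations attractive when circuit executions dominate the runtime.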
