Analog quantum annealers have long been used to train a class of neural network models called Quantum Boltzmann Machines, but their future scalability and resistance to noise remain in question. With recent advances in circuit-model quantum computers, there are great hopes that these devices can be leveraged for machine learning applications in the near term. In this talk we will present a classical-quantum hybrid algorithm to train Quantum Boltzmann Machines on near-term circuit-model quantum computers. The algorithm relies on a method for approximate Gibbs sampling, achieved by variationally minimizing the free energy of the system. The free energy is minimized using the Quantum Approximate Optimization Algorithm (QAOA) for energy minimization, combined with a concurrent variational maximization of the von Neumann entropy of the state input into the system. By minimizing the von Neumann free energy, we minimize an upper bound on the classical free energy and thus achieve near-thermality. We demonstrate an implementation of our algorithm by training a Restricted Boltzmann Machine on a classically simulated noisy quantum computer. We show successful neural network training convergence for noise levels achievable in today's quantum chips.

Host: Patrick Coles
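The central step the abstract describes is approximate Gibbs sampling by variational free-energy minimization. As a purely illustrative classical analogue (not the speakers' implementation), the sketch below minimizes F(p) = <E>_p - T*S(p) over a softmax-parameterized distribution for a toy four-level spectrum and compares the result with the exact Gibbs free energy -T ln Z; the spectrum, parameterization, and temperature are all assumptions made for illustration.

```python
# Minimal sketch of the variational free-energy principle invoked in the talk.
# Toy energies, temperature, and parameterization are illustrative assumptions,
# not the speakers' algorithm or code.
import numpy as np
from scipy.optimize import minimize

T = 1.0                                     # temperature
E = np.array([0.0, 0.4, 1.1, 1.7])          # toy spectrum of a small system

def free_energy(theta):
    """Variational free energy of a softmax-parameterized distribution."""
    logits = theta - theta.max()            # stabilized softmax
    p = np.exp(logits) / np.exp(logits).sum()
    energy = p @ E                          # <E>_p
    entropy = -(p * np.log(p + 1e-12)).sum()  # Shannon entropy S(p)
    return energy - T * entropy

res = minimize(free_energy, x0=np.zeros(4), method="Nelder-Mead")
F_exact = -T * np.log(np.exp(-E / T).sum())  # exact Gibbs free energy -T ln Z
print(f"variational F: {res.fun:.6f}   exact F: {F_exact:.6f}")
```

The variational value upper-bounds the exact free energy and coincides with it when the ansatz family contains the Gibbs distribution, which is the sense in which minimizing the free energy yields near-thermality.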