Video URL
https://pirsa.org/20060019

Reinforcement Learning assisted Quantum Optimization
APA
Wauters, M. (2020). Reinforcement Learning assisted Quantum Optimization. Perimeter Institute for Theoretical Physics. https://pirsa.org/20060019
MLA
Wauters, Matteo. Reinforcement Learning assisted Quantum Optimization. Perimeter Institute for Theoretical Physics, 5 June 2020, https://pirsa.org/20060019
BibTex
@misc{ scivideos_PIRSA:20060019, doi = {10.48660/20060019}, url = {https://pirsa.org/20060019}, author = {Wauters, Matteo}, keywords = {Quantum Matter, Other Physics}, language = {en}, title = {Reinforcement Learning assisted Quantum Optimization}, publisher = {Perimeter Institute for Theoretical Physics}, year = {2020}, month = {jun}, note = {PIRSA:20060019, see \url{https://scivideos.org/index.php/pirsa/20060019}} }
Matteo Wauters, SISSA International School for Advanced Studies
Abstract
We propose a reinforcement learning (RL) scheme for feedback quantum control within the quantum approximate optimization algorithm (QAOA). QAOA requires a variational minimization over states constructed by applying a sequence of unitary operators that depend on parameters living in a high-dimensional space. We reformulate this minimum search as a learning task, in which an RL agent chooses the control parameters for the unitaries, given partial information on the system. We show that our RL scheme learns a policy converging to the optimal adiabatic solution of QAOA found by Mbeng et al. (arXiv:1906.08948) for the translationally invariant quantum Ising chain. In the presence of disorder, we show that our RL scheme allows the training to be performed on small samples and transferred successfully to larger systems. Finally, we discuss QAOA on the p-spin model and how its robustness is enhanced by reinforcement learning. Although the ground state can be found with polynomial resources even in the presence of a first-order phase transition, local optimizations in the p-spin model suffer from the many minima in the energy landscape. RL helps to find regular solutions that generalize to larger systems and makes the optimization less sensitive to noise.
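To make the QAOA structure described in the abstract concrete, the following toy sketch builds the alternating cost/mixing unitaries for a two-spin quantum Ising chain and chooses each layer's angles sequentially. This is not the paper's code: the greedy grid "agent" is a stand-in assumption for the learned RL policy, and all names (Hz, Hx, evolve, cost) are invented for the example.

```python
import numpy as np

# Minimal QAOA sketch for a two-spin quantum Ising chain (illustrative only).
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

Hz = -np.kron(Z, Z)                        # cost Hamiltonian (one ZZ bond)
Hx = -(np.kron(X, I2) + np.kron(I2, X))    # transverse-field mixing Hamiltonian

def evolve(state, H, angle):
    """Apply exp(-i * angle * H) via the eigendecomposition of H."""
    vals, vecs = np.linalg.eigh(H)
    return vecs @ (np.exp(-1j * angle * vals) * (vecs.conj().T @ state))

def cost(state):
    """Expectation value of the cost Hamiltonian Hz."""
    return float(np.real(state.conj() @ (Hz @ state)))

# Sequential parameter choice: at each QAOA layer the "agent" picks the
# (gamma, beta) pair from a symmetric grid that most lowers the cost of
# the partial state -- a greedy stand-in for a learned RL policy.
grid = np.linspace(-np.pi / 2, np.pi / 2, 17)
state = np.full(4, 0.5, dtype=complex)     # |++> initial state
for layer in range(2):                     # QAOA depth p = 2
    best_e, best_s = cost(state), state
    for gamma in grid:
        for beta in grid:
            s = evolve(evolve(state, Hz, gamma), Hx, beta)
            if cost(s) < best_e:
                best_e, best_s = cost(s), s
    state = best_s

final_energy = cost(state)
print(round(final_energy, 3))  # reaches the ground-state energy -1.0
```

For this single ZZ bond the grid contains the exact p = 1 optimum (gamma = pi/4, beta = pi/8 up to signs), so the greedy choice already reaches the ground-state energy -1; the RL formulation in the paper replaces this exhaustive per-layer search with a policy trained from partial observations.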