
Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear Quadratic Regulator Problem

APA

Jovanovic, M. (2020, December 4). Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem [Talk]. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/tbd-255

MLA

Jovanovic, Mihailo. "Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear Quadratic Regulator Problem." The Simons Institute for the Theory of Computing, 4 Dec. 2020, https://simons.berkeley.edu/talks/tbd-255

BibTeX

@misc{scivideos_16836,
  url       = {https://simons.berkeley.edu/talks/tbd-255},
  author    = {Jovanovic, Mihailo},
  language  = {en},
  title     = {Convergence and Sample Complexity of Gradient Methods for the Model-Free Linear Quadratic Regulator Problem},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2020},
  month     = {dec},
  note      = {Talk 16836, see \url{https://scivideos.org/index.php/Simons-Institute/16836}}
}
Mihailo Jovanovic (USC)
Talk number: 16836
Source repository: Simons Institute

Abstract

Model-free reinforcement learning attempts to find an optimal control action for an unknown dynamical system by directly searching over the parameter space of controllers. The convergence behavior and statistical properties of these approaches are often poorly understood because of the nonconvex nature of the underlying optimization problems and the lack of exact gradient computation. In this talk, we discuss the performance and efficiency of such methods by focusing on the standard infinite-horizon linear quadratic regulator problem for continuous-time systems with unknown state-space parameters. We establish exponential stability for the ordinary differential equation (ODE) that governs the gradient-flow dynamics over the set of stabilizing feedback gains and show that a similar result holds for the gradient descent method that arises from the forward Euler discretization of the corresponding ODE. We also provide theoretical bounds on the convergence rate and sample complexity of the random search method with two-point gradient estimates. We prove that the required simulation time for achieving $\epsilon$-accuracy in the model-free setup and the total number of function evaluations both scale as $\log(1/\epsilon)$.
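The random search method with two-point gradient estimates mentioned in the abstract can be illustrated with a minimal sketch. The system below is a hypothetical discrete-time double integrator with a finite-horizon surrogate cost, not the continuous-time setup of the talk; the dynamics matrices appear only inside the black-box cost evaluator, so the search itself is model-free. All names, step sizes, and the initial gain are illustrative assumptions.

```python
import numpy as np

# Hypothetical system, used ONLY inside the black-box cost evaluator;
# the search never touches A, B, Q, R directly (model-free viewpoint).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

def lqr_cost(K, horizon=200, n_init=5):
    """Finite-horizon surrogate for the LQR cost under u = -K x,
    averaged over a few fixed random initial conditions."""
    rng = np.random.default_rng(0)  # fixed seed: deterministic evaluations
    total = 0.0
    for _ in range(n_init):
        x = rng.standard_normal((2, 1))
        for _ in range(horizon):
            u = -K @ x
            total += float(x.T @ Q @ x + u.T @ R @ u)
            x = A @ x + B @ u
    return total / n_init

def two_point_search(K0, steps=300, r=0.05, alpha=1e-4, seed=1):
    """Random search with two-point gradient estimates: for a random
    unit direction U, estimate the directional derivative by the
    symmetric difference (f(K + rU) - f(K - rU)) / (2r) and step
    against it along U."""
    rng = np.random.default_rng(seed)
    K = K0.copy()
    for _ in range(steps):
        U = rng.standard_normal(K.shape)
        U /= np.linalg.norm(U)
        d = (lqr_cost(K + r * U) - lqr_cost(K - r * U)) / (2 * r)
        K = K - alpha * d * U
    return K

K0 = np.array([[1.0, 1.0]])  # assumed stabilizing initial gain
K = two_point_search(K0)
print("cost before:", lqr_cost(K0))
print("cost after: ", lqr_cost(K))
```

Each two-point estimate costs two function evaluations, and to first order every step decreases the cost by roughly $\alpha d^2 \ge 0$, which is the mechanism behind the two-point estimator's favorable sample complexity discussed in the talk.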