
Reinforcement Learning using Generative Models for Continuous State and Action Space Systems

APA

Jain, R. (2020). Reinforcement learning using generative models for continuous state and action space systems. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/tbd-241

MLA

Jain, Rahul. "Reinforcement Learning using Generative Models for Continuous State and Action Space Systems." The Simons Institute for the Theory of Computing, 1 Dec. 2020, https://simons.berkeley.edu/talks/tbd-241.

BibTeX

@misc{scivideos_16821,
  author    = {Rahul Jain},
  title     = {Reinforcement Learning using Generative Models for Continuous State and Action Space Systems},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2020},
  month     = {dec},
  language  = {en},
  url       = {https://simons.berkeley.edu/talks/tbd-241},
  note      = {Talk 16821, see \url{https://scivideos.org/index.php/Simons-Institute/16821}}
}
          
Rahul Jain (USC)
Talk number: 16821
Source Repository: Simons Institute

Abstract

Reinforcement Learning (RL) problems for continuous state and action space systems are among the most challenging in RL. Recently, deep reinforcement learning methods have proven quite effective for certain RL problems with very large or continuous state and action spaces, but such methods require extensive hyperparameter tuning and huge amounts of data, and come with no performance guarantees. We note that such methods are mostly trained "offline" on experience replay buffers. In this talk, I will describe a series of simple reinforcement learning schemes for various settings. Our premise is that we have access to a generative model that can give us simulated samples of the next state. I will introduce the RANDPOL (randomized function approximation for policy iteration) algorithm, an empirical actor-critic algorithm based on randomized neural networks, which successfully solves a challenging robotics problem with continuous state and action spaces. We also provide theoretical performance guarantees for the algorithm: specifically, it achieves arbitrarily good approximation with high probability for any problem instance. I will also touch upon the probabilistic contraction analysis framework for iterative stochastic algorithms that underpins the theory. This talk is based on joint work with Hiteshi Sharma (Microsoft).
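The abstract combines two technical ingredients: randomized networks (a fixed random hidden layer with only the linear output layer trained) and a generative model that supplies next-state samples. The sketch below illustrates that style of empirical policy iteration; it is not the authors' implementation. The interface gen_model(s, a) -> (reward, next_state), the action-grid maximization standing in for a separate actor network, and all names are assumptions made for this example.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_features(dim_in, n_feats):
        """Fixed random hidden layer: weights are drawn once and never trained."""
        W = rng.normal(size=(n_feats, dim_in))
        b = rng.uniform(-np.pi, np.pi, size=n_feats)
        return lambda x: np.cos(x @ W.T + b)  # random-Fourier-style features

    def fit_linear(Phi, y, reg=1e-3):
        """Ridge regression for the output layer, the only trained parameters."""
        A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.T @ y)

    def policy_iteration_step(gen_model, states, action_grid, phi_q, theta_q,
                              gamma=0.99):
        """One empirical policy-iteration step driven by a generative model.

        Critic: Q(s, a) ~ phi_q([s, a]) @ theta_q. Policy improvement is
        approximated by maximizing over a sampled action grid (a stand-in
        for a second, randomized actor network).
        """
        # Improvement: greedy actions under the current critic.
        greedy = []
        for s in states:
            q = [(phi_q(np.concatenate([s, a])[None]) @ theta_q).item()
                 for a in action_grid]
            greedy.append(action_grid[int(np.argmax(q))])
        # Evaluation: one-step targets from generative-model samples.
        feats, targets = [], []
        for s, a in zip(states, greedy):
            r, s_next = gen_model(s, a)
            q_next = max((phi_q(np.concatenate([s_next, a2])[None]) @ theta_q).item()
                         for a2 in action_grid)
            feats.append(np.concatenate([s, a]))
            targets.append(r + gamma * q_next)
        # Refit the critic's output weights by least squares.
        return fit_linear(phi_q(np.array(feats)), np.array(targets))

To run it, one would build phi_q = random_features(dim_s + dim_a, n_feats), start from theta_q = np.zeros(n_feats), and iterate policy_iteration_step until the critic weights stabilize. Because the hidden layer is frozen, each iteration reduces to a convex least-squares fit, which is the property that approximation guarantees for random features typically rest on.

The probabilistic contraction framework mentioned at the end can likewise be stated generically; the symbols here are illustrative and need not match the talk's exact conditions. The iterates are generated by random operators that approximate a true contraction:

    \hat v_{k+1} = \widehat{T}_k \hat v_k,
    \qquad
    \Pr\left( \| \widehat{T}_k v - T v \| \le \epsilon \right) \ge 1 - \delta
    \quad \text{for all } v,

where T is a \gamma-contraction with fixed point v^*. If each random operator tracks T to accuracy \epsilon with high probability, a standard argument shows the iterates concentrate, with high probability, in a ball of radius O(\epsilon / (1 - \gamma)) around v^*.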