
Representation Learning and Exploration in Reinforcement Learning

APA

Krishnamurthy, A. (2020). Representation Learning and Exploration in Reinforcement Learning. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/representation-learning-and-exploration-reinforcement-learning

MLA

Krishnamurthy, Akshay. Representation Learning and Exploration in Reinforcement Learning. The Simons Institute for the Theory of Computing, 30 Oct. 2020, https://simons.berkeley.edu/talks/representation-learning-and-exploration-reinforcement-learning

BibTeX

          @misc{scivideos_16705,
            url = {https://simons.berkeley.edu/talks/representation-learning-and-exploration-reinforcement-learning},
            author = {Krishnamurthy, Akshay},
            language = {en},
            title = {Representation Learning and Exploration in Reinforcement Learning},
            publisher = {The Simons Institute for the Theory of Computing},
            year = {2020},
            month = {oct},
            note = {Talk 16705, see \url{https://scivideos.org/index.php/Simons-Institute/16705}}
          }
          
Akshay Krishnamurthy (Microsoft Research)
Talk number: 16705
Source Repository: Simons Institute

Abstract

I will discuss new provably efficient algorithms for reinforcement learning in rich-observation environments with arbitrarily large state spaces. Both algorithms operate by learning succinct representations of the environment, which they use in an exploration module to acquire new information. The first algorithm, called Homer, operates in the block MDP model and uses a contrastive learning objective to learn the representation. The second algorithm, called FLAMBE, operates in the much richer class of low-rank MDPs and is model based. Both algorithms accommodate nonlinear function approximation and enjoy provable sample and computational efficiency guarantees.
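
For intuition, the following is a minimal sketch (not the authors' code) of the kind of contrastive objective the abstract alludes to for Homer: a classifier is trained to tell real transitions (x, a, x') apart from fake ones in which x' is replaced by an observation drawn from the marginal, and the encoder feeding that classifier serves as the learned representation. All architecture, naming, and hyperparameter choices below are illustrative assumptions, written in PyTorch.

import torch
import torch.nn as nn

class ContrastiveEncoder(nn.Module):
    """Encoder plus transition classifier for a contrastive objective (illustrative)."""
    def __init__(self, obs_dim, num_actions, num_latent_states):
        super().__init__()
        # phi maps a raw observation to a distribution over latent states
        self.phi = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, num_latent_states),
        )
        # scorer judges whether (phi(x), a, phi(x')) looks like a real transition
        self.scorer = nn.Linear(2 * num_latent_states + num_actions, 1)

    def forward(self, x, a_onehot, x_next):
        z = self.phi(x).softmax(dim=-1)
        z_next = self.phi(x_next).softmax(dim=-1)
        return self.scorer(torch.cat([z, a_onehot, z_next], dim=-1)).squeeze(-1)

def contrastive_loss(model, x, a_onehot, x_next):
    # Positives: observed transitions. Negatives: next observations shuffled
    # within the batch, approximating draws from the marginal distribution.
    x_fake = x_next[torch.randperm(x_next.shape[0])]
    logits_real = model(x, a_onehot, x_next)
    logits_fake = model(x, a_onehot, x_fake)
    logits = torch.cat([logits_real, logits_fake])
    labels = torch.cat([torch.ones_like(logits_real), torch.zeros_like(logits_fake)])
    return nn.functional.binary_cross_entropy_with_logits(logits, labels)

The sketch covers only the representation-learning step; in the algorithms described in the talk, the learned representation is then handed to an exploration module that seeks out under-visited parts of the environment.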