
What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?

APA

Kakade, S. (2020). What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation? The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/lower-bounds-batch-reinforcement-learning

MLA

Kakade, Sham. "What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?" The Simons Institute for the Theory of Computing, 30 Oct. 2020, https://simons.berkeley.edu/talks/lower-bounds-batch-reinforcement-learning

BibTeX

@misc{scivideos_16709,
  author = {Kakade, Sham},
  title = {What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?},
  publisher = {The Simons Institute for the Theory of Computing},
  year = {2020},
  month = {oct},
  language = {en},
  url = {https://simons.berkeley.edu/talks/lower-bounds-batch-reinforcement-learning},
  note = {Talk 16709; see \url{https://scivideos.org/index.php/Simons-Institute/16709}}
}
          
Sham Kakade (University of Washington & Microsoft Research)
Talk number: 16709
Source Repository: Simons Institute

Abstract

The area of offline reinforcement learning seeks to utilize offline (observational) data to guide the learning of (causal) sequential decision-making strategies. The hope is that offline reinforcement learning, coupled with function approximation methods (to deal with the curse of dimensionality), can help alleviate the excessive sample-complexity burden in modern sequential decision-making problems. As such, the approach is becoming increasingly important to numerous areas of science, engineering, and technology. However, the extent to which this broader approach can be effective is not well understood: the literature largely consists of sufficient conditions. This work focuses on the basic question of which representational and distributional conditions are necessary for provably sample-efficient offline reinforcement learning. Perhaps surprisingly, our main result shows that even if: 1) we have realizability, in that the true value function of _every_ policy is linear in a given set of features, and 2) our off-policy data has good coverage over all features (under a strong spectral condition), then any algorithm still (information-theoretically) requires a number of offline samples that is exponential in the problem horizon in order to non-trivially estimate the value of _any_ given policy. Our results highlight that sample-efficient offline policy evaluation is simply not possible unless substantially stronger conditions hold, such as low distribution shift (where the offline data distribution is close to the distribution induced by the policy to be evaluated) or representational conditions significantly stronger than realizability. This is joint work with Ruosong Wang and Dean Foster.
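
As a purely illustrative aside (not part of the talk), the sketch below shows the natural estimator in the setting the abstract describes: least-squares fitted policy evaluation with a linear feature map. The names (offline_linear_policy_evaluation, phi, policy, data) are hypothetical. Under the abstract's assumptions, realizability means an exact weight vector exists at every step and coverage means the feature covariance is well conditioned, yet the lower bound says no procedure of this or any other form can avoid exponential-in-horizon offline sample complexity.

    import numpy as np

    def offline_linear_policy_evaluation(data, phi, policy, horizon, dim):
        """Least-squares fitted policy evaluation with linear features (a sketch).

        data:    list of (s, a, r, s_next) tuples collected by some behavior policy
        phi:     feature map, phi(s, a) -> np.ndarray of shape (dim,)
        policy:  target policy, policy(s) -> action
        Returns an estimated weight vector theta with Q_0^pi(s, a) ~ phi(s, a) @ theta.
        """
        theta = np.zeros(dim)  # weights for the step-(h+1) value; terminal value is 0
        X = np.stack([phi(s, a) for s, a, _, _ in data])    # n x dim design matrix
        cov = X.T @ X + 1e-6 * np.eye(dim)                  # regularized feature covariance;
                                                            # "good coverage" keeps this well conditioned
        for _ in reversed(range(horizon)):
            # Bellman backup targets under the target policy:
            # y_i = r_i + phi(s'_i, pi(s'_i)) @ theta_{h+1}
            y = np.array([r + phi(s2, policy(s2)) @ theta for _, _, r, s2 in data])
            theta = np.linalg.solve(cov, X.T @ y)           # ridge-regularized least squares
        return theta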