
Stable Reinforcement Learning with Unbounded State Space

APA

Xie, Q. (2020). Stable Reinforcement Learning with Unbounded State Space. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/tbd-251

MLA

Xie, Qiaomin. "Stable Reinforcement Learning with Unbounded State Space." The Simons Institute for the Theory of Computing, 4 Dec. 2020, https://simons.berkeley.edu/talks/tbd-251.

BibTeX

          @misc{scivideos_16832,
            author = {Xie, Qiaomin},
            title = {Stable Reinforcement Learning with Unbounded State Space},
            publisher = {The Simons Institute for the Theory of Computing},
            year = {2020},
            month = {dec},
            url = {https://simons.berkeley.edu/talks/tbd-251},
            language = {en},
            note = {Talk 16832, see \url{https://scivideos.org/index.php/Simons-Institute/16832}}
          }
          
          
Qiaomin Xie (Cornell)
Talk number: 16832
Source repository: Simons Institute

Abstract

We consider the problem of reinforcement learning (RL) with an unbounded state space, motivated by the classical problem of scheduling in a queueing network. Traditional policies, as well as error metrics designed for finite, bounded, or compact state spaces, require infinitely many samples to provide any meaningful performance guarantee (e.g., an ℓ_∞ error bound) when the state space is unbounded. We therefore need a new notion of performance metric. Inspired by the literature on queueing systems and control theory, we propose stability as the notion of "goodness": the state dynamics under the policy should remain in a bounded region with high probability. As a proof of concept, we propose an RL policy using a Sparse-Sampling-based Monte Carlo Oracle and argue that it satisfies the stability property as long as the system dynamics under the optimal policy respect a Lyapunov function. The assumption that a Lyapunov function exists is not restrictive, as it is equivalent to the positive recurrence (stability) of the underlying Markov chain; moreover, our policy does not use knowledge of the specific Lyapunov function. To make our method sample efficient, we provide an improved, sample-efficient Monte Carlo Oracle that exploits Lipschitz structure of the value function. We also design an adaptive version of the algorithm, based on carefully constructed statistical tests, which finds the correct tuning parameter automatically. This is joint work with Devavrat Shah and Zhi Xu.
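
For context, the Lyapunov assumption above is the standard Foster-Lyapunov drift condition: there exists a function V such that E[V(X_{t+1}) − V(X_t) | X_t = x] ≤ −δ for all x outside some bounded set, which is equivalent to positive recurrence of the chain. As a rough illustration of the kind of sparse-sampling Monte Carlo planning oracle the abstract refers to, the following is a minimal sketch, not the authors' implementation: it assumes a generative-model interface simulator(state, action) -> (next_state, reward), and the single-server-queue example, sampling width, horizon, and discount factor are illustrative choices only.

import random

def sparse_sampling_q(simulator, state, action, actions, depth, width, gamma):
    # Recursive sparse-sampling estimate of Q(state, action): draw `width`
    # next-state samples from the generative model and back up the greedy
    # value one level deeper, down to `depth` levels (zero value at the leaves).
    if depth == 0:
        return 0.0
    total = 0.0
    for _ in range(width):
        next_state, reward = simulator(state, action)
        next_value = max(
            sparse_sampling_q(simulator, next_state, a, actions, depth - 1, width, gamma)
            for a in actions
        )
        total += reward + gamma * next_value
    return total / width

def sparse_sampling_action(simulator, state, actions, depth=3, width=5, gamma=0.95):
    # Greedy action with respect to the sparse-sampling Q estimates.
    return max(actions, key=lambda a: sparse_sampling_q(
        simulator, state, a, actions, depth, width, gamma))

# Toy generative model: a single-server queue whose state is the queue length.
# Action 1 attempts to serve one job, action 0 idles; the reward is a negative
# holding cost, so the planner is pushed toward keeping the queue short.
def queue_simulator(state, action):
    arrival = 1 if random.random() < 0.4 else 0
    service = 1 if (action == 1 and state > 0 and random.random() < 0.7) else 0
    next_state = state + arrival - service
    return next_state, -next_state

if __name__ == "__main__":
    print(sparse_sampling_action(queue_simulator, state=5, actions=[0, 1]))

Note that each planning call in this sketch costs on the order of (width × |actions|)^depth simulator queries, so the width and horizon trade off estimation accuracy against per-step computation; the sketch only illustrates the general sparse-sampling idea, not the sample-efficiency improvements or the adaptive tuning described in the abstract.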