
Batch Policy Learning in Average Reward Markov Decision Processes

APA

Liao, P. (2020). Batch policy learning in average reward Markov decision processes. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/tbd-247

MLA

Liao, Peng. "Batch Policy Learning in Average Reward Markov Decision Processes." The Simons Institute for the Theory of Computing, 3 Dec. 2020, https://simons.berkeley.edu/talks/tbd-247

BibTeX

@misc{scivideos_16828,
  author    = {Peng Liao},
  title     = {Batch Policy Learning in Average Reward Markov Decision Processes},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2020},
  month     = {dec},
  language  = {en},
  url       = {https://simons.berkeley.edu/talks/tbd-247},
  note      = {Talk 16828; see \url{https://scivideos.org/Simons-Institute/16828}}
}
          
Peng Liao (Harvard)
Talk number: 16828
Source repository: Simons Institute

Abstract

We consider the batch (offline) policy learning problem in infinite-horizon Markov decision processes. Motivated by mobile health applications, we focus on learning a policy that maximizes the long-term average reward. We propose a doubly robust estimator of the average reward of a given policy and show that it achieves semiparametric efficiency. The proposed estimator requires estimating two policy-dependent nuisance functions. We develop an optimization algorithm to compute the optimal policy within a parameterized stochastic policy class. The performance of the estimated policy is measured by its regret, i.e., the difference between the best average reward attainable in the policy class and the average reward of the estimated policy, and we establish a finite-sample regret guarantee for the proposed method.
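The two central quantities in the abstract can be written out as follows. This is a standard formulation of the average-reward criterion and regret, not notation taken verbatim from the talk:

```latex
% Long-term average reward of a policy \pi (standard definition;
% R_t is the reward at time t under trajectories generated by \pi):
\eta(\pi) \;=\; \lim_{T \to \infty} \frac{1}{T}\,
  \mathbb{E}_{\pi}\!\left[\sum_{t=1}^{T} R_t\right]

% Regret of the estimated policy \hat\pi relative to the best policy
% in the parameterized stochastic policy class \Pi_\Theta:
\mathrm{Regret}(\hat\pi) \;=\;
  \max_{\pi \in \Pi_\Theta} \eta(\pi) \;-\; \eta(\hat\pi)
```

A finite-sample regret guarantee, as described in the abstract, bounds this quantity with high probability as a function of the number of observed transitions in the batch dataset.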