
Balancing Covariates In Randomized Experiments: The Gram--Schmidt Walk Design

APA

Harshaw, C. (2022). Balancing Covariates In Randomized Experiments: The Gram--Schmidt Walk Design. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/balancing-covariates-randomized-experiments-gram-schmidt-walk-design

MLA

Harshaw, Christopher. "Balancing Covariates In Randomized Experiments: The Gram--Schmidt Walk Design." The Simons Institute for the Theory of Computing, 15 Feb. 2022, https://simons.berkeley.edu/talks/balancing-covariates-randomized-experiments-gram-schmidt-walk-design

BibTeX

          @misc{scivideos_19659,
            author    = {Harshaw, Christopher},
            title     = {Balancing Covariates In Randomized Experiments: The Gram--Schmidt Walk Design},
            publisher = {The Simons Institute for the Theory of Computing},
            year      = {2022},
            month     = {feb},
            language  = {en},
            url       = {https://simons.berkeley.edu/talks/balancing-covariates-randomized-experiments-gram-schmidt-walk-design},
            note      = {Talk 19659; see \url{https://scivideos.org/index.php/Simons-Institute/19659}}
          }
Christopher Harshaw (Yale University)
Talk number: 19659
Source repository: Simons Institute

Abstract

The design of experiments involves an inescapable compromise between covariate balance and robustness. In this talk, we describe a formalization of this trade-off and introduce a new style of experimental design that allows experimenters to navigate it. The design is specified by a robustness parameter that bounds the worst-case mean squared error of an estimator of the average treatment effect. Subject to the experimenter’s desired level of robustness, the design aims to simultaneously balance all linear functions of potentially many covariates. The achieved level of balance is better than was previously known to be possible, considerably better than what a fully random assignment would produce, and close to optimal given the desired level of robustness. We show that the mean squared error of the estimator is bounded by the minimum of the loss function of an implicit ridge regression of the potential outcomes on the covariates. The estimator does not itself conduct covariate adjustment, so one can interpret the approach as regression adjustment by design. Finally, we provide non-asymptotic tail bounds for the estimator, which facilitate the construction of conservative confidence intervals.
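
The abstract describes the design only at a high level; as a concrete illustration, below is a minimal Python sketch of a Gram--Schmidt Walk design in the spirit of the accompanying paper by Harshaw, Sävje, Spielman, and Zhang. The function name gsw_design, the uniformly random pivot rule, and the default robustness parameter phi = 0.5 are illustrative assumptions, not details taken from the talk; treat this as a sketch rather than a reference implementation.

    import numpy as np

    def gsw_design(X, phi=0.5, rng=None):
        """Draw one +/-1 assignment vector that balances the rows of X.

        X   : (n, d) covariate matrix.
        phi : robustness parameter in (0, 1]; larger phi gives more robustness
              (closer to i.i.d. coin flips), smaller phi gives more balance.
        """
        rng = np.random.default_rng(rng)
        n, d = X.shape
        xi = np.max(np.linalg.norm(X, axis=1))        # scale rows to norm <= 1
        # Augmented input vectors b_i = (sqrt(phi) e_i, sqrt(1 - phi) x_i / xi),
        # stored as the columns of B.
        B = np.vstack([np.sqrt(phi) * np.eye(n),
                       np.sqrt(1.0 - phi) * X.T / xi])

        z = np.zeros(n)                               # fractional assignment in [-1, 1]^n
        alive = np.ones(n, dtype=bool)                # coordinates not yet fixed to +/-1
        pivot = -1
        while alive.any():
            if pivot < 0 or not alive[pivot]:
                pivot = int(rng.choice(np.flatnonzero(alive)))   # random pivot
            others = np.flatnonzero(alive & (np.arange(n) != pivot))
            # Step direction: u_pivot = 1, other alive coordinates chosen by least
            # squares so that ||B u|| is as small as possible.
            u = np.zeros(n)
            u[pivot] = 1.0
            if others.size > 0:
                coef, *_ = np.linalg.lstsq(B[:, others], -B[:, pivot], rcond=None)
                u[others] = coef
            # Largest steps keeping z + delta * u and z - delta * u inside [-1, 1]^n.
            idx = np.flatnonzero(u != 0)
            dpos = np.min(np.where(u[idx] > 0, (1 - z[idx]) / u[idx],
                                   (-1 - z[idx]) / u[idx]))
            dneg = np.min(np.where(u[idx] > 0, (z[idx] + 1) / u[idx],
                                   (z[idx] - 1) / u[idx]))
            # Randomize the step so the update is mean-zero (E[z] is unchanged).
            if rng.random() < dneg / (dpos + dneg):
                z = z + dpos * u
            else:
                z = z - dneg * u
            alive = np.abs(z) < 1 - 1e-9
        return np.sign(z).astype(int)                 # +1 = treatment, -1 = control

Because the update is mean-zero, each unit lands in treatment with marginal probability 1/2 under this sketch, so the Horvitz--Thompson estimator of the average treatment effect reduces to (2/n) * sum_i z_i * y_i(observed), with no covariate adjustment performed by the estimator itself, in line with the "regression adjustment by design" interpretation in the abstract. For example, z = gsw_design(X, phi=0.5) draws one assignment, and varying phi trades covariate balance against robustness.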