
Worst-Case Robustness in Machine Learning

APA

Raghunathan, A. (2021, November 10). Worst-Case Robustness in Machine Learning. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/worst-case-robustness-machine-learning

MLA

Raghunathan, Aditi. "Worst-Case Robustness in Machine Learning." The Simons Institute for the Theory of Computing, 10 Nov. 2021, https://simons.berkeley.edu/talks/worst-case-robustness-machine-learning

BibTeX

          @misc{scivideos_18673,
            author    = {Raghunathan, Aditi},
            title     = {Worst-Case Robustness in Machine Learning},
            publisher = {The Simons Institute for the Theory of Computing},
            year      = {2021},
            month     = {nov},
            language  = {en},
            url       = {https://simons.berkeley.edu/talks/worst-case-robustness-machine-learning},
            note      = {Talk 18673; see \url{https://scivideos.org/index.php/Simons-Institute/18673}}
          }
          
Aditi Raghunathan (Stanford)
Talk number: 18673
Source Repository: Simons Institute

Abstract

Current machine learning (ML) systems are remarkably brittle, raising serious concerns about their deployment in safety-critical applications like self-driving cars and predictive healthcare. In such applications, models can encounter test distributions that differ wildly from their training distributions. Trustworthy ML therefore requires strong robustness guarantees from learning, including robustness to worst-case distribution shifts. Robustness to worst-case distribution shifts raises several computational and statistical challenges beyond ‘standard’ machine learning. In this talk, I will present two formal settings of worst-case distribution shifts, motivated by adversarial attacks on test inputs and by the presence of spurious correlations such as image backgrounds. Empirical observations demonstrate (i) an arms race between attacks and existing heuristic defenses, necessitating provable guarantees much as in cryptography; (ii) increased sample complexity of robust learning; and (iii) a resurgence of the need for regularization in robust learning. We capture each of these observations in simple theoretical models that nevertheless yield principled and scalable approaches to overcome the hurdles in robust learning, particularly via the use of unlabeled data.
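
For context, a standard way to formalize worst-case distribution shift (the notation below is illustrative and not taken verbatim from the talk) replaces the usual expected risk under the training distribution P with a supremum over a set U(P) of plausible shifted distributions:

          \[
            \min_{\theta} \; \sup_{Q \in \mathcal{U}(P)} \; \mathbb{E}_{(x,y) \sim Q}\bigl[ \ell(f_\theta(x), y) \bigr]
          \]

Adversarial attacks on test inputs correspond to the special case where each input x may be moved anywhere within an \epsilon-ball, yielding the adversarially robust risk:

          \[
            \min_{\theta} \; \mathbb{E}_{(x,y) \sim P}\Bigl[ \max_{\|\delta\| \le \epsilon} \ell\bigl(f_\theta(x + \delta), y\bigr) \Bigr]
          \]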
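
As a concrete illustration of such an attack, the inner maximization above has a closed form for a linear classifier under an L-infinity budget. The sketch below (NumPy; the names, such as fgsm_perturb, are invented for illustration and are not from the talk) shows that shifting each coordinate by eps against the sign of y * w is exactly the worst case:

          import numpy as np

          rng = np.random.default_rng(0)
          w = rng.normal(size=5)   # weights of a linear classifier f(x) = sign(w.x + b)
          b = 0.1                  # bias
          x = rng.normal(size=5)   # a clean test input
          y = 1.0                  # its true label in {-1, +1}
          eps = 0.25               # L-infinity perturbation budget

          def margin(x):
              # Signed margin y * (w.x + b); negative means misclassified.
              return y * (w @ x + b)

          def fgsm_perturb(x):
              # Worst-case L-infinity perturbation for a linear model: move
              # every coordinate by eps against the sign of y * w. This is
              # optimal because min over ||d||_inf <= eps of y * (w.d) equals
              # -eps * ||w||_1, attained at d = -eps * sign(y * w).
              return x - eps * np.sign(y * w)

          x_adv = fgsm_perturb(x)
          print(f"clean margin: {margin(x):+.3f}, adversarial margin: {margin(x_adv):+.3f}")

For nonlinear models, this inner maximization is intractable in general, which is one source of the arms race between heuristic attacks and defenses described in the abstract.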