
Learning to Control Safety-Critical Systems

APA

Wierman, A. (2022). Learning to control safety-critical systems [Talk]. The Simons Institute for the Theory of Computing. https://old.simons.berkeley.edu/node/22753

MLA

Wierman, Adam. "Learning to Control Safety-Critical Systems." The Simons Institute for the Theory of Computing, 14 Oct. 2022, https://old.simons.berkeley.edu/node/22753.

BibTeX

@misc{scivideos_22753,
  author    = {Wierman, Adam},
  title     = {Learning to Control Safety-Critical Systems},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2022},
  month     = {oct},
  language  = {en},
  url       = {https://old.simons.berkeley.edu/node/22753},
  note      = {Talk 22753; see also \url{https://scivideos.org/index.php/simons-institute/22753}}
}
Adam Wierman (California Institute of Technology)
Talk number: 22753
Source repository: Simons Institute

Abstract

Making use of modern black-box AI tools such as deep reinforcement learning is potentially transformational for safety-critical systems such as data centers, the electricity grid, transportation, and beyond. However, such machine-learned algorithms typically lack formal guarantees on their worst-case performance, stability, or safety, and they are difficult to deploy in distributed, networked settings. So, while their performance may improve upon traditional approaches in “typical” cases, they may perform arbitrarily worse in scenarios where the training examples are not representative due to, e.g., distribution shift, or in situations where global information is unavailable to local controllers. These are significant drawbacks when considering the use of AI tools in safety-critical networked systems. Thus, a challenging open question emerges: Is it possible to provide guarantees that allow black-box AI tools to be used in safety-critical applications? In this talk, I will provide an overview of a variety of projects from my group that seek to develop robust and localizable tools combining model-free and model-based approaches, yielding AI tools with formal guarantees on performance, stability, safety, and sample complexity.
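
To make the idea of combining model-free and model-based approaches concrete, here is a minimal illustrative sketch in Python. It is not the speaker's actual method, and every system constant and function name in it is hypothetical: a black-box learned policy is wrapped by a model-based safety fallback that takes over whenever the known model predicts the learned action would leave a certified safe set.

# Minimal illustrative sketch (hypothetical, not the speaker's method):
# wrap a black-box learned policy with a model-based safety fallback for
# the scalar linear system x_{t+1} = a*x_t + b*u_t.

A, B = 1.2, 1.0    # assumed known (or estimated) model of an unstable system
X_MAX = 10.0       # certified safe set: |x| <= X_MAX
K_FALLBACK = 0.9   # model-based gain; |A - B*K_FALLBACK| = 0.3 < 1, so the
                   # fallback u = -K_FALLBACK * x is stabilizing

def learned_policy(x: float) -> float:
    """Stand-in for a black-box policy, e.g., a deep RL network.
    Deliberately destabilizing (positive feedback) so the fallback engages."""
    return 0.5 * x

def safe_action(x: float) -> float:
    """Use the learned action only if the model certifies that the next
    state stays inside the safe set; otherwise use the robust fallback."""
    u = learned_policy(x)
    if abs(A * x + B * u) <= X_MAX:   # one-step invariance check via the model
        return u
    return -K_FALLBACK * x            # model-based fallback with a stability guarantee

# Roll out the combined controller from a state near the safe-set boundary.
x = 9.0
for t in range(20):
    x = A * x + B * safe_action(x)
    assert abs(x) <= X_MAX            # invariance holds at every step

The one-step invariance check is what converts the black-box policy into a controller with a worst-case safety guarantee: the learned policy acts freely whenever the model certifies the next state is safe, and the stabilizing fallback bounds the behavior otherwise.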