
Learning Some Ill-Conditioned Gaussian Graphical Models

APA

Meka, R. (2020). Learning Some Ill-Conditioned Gaussian Graphical Models. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/learning-some-ill-conditioned-gaussian-graphical-models

MLA

Meka, Raghu. "Learning Some Ill-Conditioned Gaussian Graphical Models." The Simons Institute for the Theory of Computing, 17 Dec. 2020, https://simons.berkeley.edu/talks/learning-some-ill-conditioned-gaussian-graphical-models

BibTeX

@misc{scivideos_16881,
  author = {Meka, Raghu},
  title = {Learning Some Ill-Conditioned Gaussian Graphical Models},
  publisher = {The Simons Institute for the Theory of Computing},
  year = {2020},
  month = {dec},
  language = {en},
  url = {https://simons.berkeley.edu/talks/learning-some-ill-conditioned-gaussian-graphical-models},
  note = {Talk 16881, see \url{https://scivideos.org/index.php/Simons-Institute/16881}}
}
Raghu Meka (UCLA)
Talk number: 16881
Source Repository: Simons Institute

Abstract

Gaussian graphical models (GGMs) have wide-ranging applications in machine learning and the natural and social sciences, where they are one of the most popular ways to model statistical relationships between observed variables. In most settings in which they are applied, the number of observed samples is much smaller than the dimension, and the goal is to learn the model assuming the underlying graph is sparse. While a variety of algorithms (e.g., Graphical Lasso, CLIME) provably recover the graph structure with a logarithmic number of samples, they assume various conditions that require the precision matrix to be, in some sense, well-conditioned. Here we give the first fixed polynomial-time algorithms for learning attractive GGMs and walk-summable GGMs with a logarithmic number of samples without any such assumptions. In particular, our algorithms can tolerate strong dependencies among the variables. We complement our results with experiments showing that many existing algorithms fail even in some simple settings where there are long dependency chains.
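As a minimal illustration of the ill-conditioning the abstract refers to, the sketch below (not from the talk; the tridiagonal construction and the value a = 0.5 are illustrative assumptions) builds the precision matrix of a simple chain-structured GGM with nonpositive off-diagonal entries, as in an attractive model, and shows that its condition number grows with the chain length, even though the graph stays maximally sparse:

```python
import numpy as np

def chain_precision(d, a=0.5):
    # Precision matrix of a d-variable chain GGM (illustrative example):
    # 1 on the diagonal, -a on the two off-diagonals.
    # Nonpositive off-diagonal entries make the model attractive.
    Theta = np.eye(d)
    idx = np.arange(d - 1)
    Theta[idx, idx + 1] = -a
    Theta[idx + 1, idx] = -a
    return Theta

# The chain is as sparse as a connected graph can be (degree <= 2),
# yet the condition number blows up as the dependency chain lengthens.
for d in (10, 50, 200):
    Theta = chain_precision(d)
    print(f"d = {d:4d}  condition number = {np.linalg.cond(Theta):.1f}")
```

For this family the eigenvalues are 1 - 2a*cos(k*pi/(d+1)), so with a = 0.5 the smallest eigenvalue shrinks like 1/d^2 and the condition number grows roughly quadratically in d. Any guarantee whose sample complexity or correctness depends polynomially on the condition number therefore degrades on long chains, which is the regime the talk's algorithms are designed to handle.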