
Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models

APA

Kane, D. (2020). Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/tbd-260

MLA

Kane, Daniel. "Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models." The Simons Institute for the Theory of Computing, 18 Dec. 2020, https://simons.berkeley.edu/talks/tbd-260

BibTeX

          @misc{ scivideos_16886,
            author = {Kane, Daniel},
            title = {Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models},
            publisher = {The Simons Institute for the Theory of Computing},
            year = {2020},
            month = {dec},
            language = {en},
            url = {https://simons.berkeley.edu/talks/tbd-260},
            note = {Talk 16886, see \url{https://scivideos.org/Simons-Institute/16886}}
          }
          
Daniel Kane (UCSD)
Talk number: 16886
Source Repository: Simons Institute

Abstract

We present a new technique for learning latent variable models with many parameters. The basic idea is to use the method of moments to find a large vector space of polynomials that nearly vanish on the parameters of the problem, and then, via a new structural result, to use this space to find a small cover of the set of relevant parameters, after which exhaustive algorithms can be applied. Using this technique, we develop substantially improved algorithms for learning mixtures of spherical Gaussians, mixtures of linear regressions, and non-negative linear combinations of ReLUs.
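
As a rough illustration of the near-vanishing-polynomials idea, here is a hypothetical one-dimensional sketch (a Prony-style toy case, not the talk's actual algorithm, which works in high dimensions and uses the polynomials to build a small cover). For a mixture of k unit-variance Gaussians, the Hermite moments E[He_j(X)] equal the raw moments of the mixing measure over the component means, so the near-null singular vectors of the empirical Hankel matrix of these moments encode polynomials that nearly vanish at every mean. All names and parameters below are illustrative.

# Toy 1-D sketch: recover the means of a mixture of k unit-variance
# Gaussians via polynomials that nearly vanish at those means.
import numpy as np
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(0)
k = 2                                    # number of components (assumed known)
means = np.array([-2.0, 1.5])            # illustrative ground truth
weights = np.array([0.4, 0.6])
comp = rng.choice(k, size=200_000, p=weights)
x = rng.normal(means[comp], 1.0)         # samples from the mixture

# Empirical Hermite moments: E[He_j(X)] = sum_i w_i mu_i^j for j = 0..2k.
m = np.array([He.hermeval(x, np.eye(2 * k + 1)[j]).mean()
              for j in range(2 * k + 1)])

# Hankel matrix H[a, b] = m_{a+b} is a Gram matrix of rank <= k, so its
# smallest singular vector c gives p(t) = sum_j c_j t^j with p(mu_i) ~ 0
# at every component mean mu_i (note c^T H c = sum_i w_i p(mu_i)^2).
H = np.array([[m[a + b] for b in range(k + 1)] for a in range(k + 1)])
c = np.linalg.svd(H)[2][-1]              # right singular vector, smallest sigma

print("recovered means:", np.sort(np.roots(c[::-1]).real))

With the seed above this prints values close to [-2.0, 1.5]; in the regime the talk addresses, the near-vanishing polynomials are instead used to cover the parameter set before exhaustive search.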