
Universality of Neural Networks

APA

Mikulincer, D. (2021). Universality of Neural Networks. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/universality-neural-networks

MLA

Mikulincer, Dan. "Universality of Neural Networks." The Simons Institute for the Theory of Computing, 7 Dec. 2021, https://simons.berkeley.edu/talks/universality-neural-networks

BibTeX

@misc{scivideos_18849,
  author    = {Mikulincer, Dan},
  title     = {Universality of Neural Networks},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2021},
  month     = {dec},
  language  = {en},
  url       = {https://simons.berkeley.edu/talks/universality-neural-networks},
  note      = {Talk 18849; see \url{https://scivideos.org/index.php/Simons-Institute/18849}}
}
          
Dan Mikulincer (MIT)
Talk number: 18849
Source repository: Simons Institute

Abstract

It is well known that, at random initialization, neural networks are well approximated by Gaussian processes as their width tends to infinity. We quantify this phenomenon by providing non-asymptotic convergence rates in the space of continuous functions. In the process, we study the Central Limit Theorem in high and infinite dimensions, as well as anti-concentration properties of polynomials in random variables.
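
To make the Gaussian limit concrete, here is a minimal numerical sketch (not from the talk; the input dimension, widths, sample counts, and the kurtosis diagnostic are all illustrative choices). For a one-hidden-layer ReLU network f(x) = width^{-1/2} * sum_i a_i * ReLU(<w_i, x>) with i.i.d. standard Gaussian weights, the output at a fixed input is a normalized sum of i.i.d. terms, so the classical CLT already forces a Gaussian limit coordinate-wise; the excess kurtosis, which is zero for a Gaussian, decays roughly like 1/width.

import numpy as np

rng = np.random.default_rng(0)

d = 3                       # input dimension (illustrative choice)
x = rng.standard_normal(d)  # one fixed input point
n_nets = 10_000             # independent random networks per width

for width in (2, 16, 128, 1024):
    # i.i.d. standard Gaussian weights for all networks at once
    W = rng.standard_normal((n_nets, width, d))  # hidden-layer weights
    a = rng.standard_normal((n_nets, width))     # output-layer weights
    hidden = np.maximum(W @ x, 0.0)              # ReLU activations, shape (n_nets, width)
    # f(x) = width^{-1/2} * sum_i a_i * ReLU(<w_i, x>)
    f = (a * hidden).sum(axis=1) / np.sqrt(width)

    # Excess kurtosis vanishes for a Gaussian; for this normalized sum of
    # `width` i.i.d. terms it decays like O(1/width), a crude CLT diagnostic.
    var = f.var()
    excess_kurtosis = (f**4).mean() / var**2 - 3.0
    print(f"width={width:5d}  var={var:6.3f}  excess kurtosis={excess_kurtosis:+.3f}")

The talk's result is the function-space strengthening of this picture: convergence to the Gaussian process holds in the space of continuous functions with explicit non-asymptotic rates, not merely at finitely many fixed inputs as in the sketch above.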