
Exact Asymptotics and Universality for Gradient Flows and Empirical Risk Minimizers

APA

Montanari, A. (2021). Exact Asymptotics and Universality for Gradient Flows and Empirical Risk Minimizers. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/tba-152

MLA

Montanari, Andrea. "Exact Asymptotics and Universality for Gradient Flows and Empirical Risk Minimizers." The Simons Institute for the Theory of Computing, 7 Dec. 2021, https://simons.berkeley.edu/talks/tba-152

BibTeX

@misc{scivideos_18847,
  author = {Montanari, Andrea},
  url = {https://simons.berkeley.edu/talks/tba-152},
  language = {en},
  title = {Exact Asymptotics and Universality for Gradient Flows and Empirical Risk Minimizers},
  publisher = {The Simons Institute for the Theory of Computing},
  year = {2021},
  month = {dec},
  note = {Talk 18847; see \url{https://scivideos.org/Simons-Institute/18847}}
}
          
Andrea Montanari (Stanford University)
Talk number: 18847
Source Repository: Simons Institute

Abstract

We consider a class of supervised learning problems in which we are given n data points (y_i, x_i), with x_i a d-dimensional feature vector, y_i a response, and the model parametrized by a vector of dimension kd. We consider the high-dimensional asymptotics in which n and d diverge, with n/d and k of order one. As a special case, this class of models includes neural networks with k hidden neurons. I will present two sets of results:
1. Universality of certain properties of empirical risk minimizers with respect to the distribution of the feature vectors x_i.
2. A sharp asymptotic characterization of gradient flow in terms of a one-dimensional stochastic process.

[Based on joint work with Michael Celentano, Chen Cheng, and Basil Saeed.]
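
As a rough illustration of the setting in the abstract (a sketch of my own, not material from the talk: the planted teacher model, the tanh activation, the sample sizes, and the step size are all illustrative assumptions), the following Python snippet simulates the model class: n data points with d-dimensional Gaussian features, a network with k hidden neurons (kd parameters), trained by gradient descent as a finite-step-size discretization of gradient flow on the empirical risk.

# Illustrative sketch only: parameters and the planted model are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 400, 200, 3                 # n/d and k stay of order one as n, d grow
X = rng.standard_normal((n, d))       # feature vectors x_i (Gaussian here)
W_star = rng.standard_normal((k, d)) / np.sqrt(d)
y = np.tanh(X @ W_star.T).sum(axis=1) # responses y_i from a planted k-neuron model

def empirical_risk(W):
    """Squared-error empirical risk of a two-layer net with k hidden neurons."""
    preds = np.tanh(X @ W.T).sum(axis=1)
    return 0.5 * np.mean((y - preds) ** 2)

def grad(W):
    """Gradient of the empirical risk with respect to the kd weights in W."""
    Z = X @ W.T                        # (n, k) pre-activations
    resid = np.tanh(Z).sum(axis=1) - y # (n,) residuals
    # Chain rule through tanh: dR/dW has shape (k, d)
    return ((resid[:, None] * (1.0 - np.tanh(Z) ** 2)).T @ X) / n

W = rng.standard_normal((k, d)) / np.sqrt(d)
eta = 0.1                              # gradient flow is the eta -> 0 limit
for _ in range(1000):
    W -= eta * grad(W)

print(f"final empirical risk: {empirical_risk(W):.4f}")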