
Learning Staircases

APA

Abbe, E., Boix, E., & Misiakiewicz, T. (2021). Learning Staircases. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/learning-staircases

MLA

Abbe, Emmanuel, et al. "Learning Staircases." The Simons Institute for the Theory of Computing, 7 Dec. 2021, https://simons.berkeley.edu/talks/learning-staircases

BibTeX

          @misc{scivideos_18839,
            url = {https://simons.berkeley.edu/talks/learning-staircases},
            author = {Abbe, Emmanuel and Boix, Enric and Misiakiewicz, Theodor},
            language = {en},
            title = {Learning Staircases},
            publisher = {The Simons Institute for the Theory of Computing},
            year = {2021},
            month = {dec},
            note = {Talk 18839, see \url{https://scivideos.org/Simons-Institute/18839}}
          }
          
Emmanuel Abbe (École polytechnique fédérale de Lausanne), Enric Boix (MIT), Theodor Misiakiewicz (Stanford University)
Talk number: 18839
Source repository: Simons Institute

Abstract

It is known that arbitrary poly-size neural networks trained by GD/SGD can learn any class learnable in SQ/PAC. This is, however, not expected to hold for more regular architectures and initializations. Recently, the staircase property emerged as a condition that appears both necessary and sufficient for certain regular networks to learn with high accuracy, with the positive result established for sparse homogeneous initializations. In this talk, we show that standard two-layer architectures can also learn staircases, with features being learned over time. It is also shown that kernels cannot learn staircases of growing degree. Joint work with Enric Boix-Adsera and Theodor Misiakiewicz.
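
To make the setup concrete, the sketch below defines the degree-d staircase function S_d(x) = x_1 + x_1 x_2 + ... + x_1 x_2 ... x_d on the hypercube {-1,+1}^n, following the definition used in the staircase-property line of work, and trains a standard two-layer ReLU network on it by SGD. This is a minimal illustrative sketch, not code from the talk; the dimension, degree, width, learning rate, and step count are assumptions chosen for demonstration.

# Minimal sketch (illustrative, not from the talk): a standard two-layer
# network trained by SGD on the degree-d staircase function over {-1,+1}^n.
import torch

def staircase(x: torch.Tensor, d: int) -> torch.Tensor:
    """S_d(x) = sum_{k=1}^{d} x_1 * x_2 * ... * x_k for x in {-1,+1}^n."""
    prefix_products = torch.cumprod(x[:, :d], dim=1)  # shape (batch, d)
    return prefix_products.sum(dim=1)

n, d = 30, 5              # input dimension and staircase degree (assumed values)
width = 512               # hidden width of the two-layer network (assumed)
model = torch.nn.Sequential(
    torch.nn.Linear(n, width),
    torch.nn.ReLU(),
    torch.nn.Linear(width, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(5000):
    # Draw a fresh minibatch uniformly from the Boolean hypercube {-1,+1}^n.
    x = torch.randint(0, 2, (256, n)).float() * 2 - 1
    y = staircase(x, d)
    loss = ((model(x).squeeze(-1) - y) ** 2).mean()   # square loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Estimate the population squared error on held-out samples.
x_test = torch.randint(0, 2, (4096, n)).float() * 2 - 1
with torch.no_grad():
    test_err = ((model(x_test).squeeze(-1) - staircase(x_test, d)) ** 2).mean()
print(f"test mean squared error: {test_err.item():.4f}")

Because each monomial x_1...x_k shares its variables with the previous one, gradient descent can pick up the coordinates one step at a time, which is the feature-learning-over-time behavior the abstract describes; a kernel method, by contrast, must pay for each high-degree monomial in isolation.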