
What Functions Do Transformers Prefer to Represent?

APA

Goel, S. (2022). What Functions Do Transformers Prefer to Represent? The Simons Institute for the Theory of Computing. https://old.simons.berkeley.edu/talks/stochastic-optimization-under-distributional-drift

MLA

Goel, Surbhi. "What Functions Do Transformers Prefer to Represent?" The Simons Institute for the Theory of Computing, 7 Oct. 2022, https://old.simons.berkeley.edu/talks/stochastic-optimization-under-distributional-drift

BibTeX

          @misc{scivideos_22705,
            author    = {Goel, Surbhi},
            title     = {What Functions Do Transformers Prefer to Represent?},
            publisher = {The Simons Institute for the Theory of Computing},
            year      = {2022},
            month     = {oct},
            language  = {en},
            url       = {https://old.simons.berkeley.edu/talks/stochastic-optimization-under-distributional-drift},
            note      = {Talk 22705, see \url{https://scivideos.org/simons-institute/22705}}
          }
Surbhi Goel (Microsoft Research and University of Pennsylvania)
Talk number: 22705
Source Repository: Simons Institute

Abstract

Over the past few years, Transformers have revolutionized deep learning, leading to advances in natural language processing and beyond. These models discard recurrence and convolutions in favor of "self-attention," which directly and globally models interactions within the input context. Despite their success, there is currently limited understanding of why they work. In this talk, I will present our recent results on rigorously quantifying the statistical and representational properties of Transformers, which shed light on their ability to capture long-range dependencies efficiently. First, I will show how bounded-norm self-attention layers can represent arbitrary sparse functions of the input sequence, with sample complexity scaling only logarithmically with the context length, akin to sparse regression. Subsequently, I will briefly show how this ability of self-attention to compute sparse functions, together with its ability to compute averages, can be used to construct Transformers that exactly replicate the dynamics of a recurrent model of computation of depth $T$ using only $o(T)$ depth. I will conclude the talk with experimental results on synthetic tasks based on learning Boolean functions and automata. Based on joint works with Jordan T. Ash, Ben L. Edelman, Sham M. Kakade, Akshay Krishnamurthy, Bingbin Liu, and Cyril Zhang.
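
To make the sparse-function claim concrete, here is a minimal sketch, not taken from the talk or the underlying papers, of a single self-attention head whose attention weights concentrate on a small subset S of positions. The context length T, the subset S, the scale beta, and every parameter setting below are hypothetical choices for illustration only; increasing beta (i.e., the weight norm) sharpens the attention onto S, so the head's output approximates the average of the input bits at those positions, one simple example of a sparse function of the sequence.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

T = 8                    # toy context length (hypothetical)
S = [2, 5]               # the sparse subset of relevant positions (hypothetical)
beta = 30.0              # weight scale; larger norm -> sharper attention on S

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=T).astype(float)      # random Boolean input

# Token embedding: [bit value, one-hot position code], dimension 1 + T.
X = np.concatenate([bits[:, None], np.eye(T)], axis=1)
d = X.shape[1]

# Keys expose the token embedding as-is; values read off the bit itself.
Wk = np.eye(d)
Wv = np.zeros((d, 1))
Wv[0, 0] = 1.0

# Queries: every token emits the same query, which puts weight beta on the
# key coordinates of the positions in S (and zero elsewhere).
q = np.zeros(d)
for p in S:
    q[1 + p] = beta
Wq = np.zeros((d, d))
Wq[1:, :] = q            # each positional one-hot maps to the shared query q

out, w = self_attention(X, Wq, Wk, Wv)
print("attention mass on S:", w[:, S].sum(axis=1).round(3))          # close to 1 everywhere
print("head output vs. mean of bits on S:", round(float(out[0, 0]), 3), bits[S].mean())

In this toy setup, driving beta up pushes nearly all of the attention mass onto S regardless of how large T is, which mirrors, at a cartoon level, how a bounded-norm attention head can single out a sparse set of relevant positions in a long context.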