
SGD Learns One-Layer Networks in WGANs

APA

Lei, Q. (2020). SGD Learns One-Layer Networks in WGANs. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/sgd-learns-one-layer-networks-wgans

MLA

Lei, Qi. "SGD Learns One-Layer Networks in WGANs." The Simons Institute for the Theory of Computing, 16 Dec. 2020, https://simons.berkeley.edu/talks/sgd-learns-one-layer-networks-wgans.

BibTeX

    @misc{scivideos_16874,
      url = {https://simons.berkeley.edu/talks/sgd-learns-one-layer-networks-wgans},
      author = {Lei, Qi},
      language = {en},
      title = {SGD Learns One-Layer Networks in WGANs},
      publisher = {The Simons Institute for the Theory of Computing},
      year = {2020},
      month = {dec},
      note = {Talk 16874, see \url{https://scivideos.org/index.php/Simons-Institute/16874}}
    }
          
Qi Lei (Princeton University)
Talk number: 16874
Source repository: Simons Institute

Abstract

Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a min-max optimization problem to global optimality but are in practice successfully trained using stochastic gradient descent-ascent. In this talk, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.
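To make the training procedure the abstract refers to concrete, below is a minimal sketch of stochastic gradient descent-ascent on a WGAN-style objective with a one-layer (affine) generator. It is an illustration only, not the talk's exact model or analysis: the target distribution (a Gaussian with unknown mean mu_star), the linear discriminator f_w(x) = <w, x> with an L2 penalty, and the hyperparameter names (eta_g, eta_d, lam, batch, n_steps) are all assumptions chosen for the demo.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5                            # data dimension
    mu_star = rng.normal(size=d)     # unknown mean of the real data distribution

    theta = np.zeros(d)              # generator parameter (min player)
    w = np.zeros(d)                  # discriminator parameter (max player)

    eta_g, eta_d = 0.05, 0.05        # descent / ascent step sizes
    lam = 0.5                        # L2 penalty keeping the discriminator bounded
    batch, n_steps = 64, 5000

    for _ in range(n_steps):
        # Fresh minibatches of real data and latent noise give stochastic gradients.
        x = mu_star + rng.normal(size=(batch, d))   # real samples ~ N(mu_star, I)
        z = rng.normal(size=(batch, d))             # latent noise ~ N(0, I)
        g = z + theta                               # one-layer generator G(z) = z + theta

        # Objective: L(theta, w) = E[f_w(x)] - E[f_w(G(z))] - lam/2 * ||w||^2
        grad_w = x.mean(axis=0) - g.mean(axis=0) - lam * w   # dL/dw (ascent direction)
        grad_theta = -w                                       # dL/dtheta (descent direction)

        w = w + eta_d * grad_w              # stochastic gradient ascent on the discriminator
        theta = theta - eta_g * grad_theta  # stochastic gradient descent on the generator

    print("generator recovery error:", np.linalg.norm(theta - mu_star))

In this toy setting the simultaneous updates spiral in toward the unique equilibrium theta = mu_star, which echoes, in a far simpler regime, the global-convergence behavior the talk establishes for one-layer generators.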