
Nonparametric Density Estimation and Convergence of GANs under Besov IPMs

APA

Uppal, A. (2022, February 11). Nonparametric density estimation and convergence of GANs under Besov IPMs. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/nonparametric-density-estimation-and-convergence-gans-under-besov-ipms

MLA

Uppal, Ananya. "Nonparametric Density Estimation and Convergence of GANs under Besov IPMs." The Simons Institute for the Theory of Computing, 11 Feb. 2022, https://simons.berkeley.edu/talks/nonparametric-density-estimation-and-convergence-gans-under-besov-ipms

BibTeX

    @misc{scivideos_19819,
      author    = {Uppal, Ananya},
      title     = {Nonparametric Density Estimation and Convergence of GANs under Besov IPMs},
      publisher = {The Simons Institute for the Theory of Computing},
      year      = {2022},
      month     = {feb},
      language  = {en},
      url       = {https://simons.berkeley.edu/talks/nonparametric-density-estimation-and-convergence-gans-under-besov-ipms},
      note      = {Talk 19819; see \url{https://scivideos.org/Simons-Institute/19819}}
    }
          
Ananya Uppal (University of Texas at Austin)
Talk number: 19819
Source Repository: Simons Institute

Abstract

Since their introduction as sampling algorithms in Goodfellow et al. (2014), Generative Adversarial Networks (GANs) have evolved to produce remarkable results in several tasks, e.g., image generation and text-to-image translation. Statistically, a GAN may be viewed as a density estimate constructed by optimizing over an Integral Probability Metric (IPM) encoded by its discriminator. I will present our work on estimating a nonparametric density under IPMs defined by Besov spaces. Such IPMs form a rich class of losses that includes, for example, the Lp distances, the total variation distance, and generalizations of both the Wasserstein and Kolmogorov-Smirnov distances. Our results generalize, unify, or improve upon several recent and classical results. As a consequence, we obtain bounds on the statistical error of a GAN, showing that GANs are minimax optimal and, in some cases, strictly outperform the best linear estimator (e.g., the empirical estimator or a kernel density estimator).

Further, we study nonparametric density estimation in this framework under the Huber contamination model, in which a proportion of the data comes from an unknown outlier distribution. We provide a minimax optimal estimator that adapts both to the unknown contamination proportion and to the unknown smoothness of the true density. We use this to show that certain GAN architectures are robustly minimax optimal.
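
As background, the IPM and GAN-as-density-estimator view referred to in the abstract can be written out as follows. These are the standard definitions, not formulas taken from the talk; F denotes the discriminator class and G the generator class:

    % IPM induced by a function (discriminator) class F
    d_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}}
        \left| \mathbb{E}_{X \sim P} f(X) - \mathbb{E}_{Y \sim Q} f(Y) \right|

    % A GAN viewed as a density estimate: minimize the IPM distance to the
    % empirical distribution \widehat{P}_n over the generator class G
    \widehat{P} \in \operatorname*{arg\,min}_{Q \in \mathcal{G}}
        d_{\mathcal{F}}(\widehat{P}_n, Q)

For instance, taking F to be the unit ball of 1-Lipschitz functions makes d_F the Wasserstein-1 distance; the Besov-space balls in the abstract generalize choices of this kind.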
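To make the comparison between linear estimators and IPM losses concrete, here is a minimal, self-contained Python sketch (my own illustration, not code from the talk) that measures both the raw empirical sample and a kernel density estimate against a large reference sample under the Wasserstein-1 distance, one of the IPMs in the Besov family above. The mixture density, sample sizes, and seeds are arbitrary illustrative choices:

    import numpy as np
    from scipy.stats import gaussian_kde, wasserstein_distance

    rng = np.random.default_rng(0)

    def sample_true(n):
        # Ground-truth density (illustrative): equal-weight mixture
        # of N(-2, 1) and N(2, 1).
        comp = rng.integers(0, 2, size=n)
        return rng.normal(loc=np.where(comp == 0, -2.0, 2.0), scale=1.0)

    n = 500
    data = sample_true(n)              # observed sample
    reference = sample_true(100_000)   # large sample standing in for the truth

    # Empirical estimator: the sample itself, compared to the truth under W1.
    w_empirical = wasserstein_distance(data, reference)

    # Kernel density estimator: a classical linear estimator; we compare it
    # to the truth by resampling from the fitted KDE.
    kde = gaussian_kde(data)
    kde_sample = kde.resample(100_000, seed=1).ravel()
    w_kde = wasserstein_distance(kde_sample, reference)

    print(f"W1(empirical, truth) ~ {w_empirical:.4f}")
    print(f"W1(KDE,       truth) ~ {w_kde:.4f}")

The talk's results concern minimax rates over Besov smoothness classes, which a single simulation like this cannot certify; the sketch only shows how two density estimates are scored under one member of the IPM family.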
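The Huber contamination model mentioned in the final paragraph is the classical one: each observation is drawn from a mixture of the density of interest and an arbitrary outlier distribution,

    X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} (1 - \varepsilon)\, P + \varepsilon\, Q

where P is the distribution of interest, Q is an unknown outlier distribution, and the contamination proportion \varepsilon is unknown; the estimator in the abstract adapts to both \varepsilon and the smoothness of P.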