Representation Costs of Linear Neural Networks: Analysis and Design

APA

Karzand, M. (2021, December 7). Representation costs of linear neural networks: Analysis and design [Talk]. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/tba-154

MLA

Karzand, Mina. "Representation Costs of Linear Neural Networks: Analysis and Design." The Simons Institute for the Theory of Computing, 7 Dec. 2021, https://simons.berkeley.edu/talks/tba-154

BibTeX

          @misc{scivideos_18852,
            author    = {Karzand, Mina},
            title     = {Representation Costs of Linear Neural Networks: Analysis and Design},
            publisher = {The Simons Institute for the Theory of Computing},
            year      = {2021},
            month     = {dec},
            language  = {en},
            url       = {https://simons.berkeley.edu/talks/tba-154},
            note      = {Talk number 18852; see \url{https://scivideos.org/index.php/Simons-Institute/18852}}
          }
Mina Karzand (University of California, Davis)
Talk number: 18852
Source repository: Simons Institute

Abstract

For different parameterizations (mappings from parameters to predictors), we study the regularization cost in predictor space induced by l_2 regularization on the parameters (weights). We focus on linear neural networks as parameterizations of linear predictors and identify the representation cost of certain sparse linear ConvNets and residual networks. To better understand how the architecture and parameterization affect the representation cost, we also study the reverse problem: identifying which regularizers on linear predictors (e.g., l_p quasi-norms, group quasi-norms, the k-support norm, the elastic net) can arise as the representation cost induced by simple l_2 regularization, and designing parameterizations that achieve them.
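
As a concrete illustration of the representation-cost idea (a standard example, not a claim about this talk's specific results), consider a depth-2 "diagonal" linear network that parameterizes a linear predictor beta coordinatewise as beta_i = u_i * v_i. Its representation cost is the minimal l_2 cost ||u||_2^2 + ||v||_2^2 over all parameter pairs realizing beta, which by the AM-GM inequality equals 2 * ||beta||_1: weight decay on the parameters acts as an l_1 penalty on the predictor. The Python sketch below checks this numerically; the target vector beta and the solver settings are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    # Depth-2 "diagonal" linear network: predictor beta_i = u_i * v_i.
    # Representation cost: min ||u||_2^2 + ||v||_2^2 subject to u * v = beta.
    # Closed form (AM-GM, per coordinate): 2 * ||beta||_1.

    beta = np.array([1.5, -0.3, 0.7, 2.0])   # hypothetical target predictor
    d = beta.size

    def cost(theta):
        u, v = theta[:d], theta[d:]
        return u @ u + v @ v                  # l_2 cost of the parameters

    def residual(theta):
        u, v = theta[:d], theta[d:]
        return u * v - beta                   # equality constraint: u * v = beta

    rng = np.random.default_rng(0)
    res = minimize(cost, x0=rng.normal(size=2 * d), method="SLSQP",
                   constraints={"type": "eq", "fun": residual})

    print("numerical representation cost:", res.fun)
    print("closed form 2 * ||beta||_1  :", 2 * np.abs(beta).sum())
    # The two values agree up to solver tolerance, so l_2 regularization on this
    # parameterization induces an l_1 penalty on the linear predictor.

Changing the depth or the architecture changes the induced regularizer in predictor space, which is exactly the design question the abstract raises: which penalties on linear predictors can be realized this way, and by which parameterizations.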