Graphon Neural Networks and the Transferability of Graph Neural Networks

APA

Chamon, L. (2021, December 6). Graphon Neural Networks and the Transferability of Graph Neural Networks [Talk]. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/graphon-neural-networks-and-transferability-graph-neural-networks

MLA

Chamon, Luiz. "Graphon Neural Networks and the Transferability of Graph Neural Networks." The Simons Institute for the Theory of Computing, 6 Dec. 2021, https://simons.berkeley.edu/talks/graphon-neural-networks-and-transferability-graph-neural-networks

BibTeX

@misc{scivideos_18841,
  author    = {Chamon, Luiz},
  title     = {Graphon Neural Networks and the Transferability of Graph Neural Networks},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2021},
  month     = {dec},
  language  = {en},
  url       = {https://simons.berkeley.edu/talks/graphon-neural-networks-and-transferability-graph-neural-networks},
  note      = {Talk 18841, see \url{https://scivideos.org/index.php/Simons-Institute/18841}}
}

Luiz Chamon (UC Berkeley)
Talk number: 18841
Source repository: Simons Institute

Abstract

Graph neural networks (GNNs) generalize convolutional neural networks (CNNs) by using graph convolutions that enable information extraction from non-Euclidean domains, e.g., network data. These graph convolutions combine information from adjacent nodes using coefficients that are shared across all nodes. Since these coefficients do not depend on the graph, one can envision using the same coefficients to define a GNN on a different graph. In this talk, I will tackle this problem by introducing graphon neural networks as limit objects of sequences of GNNs and by characterizing the difference between the output of a GNN and that of its limit graphon neural network. This bound vanishes as the number of nodes grows, as long as the graph convolutional filters are bandlimited in the graph spectral domain. The result establishes a tradeoff between the discriminability and the transferability of GNNs and sheds light on the effect of training with graph convolutions defined on smaller (possibly corrupted) graphs.
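
To make the shared-coefficient idea concrete, here is a minimal sketch, not code from the talk: it builds a polynomial graph convolutional filter y = h_0 x + h_1 S x + h_2 S^2 x whose taps are fixed once, then evaluates it on graphs of different sizes sampled from a common graphon. The names (W, sample_graph, graph_filter), the particular graphon, and the 1/n normalization of the shift operator are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def W(u, v):
    """A simple graphon: limiting edge-probability function on [0,1]^2 (illustrative choice)."""
    return 0.8 * np.exp(-3.0 * np.abs(u - v))

def sample_graph(n):
    """Sample an n-node graph from the graphon; return a normalized shift operator and latent points."""
    u = np.sort(rng.uniform(0.0, 1.0, size=n))
    probs = W(u[:, None], u[None, :])
    A = np.triu((rng.uniform(size=(n, n)) < probs).astype(float), 1)
    A = A + A.T                          # undirected adjacency, no self-loops
    return A / n, u                      # 1/n scaling, a common choice when comparing to graphon operators

def graph_filter(h, S, x):
    """Polynomial graph convolution y = sum_k h[k] * S^k x; the taps h are shared by all nodes."""
    y = np.zeros_like(x)
    Skx = x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx                    # next power of the shift operator applied to the signal
    return y

h = np.array([1.0, 0.5, 0.25])           # filter taps: defined once, independent of any graph

for n in (100, 500, 2000):
    S, u = sample_graph(n)
    x = np.cos(2 * np.pi * u)            # a graphon signal X(u) = cos(2*pi*u) sampled at the latent points
    y = graph_filter(h, S, x)
    print(f"n={n:4d}  ||y|| / sqrt(n) = {np.linalg.norm(y) / np.sqrt(n):.4f}")

Because the taps h never reference a particular graph, the same filter runs unchanged on every sampled graph; the bound discussed in the talk quantifies how far such outputs can be from those of the limiting graphon filter, and it vanishes with the number of nodes when the filters are bandlimited.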