
On the Cryptographic Hardness of Learning Single Periodic Neurons

APA

Zadik, I. (2021). On the Cryptographic Hardness of Learning Single Periodic Neurons [Talk]. The Simons Institute for the Theory of Computing. https://simons.berkeley.edu/talks/cryptographic-hardness-learning-single-periodic-neurons

MLA

Zadik, Ilias. On the Cryptographic Hardness of Learning Single Periodic Neurons. The Simons Institute for the Theory of Computing, 10 Nov. 2021, https://simons.berkeley.edu/talks/cryptographic-hardness-learning-single-periodic-neurons

BibTeX

@misc{scivideos_18674,
  author    = {Zadik, Ilias},
  title     = {On the Cryptographic Hardness of Learning Single Periodic Neurons},
  publisher = {The Simons Institute for the Theory of Computing},
  year      = {2021},
  month     = {nov},
  language  = {en},
  url       = {https://simons.berkeley.edu/talks/cryptographic-hardness-learning-single-periodic-neurons},
  note      = {Talk 18674, see \url{https://scivideos.org/Simons-Institute/18674}}
}
Ilias Zadik (Massachusetts Institute of Technology)
Talk number: 18674
Source repository: Simons Institute

Abstract

In this talk, I will present a simple reduction that demonstrates the cryptographic hardness of learning a single periodic neuron over isotropic Gaussian distributions in the presence of noise. The proposed "hard" functions, which are well approximated by one-layer neural networks, take the general form of a univariate periodic function applied to an affine projection of the data. These functions have appeared in previous seminal works demonstrating their hardness against gradient-based (GD) methods (Shamir '18) and Statistical Query (SQ) algorithms (Song et al. '17). Our result shows that if (polynomially) small noise is added to the labels, the intractability of learning these functions applies to all polynomial-time algorithms, beyond gradient-based and SQ algorithms, under the widely believed worst-case hardness of lattice problems. Furthermore, we show that for exponentially small noise, a polynomial-time algorithm based on lattice basis reduction can bypass the SQ and GD hardness. This is based on joint work with Min Jae Song and Joan Bruna.
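
For concreteness, here is a minimal NumPy sketch (not from the talk itself) of the data-generating process the abstract describes: labels are a univariate periodic function, here cosine, of a one-dimensional projection of isotropic Gaussian inputs, plus independent Gaussian label noise. The hidden direction w, the frequency gamma, and the noise level beta are illustrative names, not notation fixed by the talk.

          import numpy as np

          def sample_periodic_neuron(n, d, gamma=2.0, beta=1e-3, seed=None):
              """Draw n noisy samples from a single periodic neuron over N(0, I_d).

              Labels follow y = cos(2*pi*gamma*<w, x>) + beta*z with z ~ N(0, 1),
              where w is a fixed hidden unit vector. gamma (frequency) and beta
              (label-noise level) are hypothetical parameter names for illustration.
              """
              rng = np.random.default_rng(seed)
              w = rng.standard_normal(d)
              w /= np.linalg.norm(w)                 # hidden unit direction to learn
              X = rng.standard_normal((n, d))        # isotropic Gaussian inputs
              y = np.cos(2 * np.pi * gamma * X @ w)  # periodic activation of <w, x>
              y += beta * rng.standard_normal(n)     # Gaussian label noise
              return X, y, w

          # Polynomially small noise (e.g., beta = 1/poly(d)) is the regime the
          # talk argues is hard for all polynomial-time algorithms; exponentially
          # small noise is the regime where lattice basis reduction succeeds.
          X, y, w = sample_periodic_neuron(n=1000, d=50, beta=1e-3, seed=0)

The design point the sketch highlights is that the only unknown is the single direction w: the hardness claim concerns recovering (or predicting with) this one projection despite the function being realizable by a small one-layer network.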