Testing and Reconstruction via Decision Trees
Li-Yang Tan (Stanford University)
We study sublinear and local computation algorithms for decision trees, focusing on the related tasks of testing and reconstruction. In testing, we would like to determine if an unknown function $f$ is close to a size-$s$ decision tree. In reconstruction, we are given query access to a function $f$ that is promised to be close to a size-$s$ decision tree, and we would like to provide "on-the-fly" query access to a decision tree, ideally of size not much larger than $s$, that is close to $f$. We give the first reconstruction algorithm for decision trees, and from it we derive a new tester for decision trees. By known relationships, our results yield reconstruction algorithms for numerous other Boolean function properties (Fourier degree, randomized and quantum query complexities, certificate complexity, sensitivity, etc.), which in turn yield new testers for these properties. Joint work with Guy Blanc (Stanford) and Jane Lange (MIT).
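As a toy illustration of the query model above (our own example, not the algorithm from the talk), the sketch below estimates by random sampling how far a black-box function is from one fixed small decision tree; the talk's testers and reconstructors work without being handed a candidate tree.

```python
# Toy sketch: estimate dist(f, T) = Pr_x[f(x) != T(x)] by uniform sampling.
# The function f and the tree below are made up for illustration.
import random

def f(x):
    # Black-box function on {0,1}^3, accessed only through queries: x0 XOR (x1 AND x2).
    return x[0] ^ (x[1] & x[2])

def tree(x):
    # An explicit decision tree (each node queries one variable) computing the same function.
    if x[0] == 0:
        if x[1] == 0:
            return 0
        return 1 if x[2] == 1 else 0
    else:
        if x[1] == 0:
            return 1
        return 0 if x[2] == 1 else 1

def estimated_distance(f, g, n_vars, samples=20_000):
    """Fraction of uniformly random inputs on which f and g disagree."""
    disagreements = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n_vars)]
        disagreements += (f(x) != g(x))
    return disagreements / samples

print(estimated_distance(f, tree, n_vars=3))  # 0.0 here, since the tree computes f exactly
```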
An Equivalence Between Private Classification and Online Prediction
Shay Moran (Technion)
Let H be a concept class. We prove that H can be PAC-learned by an (approximate) differentially-private algorithm if and only if it has a finite Littlestone dimension. This implies an equivalence between online learnability and private PAC learnability.
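The Littlestone dimension in this statement has a simple recursive characterisation via shattered mistake trees, which for a small finite concept class can be evaluated by brute force. The sketch below (our own illustration, not part of the talk) computes it for threshold functions on a four-point domain.

```python
# Brute-force Littlestone dimension of a finite concept class over a finite domain.
# Ldim(H) = 0 if |H| <= 1; otherwise it is the largest d such that some point x
# splits H into two nonempty restrictions, each of Littlestone dimension >= d - 1.
from functools import lru_cache

def littlestone_dim(concepts, domain):
    """concepts: tuples of 0/1 labels, indexed by the points of `domain`."""
    domain = tuple(domain)

    @lru_cache(maxsize=None)
    def ldim(H):
        if len(H) <= 1:
            return 0
        best = 0
        for x in domain:
            H0 = tuple(h for h in H if h[x] == 0)
            H1 = tuple(h for h in H if h[x] == 1)
            if H0 and H1:  # point x can be forced to either label
                best = max(best, 1 + min(ldim(H0), ldim(H1)))
        return best

    return ldim(tuple(sorted(set(concepts))))

# Thresholds h_t(x) = 1 iff x >= t, on the domain {0, 1, 2, 3}.
domain = range(4)
thresholds = [tuple(1 if x >= t else 0 for x in domain) for t in range(5)]
print(littlestone_dim(thresholds, domain))  # 2
```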
Spacetime and quantum theory: insights via quantum foundations
Marius Krumm (Institute for Quantum Optics and Quantum Information (IQOQI), Vienna)
While spacetime and quantum theory are crucial parts of modern theoretical physics, the problem of quantum gravity demonstrates that their full relationship is not yet completely understood. In my talk, I report on two recent results that aim to shed light on this relationship via ideas and tools from quantum foundations.
We start with the setting of (semi-)device-independent quantum information protocols. In this scenario one considers abstract black boxes that are characterised by their input-output statistics. Typically, these inputs and outputs are assumed to be abstract labels from a finite set of integers. We replace the abstract inputs with physical inputs that correspond to continuous spatio-temporal degrees of freedom, e.g. angles of polarisers and durations of laser pulses. This framework gives new insights into the relation between space, time, and quantum correlations, and it gives rise to new kinds of Bell non-locality witnesses.
We then turn to the topic of quantum reference frames. Specifically, we consider a composite quantum system and an outside experimenter who does not have access to an external reference frame to specify all of the system's properties. We show that for such an observer the possible descriptions of states and observables are related by quantum reference frame transformations that have been independently proposed in recent works. We give an explicit description of the observables that are measurable by agents constrained by such quantum symmetries, and we introduce a relational generalisation of the partial trace that applies to such situations.
Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models
Daniel Kane (UCSD)
We present a new technique for learning latent variable models with many parameters. The basic idea is to use the method of moments to find a large vector space of polynomials that nearly vanish on the parameters of the problem, and to use this, together with a new structural result, to find a small cover of the set of relevant parameters, after which exhaustive algorithms can be applied. Using this technique we develop substantially improved algorithms for learning mixtures of spherical Gaussians, mixtures of linear regressions, and non-negative linear combinations of ReLUs.
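The cover-based technique itself is not reproduced here, but the toy sketch below illustrates the method of moments it starts from: recovering the means of an equal-weight mixture of two unit-variance Gaussians in one dimension from its first two empirical moments. All parameters are our own illustration.

```python
# Toy method of moments: for an equal-weight mixture of N(mu1, 1) and N(mu2, 1),
#   E[X]   = (mu1 + mu2) / 2,
#   E[X^2] = (mu1^2 + mu2^2) / 2 + 1,
# so mu1, mu2 are the roots of z^2 - s*z + p with s, p computed from the moments.
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2, n = -2.0, 3.0, 200_000
component = rng.integers(0, 2, size=n)
samples = np.where(component == 0, rng.normal(mu1, 1, n), rng.normal(mu2, 1, n))

m1 = samples.mean()
m2 = (samples ** 2).mean()
s = 2 * m1                        # mu1 + mu2
q = 2 * (m2 - 1)                  # mu1^2 + mu2^2
p = (s ** 2 - q) / 2              # mu1 * mu2
disc = max(s ** 2 - 4 * p, 0.0)   # (mu1 - mu2)^2, clipped for numerical safety
print(sorted([(s - disc ** 0.5) / 2, (s + disc ** 0.5) / 2]))  # approximately [-2.0, 3.0]
```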
Hardness of Identity Testing for Potts models and RBMs
Antonio Blanca (Penn State)
We study the identity testing problem in the context of spin systems, where it takes the following form: given the parameter specification of the model M and a sampling oracle for the distribution μ of an unknown model M*, can we efficiently determine if the two models M and M* are the same? We establish hardness results for this problem in two prototypical cases: the q-state ferromagnetic Potts model and Restricted Boltzmann Machines (RBMs). Daskalakis, Dikkala and Kamath (2018) presented a polynomial-time algorithm for identity testing for the ferromagnetic Ising model, which corresponds to the special case where there are only two spins (q=2). In this talk, I will present recent, complementary hardness results for the Potts model when q > 2 and for RBMs (i.e., mixed Ising models on bipartite graphs). Our proofs lay out a methodology to reduce approximate counting to identity testing, utilizing the phase transitions exhibited by the models and using random graphs as gadgets in the reduction. Based on joint work with Zongchen Chen, Daniel Stefankovic, and Eric Vigoda.
Near-Optimal Learning of Tree-Structured Distributions by Chow-Liu
Eric Price (University of Texas, Austin)
The classical Chow-Liu algorithm estimates a tree-structured graphical model of a distribution using the maximum spanning tree on the pairwise mutual information graph. We show that, if the true distribution P is tree-structured over a discrete domain Σ^n, then the Chow-Liu algorithm returns a distribution Q with D(P||Q) < ε after Õ(|Σ|³ n / ε) samples, which is nearly optimal in n and ε. Our analysis of Chow-Liu is based on a new result in conditional independence testing: we prove that for three random variables X, Y, Z, each over Σ, testing whether I(X;Y|Z) is 0 or ≥ ε is possible with Õ(|Σ|³/ε) samples using the empirical mutual information.
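Below is a minimal sketch of the Chow-Liu procedure as described above: compute pairwise empirical mutual information and return a maximum spanning tree. The sample-complexity analysis, which is the talk's contribution, is not reflected in the code; the three-variable example is our own.

```python
# Chow-Liu sketch: maximum spanning tree on the pairwise empirical mutual information graph.
import itertools
import numpy as np
import networkx as nx

def empirical_mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats from two discrete sample vectors."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    return sum((c / n) * np.log((c / n) / (px[a] * py[b])) for (a, b), c in joint.items())

def chow_liu_tree(data):
    """data: (num_samples, num_variables) array of discrete values."""
    _, d = data.shape
    G = nx.Graph()
    for i, j in itertools.combinations(range(d), 2):
        G.add_edge(i, j, weight=empirical_mutual_information(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(G)

# Example: a Markov chain X0 -> X1 -> X2 over {0,1}; Chow-Liu should recover the path 0 - 1 - 2.
rng = np.random.default_rng(1)
n = 5_000
x0 = rng.integers(0, 2, n)
x1 = np.where(rng.random(n) < 0.9, x0, 1 - x0)   # X1 is a noisy copy of X0
x2 = np.where(rng.random(n) < 0.9, x1, 1 - x1)   # X2 is a noisy copy of X1
print(sorted(chow_liu_tree(np.column_stack([x0, x1, x2])).edges()))  # [(0, 1), (1, 2)]
```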
Photon Emission from Circular Equatorial Orbiters around Kerr Black Holes
Delilah Gates (Princeton University)
We consider monochromatic and isotropic photon emission from circular equatorial Kerr orbiters. We calculate the critical curve delineating the region of photon escape from that of photon capture in each emitter’s sky, allowing us to derive analytic expressions for the photon escape probability and the redshift-dependent total flux collected on the celestial sphere as a function of emission radius and black hole parameters. This critical curve generalizes to finite orbital radius the usual Kerr critical curve and displays interesting features in the limit of high spin. These results confirm that the near-horizon geometry of a high-spin black hole is in principle observable.
Learning Some Ill-Conditioned Gaussian Graphical Models
Raghu Meka (UCLA)
Gaussian graphical models (GGMs) have wide-ranging applications in machine learning and the natural and social sciences, where they are one of the most popular ways to model statistical relationships between observed variables. In most of the settings in which they are applied, the number of observed samples is much smaller than the dimension, and the goal is to learn the model assuming the underlying model is sparse. While there are a variety of algorithms (e.g., Graphical Lasso, CLIME) that provably recover the graph structure with a logarithmic number of samples, they assume various conditions that require the precision matrix to be in some sense well-conditioned. Here we give the first fixed polynomial-time algorithms for learning attractive GGMs and walk-summable GGMs with a logarithmic number of samples without any such assumptions. In particular, our algorithms can tolerate strong dependencies among the variables. We complement our results with experiments showing that many existing algorithms fail even in some simple settings where there are long dependency chains.
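For context, the sketch below runs the Graphical Lasso baseline mentioned above (via scikit-learn) on a toy chain-structured GGM; it is not the talk's new algorithm, and the toy precision matrix and threshold are our own choices.

```python
# Graphical Lasso baseline: estimate a sparse precision matrix; its nonzero
# off-diagonal entries give the edges of the Gaussian graphical model.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Ground truth: a chain 0 - 1 - 2 - 3, encoded by a tridiagonal precision matrix.
precision = np.eye(4) + np.diag([0.4] * 3, k=1) + np.diag([0.4] * 3, k=-1)
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(precision), size=5_000)

model = GraphicalLasso(alpha=0.05).fit(X)
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)
         if abs(model.precision_[i, j]) > 1e-3]
print(edges)  # expected to recover the chain edges [(0, 1), (1, 2), (2, 3)]
```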
Directed Graphical Models for Extreme Value Statistics
Ngoc Tran (University of Texas, Austin)
Extreme value statistics concerns the maxima of random variables and relations between the tails of distributions, rather than averages and correlations. The vast majority of models are centered around max-stable distributions, the Gaussian analogs for extremes. However, max-stable multivariate distributions have an intractable likelihood, severely limiting statistical learning and inference. Directed graphical models for extreme values (also known as max-linear Bayesian networks) first appeared in 2018, but have already seen many applications in finance, hydrology, and extreme-risk modelling. This talk (1) highlights how they differ from usual Bayesian networks, (2) discusses their connections to tropical convex geometry and k-SAT, (3) shows the performance of current learning algorithms on various hydrology datasets, and (4) outlines major computational and statistical challenges in fitting such models to data.
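As a concrete picture of the recursive structure behind these models (our own toy example, not from the talk), the sketch below samples from max-linear structural equations X_i = max(max_{j in pa(i)} c_ij X_j, Z_i) with heavy-tailed innovations, and checks the max-times (tropical) path-weight behaviour alluded to in point (2).

```python
# Sample a max-linear Bayesian network on a tiny DAG and inspect how influence
# propagates by max-times path weights rather than by sums of linear effects.
import numpy as np

rng = np.random.default_rng(0)

def sample_max_linear(parents, coeffs, n):
    """parents: dict node -> list of parents, listed in topological order;
    coeffs: dict (child, parent) -> positive edge weight."""
    X = {}
    for v, pa in parents.items():
        Z = rng.pareto(2.0, size=n) + 1.0   # heavy-tailed (Pareto) innovation at node v
        X[v] = np.maximum.reduce([coeffs[(v, u)] * X[u] for u in pa] + [Z])
    return X

# DAG 0 -> 1 -> 2 with a direct shortcut edge 0 -> 2.
parents = {0: [], 1: [0], 2: [0, 1]}
coeffs = {(1, 0): 0.8, (2, 0): 0.3, (2, 1): 0.9}
X = sample_max_linear(parents, coeffs, n=100_000)

# The effective coefficient of X0 on X2 is the max-times path weight
# max(0.3, 0.8 * 0.9) = 0.72, and X2 / X0 hits this floor whenever the
# innovation at node 0 dominates the others.
print(np.min(X[2] / X[0]))   # approximately 0.72
```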
Non-local quantum computation and holography
Alex May (Perimeter Institute)
Relativistic quantum tasks are quantum computations whose inputs and outputs occur at designated spacetime locations. Understanding which tasks are possible to complete, and what resources are required to complete them, captures spacetime-specific aspects of quantum information. In this talk we explore the connections between such tasks and quantum gravity, specifically in the context of the AdS/CFT correspondence. We find that tasks reveal a novel connection between causal features of bulk geometry and boundary entanglement.
Further, we find that AdS/CFT suggests that non-local quantum computation, a specific task with relevance to position-based cryptography, can be performed with linear entanglement. This would be an exponential improvement on existing protocols.
Quantum spacetime and deformed relativistic kinematics
Javier Relancio (Universidad de Zaragoza)
In this seminar, I will consider a deformed kinematics that goes beyond special relativity as a way to account for possible low-energy effects of a quantum gravity theory that could lead to experimental evidence. This can be done while keeping a relativity principle, an approach usually known as doubly (or deformed) special relativity. In this context, I will give a simple geometric interpretation of the deformed kinematics and explain how it can be related to a metric in a maximally symmetric curved momentum space. Moreover, this metric can be extended to the whole phase space, leading to a notion of spacetime. This geometrical formalism can also be generalized to take spacetime curvature into account, leading to a momentum deformation of general relativity. I will explain theoretical aspects and possible phenomenological consequences of such a deformation.
Learning Deep ReLU Networks is Fixed-Parameter Tractable
Sitan Chen (MIT)
We consider the problem of learning an unknown ReLU network with an arbitrary number of layers under Gaussian inputs and obtain the first nontrivial results for networks of depth more than two. We give an algorithm whose running time is a fixed polynomial in the ambient dimension and some (exponentially large) function of only the network's parameters. These results provably cannot be obtained using gradient-based methods and give the first example of a class of efficiently learnable neural networks that gradient descent will fail to learn. In contrast, prior work for learning networks of depth three or higher requires exponential time in the ambient dimension, while prior work for the depth-two case requires well-conditioned weights and/or positive coefficients to obtain efficient run-times. Our algorithm does not require these assumptions. Our main technical tool is a type of filtered PCA that can be used to iteratively recover an approximate basis for the subspace spanned by the hidden units in the first layer. Our analysis leverages new structural results on lattice polynomials from tropical geometry. Based on joint work with Adam Klivans and Raghu Meka.
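The filtered PCA of the talk is not reproduced here; instead, the sketch below illustrates a simpler, related moment-based idea with a made-up depth-two network: under Gaussian inputs, the matrix E[f(x)(x x^T - I)] has column span inside the span of the first-layer weight vectors (by Stein's identity), so its top eigenvectors estimate that subspace.

```python
# Estimate the subspace spanned by the first-layer weights of a ReLU network from
# second-order moments under Gaussian inputs (illustrative, not the talk's filtered PCA).
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 3, 200_000

W1 = rng.normal(size=(k, d))                     # first-layer weights (unknown to the learner)
a = np.array([1.0, -1.5, 2.0])                   # output-layer weights, fixed for the demo

def f(X):
    return np.maximum(X @ W1.T, 0.0) @ a         # depth-two ReLU network, for simplicity

X = rng.normal(size=(n, d))
y = f(X)
M = (X * y[:, None]).T @ X / n - y.mean() * np.eye(d)   # estimate of E[y (x x^T - I)]

eigvals, eigvecs = np.linalg.eigh(M)
U = eigvecs[:, np.argsort(-np.abs(eigvals))[:k]]         # top-k eigenvectors by magnitude

# The true first-layer rows should lie almost entirely inside span(U).
residual = np.linalg.norm(W1 - W1 @ (U @ U.T)) / np.linalg.norm(W1)
print(round(residual, 3))                        # small (well below 1): the subspace is recovered
```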