Low-Degree Hardness of Random Optimization Problems
Alex Wein (New York University)
In high-dimensional statistical problems (including planted clique, sparse PCA, community detection, etc.), the class of "low-degree polynomial algorithms" captures many leading algorithmic paradigms such as spectral methods, approximate message passing, and local algorithms on sparse graphs. As such, lower bounds against low-degree algorithms constitute concrete evidence for average-case hardness of statistical problems. This method has been widely successful at explaining and predicting statistical-to-computational gaps in these settings. While prior work has understood the power of low-degree algorithms for problems with a "planted" signal, we consider here the setting of "random optimization problems" (with no planted signal), including the problem of finding a large independent set in a random graph, as well as the problem of optimizing the Hamiltonian of mean-field spin glass models. Focusing on the independent set problem, I will define low-degree algorithms in this setting, argue that they capture the best known algorithms, and explain new proof techniques that give sharp lower bounds against low-degree algorithms in this setting. The proof involves a generalization of the so-called "overlap gap property", which is a structural property of the solution space. Based on arXiv:2004.12063 (joint with David Gamarnik and Aukosh Jagannath) and arXiv:2010.06563.
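As a toy illustration of the gap at stake here (folklore, and not the construction from the papers cited): in the dense random graph $G(n, 1/2)$ the largest independent set has size roughly $2\log_2 n$, yet simple algorithms such as first-fit greedy only find sets of size roughly $\log_2 n$, and low-degree lower bounds are one way to explain why this factor of two resists all known efficient algorithms. A minimal sketch:

import math
import random

# Toy experiment: first-fit greedy on G(n, 1/2) finds an independent set of
# size ~ log2(n), while the true maximum is ~ 2*log2(n); no efficient algorithm
# is known to beat the greedy guarantee asymptotically.
def greedy_independent_set(n, p=0.5, seed=0):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    chosen = []
    for v in range(n):  # scan once; keep v if not adjacent to anything kept so far
        if all(v not in adj[u] for u in chosen):
            chosen.append(v)
    return chosen

n = 2000
s = len(greedy_independent_set(n))
print(f"greedy: {s}, log2(n): {math.log2(n):.1f}, 2*log2(n): {2 * math.log2(n):.1f}")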
A voyage through undulating dark matter and the GUTs of u(48)
Joseph Tooby-Smith (University of Cambridge)
This talk will be split into two distinct halves. The first half, based on the paper arXiv:2007.03662, suggests that an interplay between microscopic and macroscopic physics can lead to an undulation on time scales not related to celestial dynamics. By searching for such undulations, the discovery potential of light DM search experiments can be enhanced.
The second half will look at some currently unpublished work on finding all the semi-simple subalgebras of u(48) which contain the SM. Such algebras (in a loose sense of the term) form GUTs, and studying them has relevance to family unification, proton decay, etc. Although there has been previous work on the classification of GUTs, we believe this is the first time this broad question has been answered.
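As rough orientation (our gloss, not the speaker's): $\mathfrak{u}(48)$ is presumably the algebra acting on the $48 = 3 \times 16$ Weyl fermions of the SM (three generations of sixteen states each, counting a right-handed neutrino), and the classification question is which semi-simple algebras $\mathfrak{g}$ fit into the chain

$$ \mathfrak{su}(3) \oplus \mathfrak{su}(2) \oplus \mathfrak{u}(1) \;\subseteq\; \mathfrak{g} \;\subseteq\; \mathfrak{u}(48). $$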
Counting and Sampling Subgraphs in Sublinear Time
Talya Eden (MIT)
In this talk I will briefly survey recent developments in approximate subgraph counting and sampling in sublinear time. Counting and sampling small subgraphs are basic primitives, well studied both in theory and in practice. We consider these problems in the sublinear-time setting, where access to the graph $G$ is given via queries. We will consider both general graphs and graphs of bounded arboricity, which can be viewed as "sparse everywhere" graphs, and we will see how we can use this property to obtain substantially faster algorithms.
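To make the counting primitive concrete, here is a minimal wedge-sampling triangle estimator (illustrative only: it assumes the whole graph is in memory, whereas the talk's algorithms run in sublinear time given query access, typically degree, neighbor, and pair queries). It uses the fact that every triangle contains exactly three closed wedges, i.e., paths of length two:

import random

def estimate_triangles(adj, samples=20000, seed=0):
    rng = random.Random(seed)
    nodes = [v for v in adj if len(adj[v]) >= 2]
    # number of wedges centered at v is C(deg(v), 2)
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    total_wedges = sum(weights)
    closed = 0
    for _ in range(samples):
        v = rng.choices(nodes, weights=weights)[0]  # wedge center, wedge-weighted
        a, b = rng.sample(sorted(adj[v]), 2)        # two distinct neighbors of v
        closed += b in adj[a]                       # is the wedge closed?
    # each triangle is counted as 3 closed wedges
    return (closed / samples) * total_wedges / 3

adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}  # K4: 4 triangles
print(estimate_triangles(adj))  # ~4.0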
Learning and Testing for Gradient Descent
Emmanuel Abbe (EPFL)
We present lower bounds for the generalization error of gradient descent with free initializations, reducing the problem to testing the algorithm's output under different data models. We then discuss lower bounds for random initialization and present the problem of learning communities in the pruned block model, where it is conjectured that GD fails.
Gabriel Wong (Harvard University)
The Ryu-Takayanagi formula identifies the area of extremal surfaces in AdS with the entanglement entropy of the boundary CFT. However, the bulk microstate interpretation of the extremal area remains mysterious. Progress in this direction requires understanding how to define entanglement entropy in the bulk closed string theory. As a toy model for AdS/CFT, we study the entanglement entropy of closed strings in the topological A-model in the context of Gopakumar-Vafa duality. We give a self-consistent factorization of the closed string Hilbert space which leads to string edge modes transforming under a q-deformed surface symmetry group. Compatibility with this symmetry requires a q-deformed definition of entanglement entropy. Using the topological vertex formalism, we define the Hartle-Hawking state for the resolved conifold and compute its q-deformed entropy directly from the closed string reduced density matrix. We show that this is the same as the generalized entropy, defined by prescribing a contractible replica manifold for the closed string theory on the resolved conifold. We then apply Gopakumar-Vafa duality to reproduce the closed string entropy from the Chern-Simons dual using the un-deformed definition of entanglement entropy. Finally, we relate non-local aspects of our factorization map to analogous phenomena recently found in JT gravity.
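For reference, the Ryu-Takayanagi formula invoked in the opening sentence equates the entanglement entropy of a boundary region $A$ with the area of the bulk extremal surface $\gamma_A$ anchored on it,

$$ S(A) = \frac{\mathrm{Area}(\gamma_A)}{4 G_N}, $$

and it is the microstate accounting behind this area term that the talk is after.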
Cosmological tensions and how to ease them
Nils Schöneberg (RWTH Aachen University)
Tensions between measurements in the early and the late universe could be the first hint of new physics beyond the cosmological standard model. In particular, the clustering of large-scale structure and the current value of the Hubble parameter show intriguing discrepancies between early- and late-universe measurements. In this talk, I review the most common ways of easing these two tensions, focusing specifically on parameter extensions and various models of dark matter, such as warm dark matter, cannibalistic dark matter, dark matter interactions, and dark radiation. Constraining these models requires a variety of cosmological probes, both current and future, at different scales and redshifts.
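For scale (representative published figures, not numbers from the talk): early-universe inferences give $H_0 \approx 67.4 \pm 0.5$ km/s/Mpc (Planck CMB), while the late-universe distance ladder gives $H_0 \approx 73 \pm 1$ km/s/Mpc (SH0ES), a discrepancy at the $4$-$5\sigma$ level. The clustering tension is usually phrased via $S_8 \equiv \sigma_8 \sqrt{\Omega_m/0.3}$, with CMB-inferred values a few percent higher than those from weak-lensing surveys.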
Learning Ising Models from One, Ten or a Thousand Samples
Costis Daskalakis (MIT)
Samples from high-dimensional distributions can be scarce or expensive. Can we meaningfully learn such distributions from one or just a few samples? We provide guarantees for single-sample estimation of Ising models, quantifying the estimation error in terms of the metric entropy of possible interaction matrices. As corollaries of our main result, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold. Our result handles multiple independent samples by viewing them as one sample from a larger model, and can be used to derive estimation bounds that are qualitatively similar to the state of the art in the multiple-sample literature. We thus unify two separate strands of work in the literature: (1) a renaissance of recent work in Computer Science on estimating Ising models/MRFs from multiple independent samples under minimal assumptions about the model's interaction matrix; and (2) a growing literature in Probability Theory on estimating them from one sample in restrictive settings. On the technical front, we exploit novel concentration and anti-concentration inequalities for functions of the Ising model.
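A minimal sketch of the pseudo-likelihood idea underlying much of the one-sample literature the abstract refers to (the textbook approach, not necessarily the paper's estimator). Suppose the interaction matrix is a known $J$ scaled by an unknown inverse temperature $\beta$; each spin's conditional law given the others is logistic in its local field, so the product of conditionals can be maximized over $\beta$ using a single sample:

import numpy as np

def log_pseudolikelihood(beta, J, x):
    # P(x_i | x_rest) = exp(beta * x_i * m_i) / (2 cosh(beta * m_i)),
    # with local fields m_i = sum_j J_ij x_j (J has zero diagonal)
    m = J @ x
    return np.sum(beta * x * m - np.log(2 * np.cosh(beta * m)))

def estimate_beta(J, x):
    grid = np.linspace(0.0, 2.0, 2001)
    return grid[int(np.argmax([log_pseudolikelihood(b, J, x) for b in grid]))]

# draw ONE sample from a Curie-Weiss model via Gibbs sampling; beta_true > 1
# so the sample is magnetized and actually carries information about beta
rng = np.random.default_rng(0)
n, beta_true = 400, 1.2
J = np.ones((n, n)) / n
np.fill_diagonal(J, 0.0)
x = rng.choice([-1.0, 1.0], size=n)
for _ in range(200):  # Gibbs sweeps
    for i in range(n):
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta_true * (J[i] @ x)))
        x[i] = 1.0 if rng.random() < p_up else -1.0
print("estimated beta:", estimate_beta(J, x))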
Reduction From Non-Unique Games to Boolean Unique Games
Dana Moshkovitz (University of Texas at Austin)
We reduce the problem of proving a "Boolean Unique Games Conjecture" (with gap $1-\delta$ vs. $1-C\delta$, for any $C>1$ and sufficiently small $\delta>0$) to the problem of proving a PCP theorem for a certain non-unique game. In a previous work, Khot and Moshkovitz suggested a candidate reduction (i.e., without a proof of soundness). The current work is the first to provide a reduction along with a proof of soundness. The non-unique game we reduce from is similar to non-unique games for which PCP theorems are known. Our proof relies on a new, tight, local testing theorem for the low-degree part of functions $f : \mathbb{R}^n \to \mathbb{R}$ in Gaussian space: we show that the low-degree part of the restriction of $f$ to a random hyperplane $h$ is extremely close ($O(1/n)$ Euclidean distance) to the restriction of the low-degree part of $f$ to $h$, with high probability. This is joint work with Ronen Eldan from the Weizmann Institute.
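In symbols, the testing theorem from the last paragraph says roughly the following (our transcription of the abstract's statement, writing $g^{\le d}$ for the part of $g$ of degree at most $d$ in the Hermite expansion): for $f : \mathbb{R}^n \to \mathbb{R}$ and a random hyperplane $h$,

$$ \big\| (f|_h)^{\le d} - (f^{\le d})|_h \big\|_2 = O(1/n) $$

with high probability over the choice of $h$.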
Testing Noisy Linear Functions for Sparsity
Rocco Servedio (Columbia University)
Consider the following basic problem in sparse linear regression: an algorithm gets labeled samples of the form $(x, w \cdot x + \epsilon)$, where $w$ is an unknown $n$-dimensional vector, $x$ is drawn from a background $n$-dimensional distribution $D$, and $\epsilon$ is some independent noise. Given the promise that $w$ is $k$-sparse, the breakthrough work of Candes, Romberg and Tao shows that $w$ can be recovered with a number of samples that scales as $O(k \log n)$. This should be contrasted with general linear regression, where $O(n)$ samples are information-theoretically necessary. We look at this problem from the vantage point of property testing, and study the complexity of determining whether the unknown vector $w$ is $k$-sparse versus "far" from $k$-sparse. We show that this decision problem can be solved with a number of samples which is independent of $n$, as long as the background distribution $D$ is i.i.d. and its components are not Gaussian. We further show that weakening any of the conditions in this result necessarily makes the complexity scale logarithmically in $n$. Joint work with Xue Chen (George Mason University) and Anindya De (University of Pennsylvania).
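The recovery baseline that the abstract contrasts with can be reproduced in a few lines (a sketch of the problem setup only, with Lasso standing in for the Candes-Romberg-Tao guarantee; the tester from the talk is a different and far more sample-efficient object):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k, m = 1000, 5, 200                  # ambient dimension, sparsity, samples
w = np.zeros(n)
w[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# labeled samples (x, w.x + eps); Gaussian x is fine for *recovery*, while the
# talk's *testing* result specifically needs the components of D to be non-Gaussian
X = rng.normal(size=(m, n))
y = X @ w + 0.01 * rng.normal(size=m)

w_hat = Lasso(alpha=0.01).fit(X, y).coef_
print("true support     :", np.flatnonzero(w))
print("recovered support:", np.flatnonzero(np.abs(w_hat) > 1e-3))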
VC Dimension and Distribution-Free Sample-Based Testing
Eric Blais (University of Waterloo)
When can we test functions more efficiently than we can learn them? We consider this fundamental problem in the standard PAC learning setting, which corresponds to the sample-based distribution-free property testing model. The VC dimension of a class of functions characterizes the number of samples needed to learn the class. But since some properties can indeed be tested more efficiently than they can be learned, VC dimension by itself does not characterize the number of samples needed to test them. Nonetheless, we will see that VC dimension, combined with a closely related "lower VC dimension" variant, does give strong lower bounds on the number of samples required to test many properties of functions, including halfspaces, intersections of halfspaces, polynomial threshold functions, and decision trees. This is joint work with Renato Ferreira Pinto Jr. and Nathan Harms.
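For reference, the standard definitions behind the talk: a class $\mathcal{H}$ of functions $X \to \{0,1\}$ shatters a set $S \subseteq X$ if all $2^{|S|}$ labelings of $S$ are realized by members of $\mathcal{H}$, and

$$ \mathrm{VC}(\mathcal{H}) = \max\{\, |S| : \mathcal{H} \text{ shatters } S \,\}. $$

In the realizable, distribution-free PAC model, $\Theta\big((\mathrm{VC}(\mathcal{H}) + \log(1/\delta))/\epsilon\big)$ samples are necessary and sufficient for learning, which is the characterization the talk takes as its starting point.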
Testing Monotonicity of Distributions Over High-Dimensional Posets
Maryam Aliakbarpour (University of Massachusetts Amherst)
In this talk, we consider the sample complexity required for testing the monotonicity of distributions over partial orders. A distribution $p$ over a poset is monotone if, for any pair of domain elements $x$ and $y$ such that $x \preceq y$, $p(x) \leq p(y)$. I will present a proof sketch of lower bounds for this problem over a few posets, e.g., matchings and hypercubes. The main idea of these lower bounds is the following: we introduce a new property called *bigness* over a finite domain, where a distribution is $T$-big if the minimum probability of any domain element is at least $T$. Relying on the framework of [Wu-Yang'15], we establish a lower bound of $\Omega(n/\log n)$ for testing bigness of distributions on domains of size $n$. We then build on these lower bounds to give an $\Omega(n/\log n)$ lower bound for testing monotonicity over a matching poset of size $n$, and significantly improved lower bounds over the hypercube poset. Although finding a tight upper bound for testing monotonicity remains open, I will also discuss the steps we have taken towards the upper bound. Joint work with Themis Gouleakis, John Peebles, Ronitt Rubinfeld, and Anak Yodpinyanee.
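To pin down the property being tested, here is a brute-force check of the monotonicity definition over the hypercube poset $\{0,1\}^m$ (for intuition only; the talk is about how many samples from $p$ are needed, not about exhaustive verification). By transitivity it suffices to check the hypercube edges $x \to x + e_i$:

from itertools import product

def is_monotone(p, m):
    # check every edge of the hypercube: flip one 0-bit of x to get y >= x
    for x in product((0, 1), repeat=m):
        for i in range(m):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if p[x] > p[y]:
                    return False
    return True

m = 2
uniform = {x: 0.25 for x in product((0, 1), repeat=m)}
skewed = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.2}
print(is_monotone(uniform, m))  # True
print(is_monotone(skewed, m))   # False: p((0,0)) > p((0,1))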
Decoherence vs space-time diffusion: testing the quantum nature of gravity
Zachary Weller-Davies (InstaDeep)
Consistent dynamics coupling classical and quantum systems exists, provided it is stochastic. This provides a way to study the back-reaction of quantum systems on classical ones, and has recently been explored in the context of quantum fields back-reacting on space-time. Since the dynamics is completely positive and circumvents various no-go theorems, it can either be thought of as a fundamental theory, or as an effective theory describing the limit of quantum gravity in which the gravitational degrees of freedom are taken to be classical. In this talk we explore some of the consequences of complete positivity for the dynamics of classical-quantum systems. We show that complete positivity necessarily results in decoherence of the quantum system and a breakdown of predictability in the classical phase space. We prove there is a trade-off between the rate of this decoherence and the degree of diffusion in the metric: long coherence times require strong diffusion relative to the strength of the coupling, which potentially provides a long-distance experimental test of the quantum nature of gravity. We also discuss the consequences of complete positivity for preparing superpositions of gravitationally distinct states. Each state produces a different distribution of the gravitational field, determined by the constraints of the theory; the overlap of these distributions imposes an upper bound on the degree of coherence of the superposition.
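Schematically (our notation; the talk's precise coefficients differ), the advertised trade-off has the form

$$ D_{\mathrm{diff}} \, D_{\mathrm{dec}} \;\gtrsim\; g^2, $$

where $D_{\mathrm{dec}}$ is the decoherence rate of the quantum system, $D_{\mathrm{diff}}$ the diffusion coefficient of the classical (metric) degrees of freedom, and $g$ the strength of the classical-quantum coupling: keeping coherence long forces the gravitational field to diffuse strongly, which is what makes the proposal experimentally testable.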