- Learning Ising Models from One, Ten or a Thousand Samples (Costis Daskalakis, MIT)
- Reduction From Non-Unique Games to Boolean Unique Games (Dana Moshkovitz, University of Texas at Austin)
- Testing Noisy Linear Functions for Sparsity (Rocco Servedio, Columbia University)
- VC Dimension and Distribution-Free Sample-Based Testing (Eric Blais, University of Waterloo)
- Testing Monotonicity of Distributions Over High-Dimensional Posets (Maryam Aliakbarpour, University of Massachusetts Amherst)
- Decoherence vs space-time diffusion: testing the quantum nature of gravity (Zachary Weller-Davies, InstaDeep)
- Self-testing Bell inequalities from the stabiliser formalism and their applications (Flavio Baccari, Max Planck Institute for Gravitational Physics - Albert Einstein Institute (AEI))
- Dynamics and observational traces of cosmological ultra-supercooled phase transitions (Ryusuke Jinno, Deutsches Elektronen-Synchrotron (DESY))
- Self-torque and frame nutation in binary black hole simulations (Maria José Bustamante, The University of Texas at Austin)
- Can we think time-symmetrically about causation? (Andrea Di Biagio, Sapienza University of Rome)
- Symmetries, graph properties, and quantum speedups (Supartha Podder, University of Ottawa)
- Exploring alternatives to quantum nonlocality (Indrajit Sen, Chapman University)
Learning Ising Models from One, Ten or a Thousand Samples
Costis Daskalakis (MIT)
Samples from high-dimensional distributions can be scarce or expensive. Can we meaningfully learn such distributions from one or just a few samples? We provide guarantees for single-sample estimation of Ising models, quantifying the estimation error in terms of the metric entropy of possible interaction matrices. As corollaries of our main result, we derive bounds when the model's interaction matrix is a (sparse) linear combination of known matrices, or it belongs to a finite set, or to a high-dimensional manifold. Our result handles multiple independent samples by viewing them as one sample from a larger model, and can be used to derive estimation bounds that are qualitatively similar to the state of the art in the multiple-sample literature. We thus unify two separate strands of work in the literature: (1) a renaissance of recent work in Computer Science on estimating Ising models/MRFs from multiple independent samples under minimal assumptions about the model's interaction matrix; and (2) a growing literature in Probability Theory on estimating them from one sample in restrictive settings. On the technical front, we exploit novel concentration and anti-concentration inequalities for functions of the Ising model.

Reduction From Non-Unique Games to Boolean Unique Games
Dana Moshkovitz (University of Texas, Austin)
We reduce the problem of proving a "Boolean Unique Games Conjecture" (with gap 1-delta vs. 1-C*delta, for any C>1 and sufficiently small delta>0) to the problem of proving a PCP Theorem for a certain non-unique game. In a previous work, Khot and Moshkovitz suggested a candidate reduction (i.e., without a proof of soundness). The current work is the first to provide a reduction along with a proof of soundness. The non-unique game we reduce from is similar to non-unique games for which PCP theorems are known. Our proof relies on a new, tight, local testing theorem for the low-degree part of functions f: R^n --> R in Gaussian space: we show that the low-degree part of the restriction of f to a random hyperplane h is extremely close (O(1/n) Euclidean distance) to the restriction of the low-degree part of f to h, with high probability. This is joint work with Ronen Eldan from the Weizmann Institute.

Testing Noisy Linear Functions for Sparsity
Rocco Servedio (Columbia University)
Consider the following basic problem in sparse linear regression: an algorithm gets labeled samples of the form (x, w.x + eps), where w is an unknown n-dimensional vector, x is drawn from a background n-dimensional distribution D, and eps is some independent noise. Given the promise that w is k-sparse, the breakthrough work of Candes, Romberg and Tao shows that w can be recovered with a number of samples that scales as O(k log n). This should be contrasted with general linear regression, where O(n) samples are information-theoretically necessary. We look at this problem from the vantage point of property testing, and study the complexity of determining whether the unknown vector w is k-sparse versus "far" from k-sparse. We show that this decision problem can be solved with a number of samples which is independent of n, as long as the background distribution D has i.i.d. components that are not Gaussian. We further show that weakening any of the conditions in this result necessarily makes the complexity scale logarithmically in n. Joint work with Xue Chen (George Mason University) and Anindya De (University of Pennsylvania).

VC Dimension and Distribution-Free Sample-Based Testing
Eric Blais (University of Waterloo)
When can we test functions more efficiently than we can learn them? We will consider this fundamental problem in the standard PAC learning setting, which corresponds to the sample-based distribution-free property testing model. The VC dimension of a class of functions characterizes the number of samples needed to learn the class. But since some properties can indeed be tested more efficiently than they can be learned, VC dimension by itself does not characterize the number of samples needed to test them. Nonetheless, we will see that VC dimension combined with a closely-related "lower VC dimension" variant does give strong lower bounds on the number of samples required to test many properties of functions, including halfspaces, intersections of halfspaces, polynomial threshold functions, and decision trees. This is joint work with Renato Ferreira Pinto Jr. and Nathan Harms.

Testing Monotonicity of Distributions Over High-Dimensional Posets
Maryam Aliakbarpour (University of Massachusetts Amherst)
In this talk, we consider the sample complexity required for testing the monotonicity of distributions over partial orders. A distribution p over a poset is monotone if, for any pair of domain elements x and y such that x \preceq y, p(x) \leq p(y). I am going to present a proof sketch of lower bounds for this problem over a few posets, e.g., matchings and hypercubes. The main idea of these lower bounds is the following: we introduce a new property called *bigness* over a finite domain, where a distribution is T-big if the minimum probability of any domain element is at least T. Relying on the framework of [Wu-Yang'15], we establish a lower bound of \Omega(n/\log n) for testing bigness of distributions on domains of size n. We then build on these lower bounds to give an \Omega(n/\log n) lower bound for testing monotonicity over a matching poset of size n, and significantly improved lower bounds over the hypercube poset. Although finding a tight upper bound for testing monotonicity remains open, I will also discuss the steps we took towards the upper bound. Joint work with: Themis Gouleakis, John Peebles, Ronitt Rubinfeld, and Anak Yodpinyanee.

Decoherence vs space-time diffusion: testing the quantum nature of gravity
Zachary Weller-Davies InstaDeep
Consistent dynamics coupling classical and quantum systems exists, provided it is stochastic. This provides a way to study the back-reaction of quantum systems on classical ones, and has recently been explored in the context of quantum fields back-reacting on space-time. Since the dynamics is completely positive and circumvents various no-go theorems, it can be thought of either as a fundamental theory or as an effective theory describing the limit of quantum gravity in which the gravitational degrees of freedom are taken to be classical. In this talk we explore some of the consequences of complete positivity for the dynamics of classical-quantum systems. We show that complete positivity necessarily results in the decoherence of the quantum system and a breakdown of predictability in the classical phase space. We prove there is a trade-off between the rate of this decoherence and the degree of diffusion in the metric: long coherence times require strong diffusion relative to the strength of the coupling, which potentially provides a long-distance experimental test of the quantum nature of gravity. We also discuss the consequences of complete positivity for preparing superpositions of gravitationally different states. Each state produces a different distribution of the gravitational field, determined by the constraints of the theory. The overlap of these distributions imposes an upper bound on the degree of coherence of the superposition.

Self-testing Bell inequalities from the stabiliser formalism and their applications
Flavio Baccari Max Planck Institute for Gravitational Physics - Albert Einstein Institute (AEI)
I will introduce a tool to construct self-testing Bell inequalities from the stabiliser formalism, and present two applications in the framework of device-independent certification protocols. Firstly, I will show how the method allows one to derive Bell inequalities that are maximally violated by the family of multi-qubit graph states and suited for their robust self-testing. Secondly, I will present how the same method allows one to introduce the first examples of subspace self-testing: a form of certification that the measured quantum state belongs to a given quantum error correction code subspace, which, remarkably, also includes mixed states.
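As a minimal illustration of how stabilisers give rise to Bell inequalities, consider the textbook GHZ/Mermin example (standard background, not the construction from the talk): products of the GHZ stabiliser generators assemble into the Mermin operator, whose quantum value of 4 exceeds the local-hidden-variable bound of 2.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

kron = lambda *ops: reduce(np.kron, ops)

# 3-qubit GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

# stabiliser generators: each fixes the GHZ state
for g in (kron(X, X, X), kron(Z, Z, I2), kron(I2, Z, Z)):
    assert np.allclose(g @ ghz, ghz)

# Mermin operator, built from products of the stabilisers
M = kron(X, X, X) - kron(X, Y, Y) - kron(Y, X, Y) - kron(Y, Y, X)
value = np.real(ghz @ M @ ghz)   # quantum value; LHV models are bounded by 2
```

Here each of XYY, YXY, YYX acts as -1 on the GHZ state, so all four terms contribute +1 and the quantum value is 4, maximally violating the Mermin inequality.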
Dynamics and observational traces of cosmological ultra-supercooled phase transitions
Ryusuke Jinno Deutsches Elektronen-Synchrotron (DESY)
In recent years, there has been growing interest in cosmological first-order phase transitions in view of gravitational wave observations with space interferometers such as LISA. However, there is only limited understanding of the bubble dynamics and the gravitational wave signals arising from ultra-supercooled transitions (in which the released energy dominates the plasma energy, i.e., near-vacuum transitions), due to the highly relativistic nature of the transition.
In this talk, I introduce some approaches to understand the dynamics and the gravitational wave signals of ultra-supercooled first-order phase transitions:
(1) These transitions proceed with the propagation and collision of highly relativistic fluid profiles involving shock waves. I introduce an approach to construct an effective description of the propagation of such relativistic profiles (1905.00899).
(2) I present an approach to extend the existing model of gravitational wave production and calculate the gravitational wave signals analytically (1707.03111).
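For concreteness, in the notation standard in the gravitational-wave literature (not defined in the abstract itself), the degree of supercooling is commonly quantified by the transition strength parameter, the ratio of the released vacuum energy to the radiation energy density of the plasma:

```latex
\alpha \;\equiv\; \frac{\Delta V_{\mathrm{vac}}}{\rho_{\mathrm{rad}}}\,,
\qquad \text{ultra-supercooled transitions:}\quad \alpha \gg 1 .
```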
Self-torque and frame nutation in binary black hole simulations
Maria José Bustamante The University of Texas at Austin
We investigate the precession of the spin of the smaller black hole in binary black hole simulations. By considering a sequence of binaries at higher mass ratios, we approach the limit of geodetic precession of a test spin. This precession is corrected by the "self-torque" due to the smaller black hole's own spacetime curvature. We find that the spins undergo nutations which are not captured by conventional descriptions of spin precession, an effect that has been noticed previously in simulations. These nutations arise because the spins are not measured in a frame where the smaller hole is stationary. We develop a simple model for these frame nutations, extract the instantaneous spin precession rate, and compare our results to post-Newtonian (PN) and extreme-mass-ratio approximations for the self-torque.
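For reference, the test-spin limit mentioned above is the textbook geodetic (de Sitter) precession (standard background, not a result of the talk): on a circular orbit of radius r around a Schwarzschild mass M, the spin direction turns per orbit by

```latex
\Delta\Phi_{\mathrm{geo}}
  \;=\; 2\pi\left(1 - \sqrt{1 - \frac{3GM}{c^{2}r}}\right)
  \;\approx\; \frac{3\pi GM}{c^{2}r}\,,
```

where the approximation holds at leading order in GM/(c^2 r).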
Can we think time-symmetrically about causation?
Andrea Di Biagio Sapienza University of Rome
We often say that quantum mechanics allows to calculate the probability of future events. In fact, quantum mechanics does not discriminate between predicting the future or postdicting the past. I will present the results of a recent work by Rovelli, Donà and me, where we address the apparent tension between the time symmetry of elementary quantum mechanics and the intrinsic time orientation of the formulations of quantum theory used in the quantum information and foundations communities. Additionally, I will sketch a way to think time symmetrically about causality in quantum theory by using the new notion of a causal-inferential theory recently proposed by Schimd, Selby and Spekkens.
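The time symmetry at issue can be stated in one line of elementary quantum mechanics (added here for concreteness): for a unitary evolution U between two measurements with outcomes a and b, the predictive and postdictive probabilities are the same number, since the two amplitudes are complex conjugates of each other:

```latex
P(b \mid a) \;=\; \bigl|\langle b|\,U\,|a\rangle\bigr|^{2}
           \;=\; \bigl|\langle a|\,U^{\dagger}\,|b\rangle\bigr|^{2}
           \;=\; P(a \mid b) .
```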
Symmetries, graph properties, and quantum speedups
Supartha Podder University of Ottawa
Aaronson and Ambainis (2009) and Chailloux (2018) showed that fully symmetric (partial) functions do not admit exponential quantum query speedups. This raises a natural question: how symmetric must a function be before it cannot exhibit a large quantum speedup? In this work, we prove that hypergraph symmetries in the adjacency matrix model allow at most a polynomial separation between randomized and quantum query complexities. We also show that, remarkably, permutation groups constructed out of these symmetries are essentially the only permutation groups that prevent super-polynomial quantum speedups. We prove this by fully characterizing the primitive permutation groups that allow super-polynomial quantum speedups. In contrast, in the adjacency list model for bounded-degree graphs (where graph symmetry is manifested differently), we exhibit a property testing problem that admits an exponential quantum speedup. These results resolve open questions posed by Ambainis, Childs, and Liu (2010) and Montanaro and de Wolf (2013). Based on: arXiv:2006.12760
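To fix terminology (an illustration added here, not code from the talk): a graph property in the adjacency matrix model is a function of the adjacency matrix that is invariant under the S_n action relabeling the vertices. The brute-force check below verifies this invariance on all 3-vertex graphs.

```python
import itertools
import numpy as np

def is_graph_property(f, n):
    """Return True iff f(A) is unchanged by every simultaneous
    row/column permutation of A (i.e., by relabeling vertices)."""
    for bits in itertools.product((0, 1), repeat=n * n):
        A = np.array(bits).reshape(n, n)
        A = np.triu(A, 1)
        A = A + A.T                       # simple undirected graph
        for perm in itertools.permutations(range(n)):
            P = np.eye(n, dtype=int)[list(perm)]   # permutation matrix
            if f(A) != f(P @ A @ P.T):
                return False
    return True

# "contains at least one edge" ignores labels: a graph property
assert is_graph_property(lambda A: A.sum() > 0, 3)
# "vertex 0 has an edge" depends on labels: not a graph property
assert not is_graph_property(lambda A: A[0].sum() > 0, 3)
```

The talk's adjacency matrix model restricts attention to exactly such S_n-invariant (and, for hypergraphs, the analogous) functions.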
Exploring alternatives to quantum nonlocality
Indrajit Sen Chapman University
In recent years, it has become increasingly well-known that nearly all the major no-go theorems in quantum foundations can be circumvented by violating a single assumption: that the hidden variables (which determine the outcomes) are uncorrelated with the measurement settings. A hidden-variable theory that violates this assumption can be local, separable, non-contextual and have an epistemic quantum state. Such a theory would be particularly well-suited to relativistic contexts. Are such theories actually feasible? In this talk, we discuss some results on the two physical options to violate this assumption: superdeterminism and retrocausality.
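The single assumption in question is usually called measurement independence (standard Bell-theorem notation, added here for concreteness): a hidden-variable model reproduces the observed correlations as

```latex
P(A,B \mid a,b) \;=\; \int \mathrm{d}\lambda \;
  \rho(\lambda \mid a,b)\, P(A \mid a,\lambda)\, P(B \mid b,\lambda),
\qquad
\rho(\lambda \mid a,b) \;=\; \rho(\lambda)
\quad \text{(measurement independence)},
```

and superdeterminism and retrocausality are the two physical mechanisms by which the distribution of the hidden variables can come to depend on the settings a and b.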
Developing an intuitive criticism due to Bell, we show that superdeterministic models are conspiratorial in a mathematically well-defined sense in two separate ways. In the first approach, we use the concept of quantum non-equilibrium to show that superdeterministic models require fine-tuning so that the measurement statistics do not depend on the details of how the measurement settings are chosen. In the second approach, we show (without using quantum non-equilibrium) that an arbitrarily large amount of superdeterministic correlation is needed for such models to be consistent. Along the way, we discuss an apparent paradox involving nonlocal signalling in a local superdeterministic model.
Next, we use retrocausality to build a local, separable, psi-epistemic hidden-variable model of Bell correlations with pilot waves in physical space. We generalise the model to describe a relativistic Bell scenario where one of the wings experiences time-dilation effects. By discussing the difficulties other hidden-variable approaches face in describing this scenario, we show that the relativistic properties of the model, otherwise ornamental in the standard Bell scenario, play an important role here. We also discuss the technical difficulties in applying quantum field theory to recover the model's predictions.