- Quantum Computing and Simulation with Atoms: Christopher Monroe (University of Maryland)
- On Entropy from Random Circuit Sampling: Scott Aaronson (University of Texas at Austin)
- Towards Closing the Loopholes of Showing a Quantum Advantage for Quantum Simulators: Jens Eisert (Freie Universität Berlin)
- Quantum Supremacy Using a Programmable Superconducting Processor I: Sergio Boixo (Google Research)
- Amortized FHEW Bootstrapping: Jessica Sorrell (UC San Diego)
- Homomorphic Encryption in the SPDZ Protocol for MPC: Peter Scholl (Aarhus University)
- Practical Lattice-based Cryptography in PALISADE: Yuriy Polyakov (NJIT and Duality)
- Practical Applications of Homomorphic Encryption: Hao Chen (Microsoft Research)
- Quantum-reduced loop gravity from the perspective of full LQG: Ilkka Mäkinen (University of Warsaw)
- Why standard entanglement theory is inappropriate for the study of Bell scenarios: David Schmid (Perimeter Institute for Theoretical Physics)
Quantum Computing and Simulation with Atoms
Christopher Monroe (University of Maryland)

Crystals of trapped atomic ions are among the most promising platforms for fully universal quantum computers as well as simulators of Hamiltonian spin models. Trapped-ion spin qubits have no practical limit to their idle coherence times, and because they are perfectly replicable atomic clocks, the system can be scaled. Small quantum algorithms with up to about 20 qubits and a universal, fully connected, and reconfigurable gate set have been demonstrated, mainly for benchmarking the systems. On the other hand, quantum simulations of hard quantum problems with more than 50 quantum bits have been implemented, studying magnetic quench dynamics, foundational aspects of quantum nonequilibrium statistical mechanics, molecular-structure eigensolvers, and quantum approximate optimization routines. I will speculate on how things might proceed in the next few years in this system. While it remains a great challenge to build a quantum computer big enough to be useful for society, the good news is that with a system of trapped atomic ions there do not appear to be any fundamental limits ahead.
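The spin-model simulations mentioned above are typically of the long-range transverse-field Ising type. As a point of reference (a standard form from the trapped-ion literature, not a formula quoted in the talk), the engineered Hamiltonian reads
\[
H \;=\; \sum_{i<j} J_{ij}\,\sigma^{x}_{i}\sigma^{x}_{j} \;+\; B\sum_{i}\sigma^{z}_{i},
\qquad J_{ij}\approx \frac{J_{0}}{|i-j|^{\alpha}},
\]
with a power-law range exponent roughly tunable in 0 < α < 3 through the laser parameters; the quench experiments referred to above follow the dynamics generated by such an H after a sudden change of its parameters.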
On Entropy from Random Circuit Sampling

Scott Aaronson (University of Texas at Austin)

Two years ago, I proposed that near-term, sampling-based quantum supremacy experiments, such as Google's, could be repurposed to generate bits that are certifiably random under computational assumptions. In this talk, I'll discuss an aspect of that proposal that I haven't had time to go into in earlier talks: namely, how does one actually derive the conclusion that fresh entropy is being generated from relatively "standard" computational assumptions? I'll also discuss the major challenges that remain in turning this proposal into a genuine near-term application of quantum computers.
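As a rough illustration of the quantity such a protocol can actually check (my gloss, not the speaker's notation): for an n-qubit challenge circuit C and returned samples x_1, ..., x_k, the client computes the linear cross-entropy score
\[
b \;=\; \frac{1}{k}\sum_{i=1}^{k} 2^{n}\,\bigl|\langle x_i | C | 0^{n}\rangle\bigr|^{2},
\]
which is close to 2 in expectation for ideal samples from the circuit and close to 1 for uniformly random guesses. Informally, the computational assumption is that no efficient classical adversary can score noticeably above 1 within the allotted response time; granted that, fast responses scoring above some threshold b* > 1 must contain a guaranteed amount of min-entropy, which a classical extractor then converts into nearly uniform output bits.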
Towards Closing the Loopholes of Showing a Quantum Advantage for Quantum Simulators

Jens Eisert (Freie Universität Berlin)

Quantum devices promise the efficient solution of problems out of reach for classical computers. However, before reaching the ultimate goal of realizing fully fault-tolerant quantum computers, significant efforts have been made towards showing an unambiguous quantum advantage. In this talk, we will discuss prospects of achieving a complexity-theoretic quantum advantage that lives up to mathematically rigorous standards and is practically feasible in experiments with translationally invariant quantum simulators [1-4]. On a technical level, we will see how proof techniques of black-box verification [3] and of average-case complexity and anti-concentration come into play. On a physical level, we will elaborate on how such schemes may be realized with realistic near-term quantum devices reminiscent of quantum simulators. Particular emphasis will be put on showing how to prove anti-concentration, building upon recently developed techniques for random circuit sampling. We develop new techniques that exploit the insight that approximate 2-designs for the unitary group admit anti-concentration. We prove that the 2D translation-invariant, constant-depth architectures of quantum simulation form approximate 2-designs, thus obtaining a significantly stronger result [1]. In an interlude interesting in its own right, we will see how higher-order unitary designs can be realized by means of random Clifford gates into which a number of non-Clifford gates is interspersed that is, remarkably, independent of the system size [5]. If time allows, I will briefly mention ongoing efforts towards proving quantum advantages in distribution learning.

[1] Closing gaps of a quantum advantage with short-time Hamiltonian dynamics, J. Haferkamp, D. Hangleiter, A. Bouland, B. Fefferman, J. Eisert, J. Bermejo-Vega, arXiv:1908.08069 (2019).
[2] Architectures for quantum simulation showing a quantum speedup, J. Bermejo-Vega, D. Hangleiter, M. Schwarz, R. Raussendorf, J. Eisert, Phys. Rev. X 8, 021010 (2018).
[3] Sample complexity of device-independently certified quantum supremacy, D. Hangleiter, M. Kliesch, J. Eisert, C. Gogolin, Phys. Rev. Lett. 122, 210502 (2019).
[4] Dynamical structure factors of dynamical quantum simulators, M. Laura Baez, M. Goihl, J. Haferkamp, J. Bermejo-Vega, M. Gluza, J. Eisert, arXiv:1912.06076 (2019).
[5] Quantum homeopathy works: Efficient unitary designs with a system-size independent number of non-Clifford gates, J. Haferkamp, F. Montealegre-Mora, M. Heinrich, J. Eisert, D. Gross, I. Roth, arXiv:2002.09524 (2020).
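To give a flavour of the anti-concentration step (a textbook-style argument, sketched here for an exact rather than an approximate 2-design): for the output probability p_U(x) = |⟨x|U|0^n⟩|^2 with N = 2^n, a 2-design reproduces the Haar moments E[p_U(x)] = 1/N and E[p_U(x)^2] = 2/(N(N+1)), so the Paley-Zygmund inequality yields, for any 0 ≤ α ≤ 1,
\[
\Pr_{U}\!\left[\, p_U(x) \,\ge\, \frac{\alpha}{N} \right]
\;\ge\; (1-\alpha)^{2}\,\frac{\mathbb{E}[p_U(x)]^{2}}{\mathbb{E}[p_U(x)^{2}]}
\;=\; (1-\alpha)^{2}\,\frac{N+1}{2N}
\;\ge\; \frac{(1-\alpha)^{2}}{2}.
\]
That is, a constant fraction of the output probabilities are at least a constant fraction of their mean, which is the anti-concentration property fed into the hardness argument; an approximate 2-design weakens the bound only by a controllable factor.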
Quantum Supremacy Using a Programmable Superconducting Processor I

Sergio Boixo (Google Research)

The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
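The verification step mentioned here is cross-entropy benchmarking. Purely as a toy illustration (a handful of qubits and a Haar-random unitary standing in for Sycamore's circuits, with no noise model), the following sketch estimates the linear cross-entropy score of ideal samples versus uniform classical guesses:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim, rng):
    """Sample a Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases          # fix column phases so the distribution is exactly Haar

def linear_xeb(probs, samples):
    """Linear cross-entropy benchmark: 2^n * mean ideal probability of the samples, minus 1."""
    return len(probs) * probs[samples].mean() - 1.0

n = 10                                    # toy number of qubits
dim = 2 ** n
U = haar_unitary(dim, rng)
probs = np.abs(U[:, 0]) ** 2              # output distribution of U acting on |0...0>

k = 100_000
ideal = rng.choice(dim, size=k, p=probs)  # samples an ideal quantum device would return
uniform = rng.integers(dim, size=k)       # what uniform classical guessing would return

print(f"ideal samples:   {linear_xeb(probs, ideal):.3f}  (close to 1 for a Porter-Thomas distribution)")
print(f"uniform guesses: {linear_xeb(probs, uniform):.3f}  (close to 0)")
```

A noisy device interpolates between the two scores, which is how the experiment estimates its circuit fidelity from the samples alone.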
Amortized FHEW Bootstrapping

Jessica Sorrell, UC San Diego

The FHEW fully homomorphic encryption scheme of Ducas and Micciancio offers very fast homomorphic NAND-gate computations (on encrypted data) and a relatively fast refreshing procedure that allows one to homomorphically evaluate arbitrary Boolean circuits of NAND gates. Unfortunately, the refreshing procedure needs to be executed after every single NAND computation, and each refreshing operates on a single encrypted bit, greatly decreasing the overall throughput of the scheme. We give a new refreshing procedure that simultaneously refreshes n FHEW ciphertexts, at a cost comparable to a single-bit FHEW refreshing operation. As a result, the cost of each refreshing is amortized over n encrypted bits, improving the throughput for the homomorphic evaluation of Boolean circuits by roughly a factor of n.
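To state the throughput claim as a back-of-the-envelope cost model (my restatement, not figures from the paper): if a single-bit FHEW refreshing costs time T and the batched procedure refreshes n ciphertexts at cost T' ≈ cT for a modest constant c, then the amortized cost per encrypted bit drops from T to
\[
\frac{T'}{n} \;\approx\; \frac{c\,T}{n},
\]
so the NAND-plus-refresh pipeline speeds up by roughly a factor of n/c.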
Homomorphic Encryption in the SPDZ Protocol for MPC

Peter Scholl, Aarhus University

No abstract available.
Practical Lattice-based Cryptography in PALISADE

Yuriy Polyakov, NJIT and Duality

Several lattice-based cryptography primitives and protocols are now practical and even available in commercial products, for example public-key cryptography, homomorphic encryption, proxy re-encryption (PRE), and digital signatures. Many of these primitives, based on the Learning With Errors problem, are implemented in the PALISADE lattice cryptography library. This talk presents a survey of state-of-the-art lattice algorithms implemented in PALISADE (both for already practical primitives and for research prototypes of more advanced capabilities) and discusses some of their applications. The first part of the talk focuses on FHE schemes and their extensions, such as PRE and threshold FHE. The remaining part is centered around lattice trapdoors and the protocols where lattice trapdoors are used as a primitive, including identity-based encryption, key-policy attribute-based encryption, and program obfuscation.
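Since the primitives surveyed here rest on the Learning With Errors (LWE) problem, the following is a minimal, toy Regev-style LWE bit encryption written in plain Python/NumPy. It is purely illustrative of the underlying hardness assumption; it is not the PALISADE API, and the parameters are far too small to be secure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy LWE parameters (illustration only; nowhere near real security levels).
n, m, q = 32, 128, 2**15

def keygen():
    """Secret key s and public key (A, b = A*s + e mod q) with small noise e."""
    s = rng.integers(0, q, size=n)
    A = rng.integers(0, q, size=(m, n))
    e = rng.integers(-2, 3, size=m)              # small error in {-2, ..., 2}
    return s, (A, (A @ s + e) % q)

def encrypt(pk, bit):
    """Encrypt one bit: random 0/1 combination of public-key rows, bit placed at q/2."""
    A, b = pk
    r = rng.integers(0, 2, size=m)
    return (r @ A) % q, (r @ b + bit * (q // 2)) % q

def decrypt(sk, ct):
    """Recover the bit by testing whether the noisy value lies closer to q/2 than to 0."""
    c1, c2 = ct
    v = (c2 - c1 @ sk) % q
    return int(q // 4 < v < 3 * q // 4)

sk, pk = keygen()
for bit in (0, 1, 1, 0):
    assert decrypt(sk, encrypt(pk, bit)) == bit
print("toy LWE round-trip OK")
```

Schemes such as FHE, PRE, and lattice trapdoors build far more structure on top of this basic noisy-linear-algebra idea, which is what PALISADE packages in optimized form.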
Practical Applications of Homomorphic Encryption

Hao Chen, Microsoft Research

No abstract available.
Quantum-reduced loop gravity from the perspective of full LQG

Ilkka Mäkinen (University of Warsaw)
Quantum-reduced loop gravity is a model of loop quantum gravity, whose characteristic feature is the considerable simplicity of its kinematical structure in comparison with that of full loop quantum gravity. The model therefore provides an accessible testing ground for probing the physical implications of loop quantum gravity. In my talk I will give a brief introduction to quantum-reduced loop gravity, and examine the relation between the quantum-reduced model and full loop quantum gravity. In particular, I will focus on clarifying how the operators of the quantum reduced model are related to those of the full theory. I will show that despite their simplicity, the operators of the quantum-reduced model are simply the operators of the full theory acting on states in the Hilbert space of the quantum-reduced model. In order to pass from the full theory operators to the "reduced" operators, one only has to keep in mind that the states of the quantum-reduced model are labeled with large spins, and discard terms which are of lower than leading order in j.
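A minimal illustration of the large-j prescription described above (my example, not one taken from the talk): the full-theory area operator contributes the SU(2) Casimir for each link, and keeping only the leading order in the spin label j amounts to
\[
8\pi\gamma\,\ell_{P}^{2}\,\sqrt{j(j+1)}
\;=\; 8\pi\gamma\,\ell_{P}^{2}\left( j + \tfrac{1}{2} + O(1/j) \right)
\;\longrightarrow\; 8\pi\gamma\,\ell_{P}^{2}\, j ,
\]
which is the sense in which a full-theory operator, evaluated on states labeled with large spins, reduces to its simpler quantum-reduced counterpart once subleading terms in j are dropped (here γ is the Barbero-Immirzi parameter and ℓ_P the Planck length).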
Why standard entanglement theory is inappropriate for the study of Bell scenarios
David Schmid (Perimeter Institute for Theoretical Physics)
A standard approach to quantifying resources is to determine which operations on the resources are freely available and to deduce the ordering relation among the resources that these operations induce. If the resource of interest is the nonclassicality of the correlations embodied in a quantum state, that is, entanglement, then it is typically presumed that the appropriate choice of free operations is local operations and classical communication (LOCC). We here argue that, in spite of the near-universal endorsement of the LOCC paradigm by the quantum information community, this is the wrong choice for one of the most prominent applications of entanglement theory, namely, the study of Bell scenarios. The nonclassicality of correlations in such scenarios, we argue, should be quantified instead by local operations and shared randomness (LOSR). We support this thesis by showing that various perverse features of the interplay between entanglement and nonlocality are merely an artifact of the use of LOCC-entanglement and that the interplay between LOSR-entanglement and nonlocality is natural and intuitive. Specifically, we show that the LOSR paradigm (i) provides a resolution of the "anomaly of nonlocality", wherein partially entangled states exhibit more nonlocality than maximally entangled states, (ii) entails a notion of genuine multipartite entanglement that is distinct from the conventional one and which is free of several of its pathological features, and (iii) makes possible a resource-theoretic account of the self-testing of entangled states which simplifies and generalizes prior results. Along the way, we derive some fundamental results concerning the necessary and sufficient conditions for convertibility between pure entangled states under LOSR and highlight some of their consequences, such as the impossibility of catalysis for bipartite pure states.
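For readers unfamiliar with the two classes of free operations being contrasted (standard definitions, not wording from the talk): a bipartite LOSR operation has the form
\[
\rho_{AB} \;\mapsto\; \sum_{\lambda} p(\lambda)\,\bigl(\mathcal{E}^{A}_{\lambda}\otimes\mathcal{E}^{B}_{\lambda}\bigr)(\rho_{AB}),
\]
where λ is classical shared randomness and each \mathcal{E}^{A}_{\lambda}, \mathcal{E}^{B}_{\lambda} is a local channel, with no communication between the parties. LOCC additionally permits rounds of classical communication, which is exactly the resource that a Bell experiment denies the parties once the measurement settings are chosen.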