-
Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians
Thom Bohdanowicz California Institute of Technology
-
General resource theories in quantum mechanics and beyond: operational characterization via discrimination tasks
Ryuji Takagi Massachusetts Institute of Technology (MIT)
-
Entropic uncertainty relations for quantum-information scrambling
Nicole Yunger Halpern National Institute of Standards and Technology
-
A Quantum Multiparty Packing Lemma and the Relay Channel
Hrant Gharibyan Stanford University
-
Circuit Hamiltonians, Hamiltonian complexity, and approximate error correction.
Henry Yuen University of Toronto
-
Covariant Quantum Error correction: an approximate Eastin-Knill theorem, reference frame encoding, and continuous symmetries in AdS/CFT
Grant Salton Amazon.com
-
Leggett-Garg Inequalities: Decisive Tests for Macrorealism and Protocols for Non-Invasive Measurements
Jonathan Halliwell Imperial College London
-
Decay of correlations in long-range interacting systems at non-zero temperature
Senaida Hernandez Santana Institute of Photonic Sciences (ICFO)
-
Post-selected classical query complexity
Chris Cade
-
Untangling entanglement and chaos
Meenu Kumari National Research Council Canada (NRC)
-
Classifying the simulation complexities of extended Clifford circuits
Dax Koh Institute of High Performance Computing
-
From estimation of quantum probabilities to simulation of quantum circuits
Hakop Pashayan Freie Universität Berlin
-
Good Approximate Quantum LDPC Codes from Spacetime Circuit Hamiltonians
Thom Bohdanowicz California Institute of Technology
We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute. Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e. each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist? We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of [[N,k,d,ε]] approximate QLDPC codes that encode k = Ω(N/polylog N) logical qubits into N physical qubits with distance d = Ω(N/polylog N) and approximation infidelity ε = 1/polylog N. We prove the existence of an efficient encoding map, and we show that arbitrary Pauli errors can be locally detected by circuits of polylogarithmic depth. Finally, we show that the spectral gap of the code Hamiltonian is Ω(N^(-3.09)) (up to polylog(N) factors) by analyzing a spacetime circuit-to-Hamiltonian construction for a bitonic sorting network architecture that is spatially local in polylog(N) spatial dimensions. (Joint work with Elizabeth Crosson, Chinmay Nirkhe, and Henry Yuen, arXiv:1811.00277)
-
General resource theories in quantum mechanics and beyond: operational characterization via discrimination tasks
Ryuji Takagi Massachusetts Institute of Technology (MIT)
One of the central problems in the study of quantum resource theories is to provide a given resource with an operational meaning, characterizing physical tasks relevant to information processing in which the resource can give an explicit advantage over all resourceless states. We show that this can always be accomplished for all convex resource theories. We establish in particular that any resource state enables an advantage in a channel discrimination task, allowing for a strictly greater success probability than any state without the given resource. Furthermore, we find that the generalized robustness measure serves as an exact quantifier for the maximal advantage enabled by the given resource state in a class of subchannel discrimination problems, providing a universal operational interpretation to this fundamental resource quantifier.
Next, we significantly extend the above consideration beyond "quantum" resource theories of "states"; we establish an operational characterization of general convex resource theories --- describing the resource content of not only states, but also measurements and channels, both within quantum mechanics and in general probabilistic theories (GPTs) --- in the context of state and channel discrimination. We find that discrimination tasks provide a unified operational description for quantification and manipulation of resources by showing that the family of robustness measures can be understood as the maximum advantage provided by any physical resource in several different discrimination tasks, as well as establishing that such discrimination problems can fully characterize the allowed transformations within the given resource theory. Our results establish a fundamental connection between the operational tasks of discrimination and core concepts of resource theories --- the geometric quantification of resources and resource manipulation --- valid for all physical theories beyond quantum mechanics with no additional assumptions about the structure of the GPT required.
References:
[1] Ryuji Takagi, Bartosz Regula, Kaifeng Bu, Zi-Wen Liu, and Gerardo Adesso, "Operational Advantage of Quantum Resources in Subchannel Discrimination", Phys. Rev. Lett. 122.140402 (2019)
[2] Ryuji Takagi and Bartosz Regula, "General resource theories in quantum mechanics and beyond: operational characterization via discrimination tasks", arXiv: 1901.08127
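As a concrete anchor for the robustness quantifier discussed above, consider the specific resource theory of coherence, where the generalized robustness of a pure state |ψ⟩ = Σᵢ cᵢ|i⟩ in the incoherent basis has the known closed form (Σᵢ |cᵢ|)² − 1 (a standard result in the coherence literature). A minimal sketch, with a function name of our choosing:

```python
import math

def pure_state_robustness_of_coherence(amplitudes):
    """Closed form for pure states: R_C(|psi>) = (sum_i |c_i|)^2 - 1."""
    total = sum(abs(c) for c in amplitudes)
    return total * total - 1.0

# Maximally coherent qubit state |+> = (|0> + |1>)/sqrt(2): R_C = 1
r = pure_state_robustness_of_coherence([1 / math.sqrt(2), 1 / math.sqrt(2)])
print(round(r, 6))  # 1.0
```

By the results of [1], this number is exactly the maximal multiplicative advantage the state |+⟩ provides over incoherent states in the corresponding class of subchannel discrimination tasks.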
-
Entropic uncertainty relations for quantum-information scrambling
Nicole Yunger Halpern National Institute of Standards and Technology
How violently do two quantum operators disagree? Different subfields of physics feature different notions of incompatibility: i) In quantum information theory, uncertainty relations are cast in terms of entropies. These entropic uncertainty relations constrain measurement outcomes. ii) Condensed matter and high-energy physics feature interacting quantum many-body systems, such as spin chains. A local perturbation, such as a Pauli operator on one side of a chain, spreads through many-body entanglement. The perturbation comes to overlap, and to disagree, with probes localized on the opposite side of the system. This disagreement signals that quantum information about the perturbation has scrambled, or become hidden in highly nonlocal correlations. I will unite these two notions of quantum operator disagreement, presenting an entropic uncertainty relation for quantum-information scrambling. The entropies are of distributions over weak and strong measurements' possible outcomes. The uncertainty bound strengthens when a spin chain scrambles in numerical simulations. Hence the subfields - quantum information, condensed matter, and high-energy physics - can agree about when quantum operations disagree. Our relation can be tested experimentally with superconducting qubits, trapped ions, and quantum dots.
Yunger Halpern, Bartolotta, and Pollack, Communications Physics (in press), arXiv:1806.04147.
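A minimal numerical sketch of the scrambling diagnostic behind this talk: the infinite-temperature out-of-time-ordered correlator F(t) = Tr[W(t) V W(t) V]/d between Pauli operators on opposite ends of a small spin chain. The mixed-field Ising model and its parameters below are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site, op, n):
    """Embed a single-qubit operator at `site` in an n-qubit chain."""
    out = np.array([[1.0 + 0j]])
    for i in range(n):
        out = np.kron(out, op if i == site else I2)
    return out

n = 4
# Mixed-field Ising chain (chaotic parameter choice g = 1.05, h = 0.5)
H = sum(-op_on(i, Z, n) @ op_on(i + 1, Z, n) for i in range(n - 1))
H = H + sum(-1.05 * op_on(i, X, n) - 0.5 * op_on(i, Z, n) for i in range(n))

evals, evecs = np.linalg.eigh(H)
def U(t):
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

W0, V = op_on(0, Z, n), op_on(n - 1, Z, n)  # probes on opposite ends
d = 2 ** n
for t in (0.0, 2.0):
    Wt = U(t).conj().T @ W0 @ U(t)          # Heisenberg-picture W(t)
    F = np.trace(Wt @ V @ Wt @ V).real / d  # infinite-temperature OTOC
    print(f"t={t}: F={F:.3f}")
```

At t = 0 the end-of-chain operators commute and F = 1; as the perturbation spreads, F decays, signaling the operator disagreement the entropic relation bounds.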
-
A Quantum Multiparty Packing Lemma and the Relay Channel
Hrant Gharibyan Stanford University
Optimally encoding classical information in a quantum system is one of the oldest and most fundamental challenges of quantum information theory. Holevo’s bound places a hard upper limit on such encodings, while the Holevo-Schumacher-Westmoreland (HSW) theorem addresses the question of how many classical messages can be “packed” into a given quantum system. In this article, we use Sen’s recent quantum joint typicality results to prove a one-shot multiparty quantum packing lemma generalizing the HSW theorem. The lemma is designed to be easily applicable in many network communication scenarios. As an illustration, we use it to straightforwardly obtain quantum generalizations of well-known classical coding schemes for the relay channel: multihop, coherent multihop, decode-forward, and partial decode-forward. We provide both finite blocklength and asymptotic results, the latter matching existing formulas. Given the key role of the classical packing lemma in network information theory, our packing lemma should help open the field to direct quantum generalization.
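To make the Holevo bound mentioned above concrete: for an ensemble {pᵢ, ρᵢ}, the quantity χ = S(Σᵢ pᵢ ρᵢ) − Σᵢ pᵢ S(ρᵢ) upper-bounds the classical information extractable from the encoding. A hand-rolled sketch for an equiprobable qubit ensemble of the non-orthogonal pure states |0⟩ and |+⟩ (an illustrative example of ours, not from the paper):

```python
import math

def entropy_2x2(a, b, c):
    """Von Neumann entropy (bits) of a real symmetric 2x2 density matrix
    [[a, b], [b, c]], computed from its two eigenvalues."""
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    ent = 0.0
    for lam in ((a + c + disc) / 2, (a + c - disc) / 2):
        if lam > 1e-12:
            ent -= lam * math.log2(lam)
    return ent

# Equiprobable ensemble of pure states {|0>, |+>}; average state:
# rho = 1/2 |0><0| + 1/2 |+><+| = [[3/4, 1/4], [1/4, 1/4]]
chi = entropy_2x2(0.75, 0.25, 0.25) - 0.0  # pure members have zero entropy
print(f"Holevo quantity chi = {chi:.4f} bits")  # ~0.6009, below 1 bit
```

The overlap between the two signal states drives χ below one bit per qubit; packing lemmas such as the one in this talk control how many messages can be reliably distinguished at rates governed by such quantities.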
-
Circuit Hamiltonians, Hamiltonian complexity, and approximate error correction.
Henry Yuen University of Toronto
In this talk, I will discuss some interesting connections between Hamiltonian complexity, error correction, and quantum circuits. First, motivated by the Quantum PCP Conjecture, I will describe a construction of a family of local Hamiltonians where the complexity of ground states — even when subject to large amounts of noise — is superpolynomial (under plausible complexity assumptions). The construction is simple, making use of the well-known Feynman-Kitaev circuit Hamiltonian construction.
Next, I will describe how this complexity result can be repurposed to construct local approximate error correcting codes with (nearly) linear distance and (nearly) constant rate. By contrast, it remains an open problem to construct quantum low-density parity check (QLDPC) stabilizer codes that achieve similarly good parameters.
An interesting component of our code construction is the Breuckmann-Terhal spacetime circuit Hamiltonian construction, and a novel analysis of its spectral gap. Our codes are ground spaces of non-commuting, but frustration-free, local Hamiltonians. We believe that our results present additional evidence that the study of approximate error correction and non-stabilizer codes may be much richer than previously thought.
This talk will have more questions than answers, and is based on two papers:
· https://arxiv.org/abs/1802.07419 (joint with Chinmay Nirkhe and Umesh Vazirani)
· https://arxiv.org/abs/1811.00277 (joint work with Thom Bohdanowicz, Elizabeth Crosson, and Chinmay Nirkhe)
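For readers unfamiliar with the Feynman-Kitaev construction referenced above: its propagation term penalizes any state whose clock-t and clock-(t+1) components are not related by the circuit's t-th gate, so the zero-energy ground space consists of "history states". A toy check for a one-gate circuit with a single clock qubit (variable names and the choice of gate are ours):

```python
import numpy as np

# Feynman-Kitaev propagation term for a single gate U, one-qubit clock:
# H_prop = 1/2 (|0><0| + |1><1|) x I - 1/2 (|1><0| x U + |0><1| x U^dag)
H_gate = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
P0 = np.array([[1, 0], [0, 0]]); P1 = np.array([[0, 0], [0, 1]])
S01 = np.array([[0, 1], [0, 0]]); S10 = np.array([[0, 0], [1, 0]])
I2 = np.eye(2)
H_prop = 0.5 * (np.kron(P0 + P1, I2)
                - np.kron(S10, H_gate)
                - np.kron(S01, H_gate.conj().T))

# History state for input |0>: (|0>|0> + |1> H|0>) / sqrt(2)
psi0 = np.array([1, 0], dtype=complex)
hist = (np.kron([1, 0], psi0) + np.kron([0, 1], H_gate @ psi0)) / np.sqrt(2)
energy = np.real(hist.conj() @ H_prop @ hist)
print(f"history-state energy = {energy:.2e}")  # 0: frustration-free ground state
```

States that stop at clock value 0 without applying the gate pick up energy 1/2 from the same term, which is the penalty structure whose spectral gap the talk analyzes for the spacetime variant.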
-
Covariant Quantum Error correction: an approximate Eastin-Knill theorem, reference frame encoding, and continuous symmetries in AdS/CFT
Grant Salton Amazon.com
In the usual paradigm of quantum error correction, the information to be protected can be encoded in a system of abstract qubits or modes. But how does this work for physical information, which cannot be described in this way? Just as direction information cannot be conveyed using a sequence of words if the parties involved do not share a reference frame, physical quantum information cannot be conveyed using a sequence of qubits or modes without a shared reference frame. Covariant quantum error correction is a procedure for protecting such physical information against noise in such a way that the encoding and decoding operations transform covariantly with respect to an external symmetry group. In this talk, we'll study covariant QEC, and we will see that there do not exist finite dimensional quantum codes that are covariant with respect to continuous symmetries. Conversely, we'll see that there do exist finite codes for finite groups, and continuous variable (CV) codes for continuous groups. This leads to a CV method of circumventing the Eastin-Knill theorem. By relaxing our requirements to allow for only approximate error correction and covariance, we'll find a fundamental tension between a code's ability to approximately correct errors and covariance with respect to a continuous symmetry. In this way, we'll arrive at an approximate version of the Eastin-Knill theorem, and we'll end by learning what covariant QEC tells us about continuous symmetries in AdS/CFT, among other applications.
-
Leggett-Garg Inequalities: Decisive Tests for Macrorealism and Protocols for Non-Invasive Measurements
Jonathan Halliwell Imperial College London
The Leggett-Garg (LG) inequalities were introduced, as a temporal parallel of the Bell inequalities, to test macroscopic realism -- the view that a macroscopic system evolving in time possesses definite properties which can be determined without disturbing the future or past state. The talk will begin with a review of the LG framework. Unlike the Bell inequalities, the original LG inequalities are only a necessary condition for macrorealism, and are therefore not a decisive test. I argue, for the case of measurements of a single dichotomic variable Q, that when the original four three-time LG inequalities are augmented with a set of twelve two-time inequalities also of the LG form, Fine's theorem applies and these augmented conditions are then both necessary and sufficient [1]. A comparison is carried out with the alternative necessary and sufficient conditions for macrorealism based on no-signaling in time conditions which ensure that all probabilities for Q at one and two times are independent of whether earlier or intermediate measurements are made. I argue that the two tests differ in their implementation of the key requirement of non-invasive measurability so are testing different notions of macrorealism, and these notions are elucidated. I also describe some alternative protocols which achieve non-invasiveness, one involving continuous measurement of the velocity conjugate to Q [2], which was recently implemented in an experiment at IQC, the other involving a modification of the standard ideal negative measurement protocol [3].
[1] J. J. Halliwell, Phys. Rev. A 96, 012121 (2017); Phys. Rev. A 93, 022123 (2016); arXiv:1811.10408.
[2] J. J. Halliwell, Phys. Rev. A 94, 052114 (2016).
[3] J. J. Halliwell, Phys. Rev. A 99, 022119 (2019).
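As a worked instance of the two-time LG correlators discussed in the talk: for a spin-1/2 precessing at frequency ω and measured via Q = σ_z at three equally spaced times, C(tᵢ, tⱼ) = cos ω(tⱼ − tᵢ), and the combination K = C₁₂ + C₂₃ − C₁₃ reaches the quantum maximum 3/2, violating the macrorealist bound K ≤ 1. This is the standard textbook example rather than anything specific to refs [1-3]:

```python
import math

def lg_parameter(omega, tau):
    """K = C12 + C23 - C13 for a precessing spin-1/2 measured via Q = sigma_z
    at times 0, tau, 2*tau, with C(t_i, t_j) = cos(omega * (t_j - t_i))."""
    c12 = c23 = math.cos(omega * tau)
    c13 = math.cos(omega * 2 * tau)
    return c12 + c23 - c13

# Macrorealist bound: K <= 1.  Quantum maximum 3/2 at omega * tau = pi / 3.
k_max = lg_parameter(1.0, math.pi / 3)
print(f"K = {k_max:.4f}")  # 1.5000, violating K <= 1
```

The augmented two-time inequalities of [1] tighten this three-time test into a necessary and sufficient condition for macrorealism.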
-
Decay of correlations in long-range interacting systems at non-zero temperature
Senaida Hernandez Santana Institute of Photonic Sciences (ICFO)
We study correlations in fermionic systems with long-range interactions in thermal equilibrium. We prove an upper bound on the correlation decay between anticommuting operators based on long-range Lieb-Robinson type bounds. Our result shows that correlations between such operators in fermionic long-range systems of spatial dimension $D$ with at most two-site interactions decaying algebraically with the distance with an exponent $\alpha \geq 2\,D$, decay at least algebraically with an exponent arbitrarily close to $\alpha$. Our bound is asymptotically tight, which we demonstrate by numerically analysing density-density correlations in a 1D quadratic (free, exactly solvable) model, the Kitaev chain with long-range interactions. Away from the quantum critical point, correlations in this model are found to decay asymptotically as slowly as our bound permits.
-
Post-selected classical query complexity
Chris Cade
The precise relationship between post-selected classical and
post-selected quantum computation is an open problem in complexity
theory. Post-selection has proven to be a useful tool in uncovering some
of the differences between quantum and classical theories, in
foundations and elsewhere. This is no less true in the area of
computational complexity -- quantum computations augmented with
post-selection are thought to be vastly more powerful than their
classical counterparts. However, the precise reasons why this might be
the case are not well understood, and no rigorous separations between
the two have been found. In this talk, I will consider the difference in
computational power of classical and quantum post-selection in the
computational query complexity setting.
We define post-selected classical query algorithms, and relate them to
rational approximations of Boolean functions; in particular, by showing
that the post-selected classical query complexity of a Boolean function
is equal to the minimal degree of a rational function with nonnegative
coefficients that approximates it (up to a factor of two). For
post-selected quantum query algorithms, a similar relationship was shown
by Mahadev and de Wolf, where the rational approximations are allowed to
have negative coefficients. Using our characterisation, we find an
exponentially large separation between post-selected classical query
complexity and post-selected quantum query complexity, by proving a
lower bound on the degree of rational approximations to the Majority
function.
-
Untangling entanglement and chaos
Meenu Kumari National Research Council Canada (NRC)
How does classical chaos affect the generation of quantum entanglement? What signatures of chaos exist at the quantum level and how can they be quantified? These questions have puzzled physicists for a couple of decades now. We answer these questions in spin systems by analytically establishing a connection between entanglement generation and a measure of delocalization of a quantum state in such systems. While delocalization is a generic feature of quantum chaotic systems, it is more nuanced in regular systems. We explore when the quantum dynamics mimics a localized classical trajectory, and find criteria to quantify Bohr's correspondence principle in periodically driven spin systems. These criteria are typically violated in a deep quantum regime due to delocalized evolution. Using our criteria, we establish that entanglement is a signature of chaos only in a semiclassical regime. Our work provides a new approach to analyzing quantum chaos and designing systems that can efficiently generate entanglement. This work has been done in collaboration with Prof. Shohini Ghose.
References: arXiv:1806.10545 (2018) and PRE 97, 052209 (2018).
-
Classifying the simulation complexities of extended Clifford circuits
Dax Koh Institute of High Performance Computing
Extended Clifford circuits straddle the boundary between classical and quantum computational power. Whether such circuits are efficiently classically simulable seems to depend delicately on the ingredients of the circuits. While some combinations of ingredients lead to efficiently classically simulable circuits, other combinations, which might just be slightly different, lead to circuits which are likely not. We extend the results of Jozsa and Van den Nest [Quantum Inf. Comput. 14, 633 (2014)] by studying various further extensions of Clifford circuits. First, we consider how the classical simulation complexity changes when we allow for more general measurements. Second, we investigate different notions of what it means to "classically simulate" a quantum circuit. Third, we consider the class of conjugated Clifford circuits, where one conjugates every qubit in a Clifford circuit by the same single-qubit gate. Our results provide more examples where seemingly modest changes to the ingredients of Clifford circuits lead to "large" changes in the classical simulation complexities of the circuits, and also include new examples of extended Clifford circuits that are potential candidates for "quantum supremacy". Based on Quantum Inf. Comput. 17, 0262 (2017) and proceedings of CCC’18 pp.21:1–21:25 (2018) (joint work with Adam Bouland and Joseph F. Fitzsimons).
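The classical-simulability side of these results rests on the stabilizer formalism: Clifford gates map Pauli strings to Pauli strings under conjugation, so n generators suffice to track an n-qubit stabilizer state. A phase-free sketch of this bookkeeping (a full simulator, e.g. the Aaronson-Gottesman CHP tableau, also tracks sign bits; the data layout and names here are ours):

```python
# Each stabilizer generator of an n-qubit state is tracked as bit lists:
# x[q] = 1 means an X factor on qubit q, z[q] = 1 means a Z factor
# (both set -> Y, up to phase).  Clifford gates act by conjugation.

def apply_h(gen, q):        # H: X <-> Z on qubit q
    gen["x"][q], gen["z"][q] = gen["z"][q], gen["x"][q]

def apply_s(gen, q):        # S: X -> Y (picks up a Z factor), Z -> Z
    gen["z"][q] ^= gen["x"][q]

def apply_cnot(gen, c, t):  # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
    gen["x"][t] ^= gen["x"][c]
    gen["z"][c] ^= gen["z"][t]

def pauli_str(gen):
    return "".join("IXZY"[gen["x"][q] + 2 * gen["z"][q]]
                   for q in range(len(gen["x"])))

# |00> is stabilized by {ZI, IZ}; H on qubit 0 then CNOT(0,1) -> Bell pair.
gens = [{"x": [0, 0], "z": [1, 0]}, {"x": [0, 0], "z": [0, 1]}]
for g in gens:
    apply_h(g, 0)
    apply_cnot(g, 0, 1)
print([pauli_str(g) for g in gens])  # ['XX', 'ZZ'] stabilize the Bell state
```

The extensions studied in the talk (general inputs, adaptive or non-Pauli measurements, conjugation by a fixed single-qubit gate) each break some ingredient of this bookkeeping, which is why their simulation complexities shift so delicately.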
-
From estimation of quantum probabilities to simulation of quantum circuits
Hakop Pashayan Freie Universität Berlin
We show that there are two distinct obstacles to an efficient
classical simulation of a general quantum circuit. The first obstacle, which
has been well-studied, is the difficulty of efficiently estimating the
probability associated with a particular measurement outcome. We show that
this alone does not determine whether a quantum circuit can be efficiently
simulated. The second obstacle is that, in general, there can be an
exponential number of `relevant' outcomes that are needed for an accurate
simulation, and so efficient simulation may not be possible even in
situations where the probabilities of individual outcomes can be efficiently
estimated. We show that these two obstacles are distinct: overcoming the
former is necessary but not sufficient for simulability, while overcoming
both is jointly sufficient. Specifically, we
prove that a family of quantum circuits is efficiently simulable if it
satisfies two properties: one related to the efficiency of Born rule
probability estimation, and the other related to the sparsity of the outcome
distribution. We then prove a pair of hardness results (using standard
complexity assumptions and a variant of a commonly-used average case
hardness conjecture), where we identify families of quantum circuits that
satisfy one property but not the other, and for which efficient simulation
is not possible. To prove our results, we consider a notion of simulation of
quantum circuits that we call epsilon-simulation. This notion is less
stringent than exact sampling and is now in common use in quantum hardness
proofs.
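The two properties above suggest a simple simulation template: estimate the probability of each candidate outcome and sample from the renormalized estimates. The sketch below assumes an artificially easy setting in which a small candidate support is known in advance (the paper's sparsity condition is more general, and the names here are ours):

```python
import random

def tv_distance(p, q):
    """Total variation distance between two outcome distributions (dicts)."""
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in set(p) | set(q))

def epsilon_simulate(estimate, support, n_samples, rng):
    """Draw samples using only (additive-error) probability estimates,
    renormalized over a known sparse candidate support."""
    weights = [max(estimate(x), 0.0) for x in support]
    return rng.choices(support, weights=weights, k=n_samples)

# Toy target: a 2-sparse distribution over 3-bit outcomes
true_p = {"000": 0.5, "111": 0.5}
noisy = lambda x: true_p.get(x, 0.0) + 0.01  # estimator with small additive error
rng = random.Random(0)
support = ["000", "011", "101", "111"]
samples = epsilon_simulate(noisy, support, 10000, rng)
emp = {x: samples.count(x) / len(samples) for x in set(samples)}
print(f"TV(empirical, target) = {tv_distance(emp, true_p):.3f}")  # small
```

When either ingredient fails, estimation accuracy or sparsity, this template breaks down, mirroring the hardness results described in the abstract.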