I describe a novel way to produce states associated with geodesic motion of classical particles in the bulk of AdS, arising from particular operator insertions at the boundary
at a fixed time. When extended to black hole setups, one can understand how to map the geometric information of the geodesics back to
the properties of these operators. In particular, the presence of stable circular orbits in global AdS is analyzed. The classical Innermost Stable Circular Orbit
(ISCO) plays an important role, separating metastable excitations from those that quickly fall into the black hole horizon. In the dual CFT, this metastability must be a non-perturbative effect due to the curvature of the boundary.
Pauli channels are ubiquitous in quantum information, both as a dominant noise source in many computing architectures and as a practical model for analyzing error correction and fault tolerance. Here we prove several results on efficiently learning Pauli channels, and more generally the Pauli projection of a quantum channel. We first derive a procedure for learning a Pauli channel on n qubits to a fixed relative precision with O(n 2^n) measurements. For a Pauli channel with only s nonzero error rates, we show how to learn it with only O(s n) measurements. Finally, we show that when the Pauli channel is given by a Markov field with at most k-local correlations, we can learn an entire n-qubit Pauli channel with only O(n^2 log n) measurements, which is efficient in the number of qubits. These results enable a host of applications beyond just characterizing noise in a large-scale quantum system: they pave the way to tailoring quantum codes, optimizing decoders, and customizing fault tolerance procedures to suit a particular device. Joint work with Robin Harper, Joel Wallman, and Wenjun Yu, arXiv:1907.13022, arXiv:1907.12976.
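To make the underlying estimation problem concrete, here is a minimal numpy sketch (our illustration, not the authors' protocol) of the standard fact such procedures exploit: the error rates p_a of a Pauli channel and its Pauli eigenvalues are related by an invertible Walsh-Hadamard-type transform over the symplectic inner product, so estimated eigenvalues determine the error rates. The toy channel, index choices, and noise level below are assumptions for illustration only.

```python
# Sketch: recovering Pauli-channel error rates from estimated Pauli
# eigenvalues. For L(rho) = sum_a p_a P_a rho P_a, each Pauli P_b is an
# eigenoperator with eigenvalue lam_b = sum_a (-1)^{<a,b>} p_a, where
# <a,b> = 0/1 records whether P_a and P_b commute/anticommute. Since the
# symplectic form is non-degenerate, W^2 = 4^n I, so p = 4^{-n} W lam.
import itertools
import numpy as np

n = 2  # qubits (kept tiny so the 4^n x 4^n transform is explicit)

# Label each Pauli by bit vectors (x, z) of length n: P ~ X^x Z^z.
paulis = [np.array(bits) for bits in itertools.product([0, 1], repeat=2 * n)]

def sympl(a, b):
    """Symplectic inner product: 1 iff P_a and P_b anticommute."""
    xa, za = a[:n], a[n:]
    xb, zb = b[:n], b[n:]
    return (xa @ zb + za @ xb) % 2

W = np.array([[(-1) ** sympl(a, b) for a in paulis] for b in paulis])

# A sparse "ground truth" channel: identity with prob 0.97, two error terms.
p_true = np.zeros(4 ** n)
p_true[0] = 0.97
p_true[3] = 0.02   # one Pauli error
p_true[9] = 0.01   # another Pauli error

lam = W @ p_true                                      # exact eigenvalues
lam_hat = lam + np.random.normal(0, 1e-3, lam.shape)  # finite-shot noise

p_hat = W @ lam_hat / 4 ** n                          # invert the transform
print("max |p_hat - p_true| =", np.abs(p_hat - p_true).max())
```

The sparse case in the abstract corresponds to p_true having only s nonzero entries, which is what allows the measurement count to drop from O(n 2^n) to O(s n).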
Testbed-class quantum computers -- fully programmable 5-50 qubit systems -- have burst onto the scene in the past few years. The associated surge in funding, hype, and commercial activity has spurred interest in "benchmarks" for assessing their performance. Unsurprisingly, this has generated both a number of scientifically interesting ideas *and* a lot of confusion and kerfuffle. I will try to explain the state of play in this field -- known historically as "quantum characterization, verification, and validation (QCVV)" and more recently and generally as "quantum performance assessment" -- by briefly reviewing its history, explaining the different categories of benchmarks and characterization protocols, and identifying what they're good for. The overarching message of my talk will be that these are distinct tools in a diverse toolbox -- almost every known protocol and benchmark really measures a distinct and particular thing, and we probably need *more* of them, not fewer.
Cycle benchmarking is a new approach for scalable, complete and efficient error diagnostics that will be essential to understanding and improving quantum computing performance from the NISQ era to fault tolerance. Cycle benchmarking grew out of ideas from randomized benchmarking and exposes tomographic methods as impractical and obsolete. When combined with randomized compiling, cycle benchmarking can identify the full impact of errors and error correlations for any (parallel) gate combination of interest. I will show cycle benchmarking data from experimental implementations on multi-qubit superconducting and ion-trap quantum computers revealing that: (1) in leading platforms, cross-talk and other error correlations can be much more severe than expected, even many orders of magnitude larger than predicted by independent error models; (2) these cross-talk errors induce errors on other qubits (e.g., idling qubits) that are an order of magnitude larger than the errors on the qubits in the domain of the gate operation; and thus (3) the notion of "elementary gate error rates" is not adequate for assessing quantum computing operations, and cycle benchmarking provides the tool for an accurate assessment. I will then discuss how the aggregate error rates measured under cycle benchmarking can be applied to provide a practical bound on the accuracy of applications in what I call the "quantum discovery regime", where quantum solutions can no longer be checked via HPCs.
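For intuition about where such aggregate error rates come from, here is a minimal sketch (ours, with synthetic data standing in for experiment; not the speaker's code or numbers): under Pauli twirling, expectation values decay as A·f^m in the number m of repetitions of the cycle of interest, and fitting the decay yields a per-cycle error rate that is insensitive to state-preparation and measurement errors, which are absorbed into the prefactor A.

```python
# Sketch: extracting a per-cycle decay rate from (synthetic) expectation
# values measured at increasing numbers m of cycle repetitions.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(m, A, f):
    # A absorbs SPAM error; f is the per-cycle decay of the twirled channel.
    return A * f ** m

depths = np.array([2, 4, 8, 16, 32])
f_true, A_true = 0.985, 0.98
data = model(depths, A_true, f_true) + rng.normal(0, 0.005, depths.size)

(A_fit, f_fit), _ = curve_fit(model, depths, data, p0=[1.0, 0.99])
print(f"fitted per-cycle decay f = {f_fit:.4f} (true {f_true})")
print(f"inferred error per cycle ~ {1 - f_fit:.4f}")
```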
Recently, protocols based on statistical correlations of randomized measurements were developed for probing and verifying engineered quantum many-body systems. After a general introduction in the context of Rényi entropies, I focus in this talk on the cross-platform verification of quantum computers and simulators by means of fidelity measurements. I show how to measure the overlap between (reduced) density matrices, and thus a (mixed-state) fidelity of two quantum states prepared on separate experimental platforms. The protocol requires only local measurements in randomized product bases and classical communication between the devices. As a proof-of-principle, I present the measurement of experiment-theory fidelities for entangled 10-qubit quantum states in a trapped ion quantum simulator. To conclude, I will present further applications of randomized measurements for probing quantities beyond standard observables, such as out-of-time-ordered correlation functions.
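For reference, the overlap estimator behind this protocol (as published by Elben, Vermersch, and collaborators; the notation here is ours) reads, for two N-qubit states ρ₁, ρ₂ measured under the same random product unitaries U = u₁ ⊗ … ⊗ u_N:

```latex
\mathrm{Tr}(\rho_1 \rho_2)
  = 2^{N} \sum_{s,s'} (-2)^{-D[s,s']}\,
    \overline{P^{(1)}_U(s)\, P^{(2)}_U(s')} ,
```

where P⁽ⁱ⁾_U(s) is the probability of obtaining bitstring s on platform i after applying U, D[s,s'] is the Hamming distance between bitstrings, and the overline denotes the average over random unitaries. The mixed-state fidelity then follows by normalizing with the purities, F = Tr(ρ₁ρ₂)/max{Tr(ρ₁²), Tr(ρ₂²)}, each purity being obtainable from the same formula with ρ₁ = ρ₂.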
In this talk I revisit the canonical framework for general relativity in its connection-frame field formulation, exploiting its local holographic nature. I will show how we can understand the Gauss law, the Bianchi identity and the spatial diffeomorphism constraints as conservation laws for local surface charges: respectively, the electric flux, the dual magnetic flux, and momentum charges. Quantization of the surface charge algebra can be done in terms of Kac-Moody edge modes. This leads to an enhanced theory upgrading spin networks to tube networks carrying Virasoro representations. Taking a finite-dimensional truncation of this quantization yields states of quantum geometry, dubbed 'Poincaré charge networks', which carry a representation of the 3D diffeomorphism boundary charges on top of the SU(2) fluxes and gauge transformations. This opens the possibility of having, for the first time, a framework where spatial diffeomorphisms are represented at the quantum level. Moreover, our construction leads naturally to the picture that the relevant geometrical degrees of freedom live on boundaries and that their dynamics, and the fabric of quantum space itself, are encoded in their entanglement; it is designed to offer a new setting for studying the coarse-graining of gravity at both the classical and quantum levels.
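As a schematic pointer to how the first statement works (with our sign conventions and notation, not the speaker's), smearing the Gauss constraint with a Lie-algebra-valued function α and integrating by parts isolates a boundary term:

```latex
G[\alpha] \;=\; \int_{\Sigma} \alpha_i\, (d_A E)^i
          \;=\; \oint_{\partial\Sigma} \alpha_i\, E^i
          \;-\; \int_{\Sigma} (d_A \alpha)_i \wedge E^i ,
```

so on the constraint surface the electric flux charge Q_{∂Σ}[α] = ∮_{∂Σ} α_i E^i is determined entirely by the bulk integral; analogous manipulations of the Bianchi identity and the spatial diffeomorphism constraint produce the dual magnetic and momentum charges.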
In this talk, we will discuss the overall framework for combining many of the techniques established in previous talks of this workshop, including the quantum low-degree test, question reduction by introspection, and answer reduction by the PCP technique. Building upon these techniques, we will construct a recursive gap-preserving compression procedure for quantum two-prover interactive proofs in the normal form and use it to give a proof of MIP* = RE.
Continuing in the line of MIP* = RE talks, I will discuss two of the tools involved in the result, introspection and PCP composition, which are used to compress large MIP* protocols into small MIP* protocols. I will introduce these tools in the context of prior work with Anand Natarajan showing that MIP* contains NEEXP.
Joint work with Zhengfeng Ji, Anand Natarajan, Thomas Vidick, and Henry Yuen
One of the most exciting consequences of the recent MIP* = RE result by Ji, Natarajan, Vidick, Wright, and Yuen is the resolution of Connes' embedding problem (CEP). Although this problem started out as a casual question about embeddings of von Neumann algebras, it has gained prominence due to its many equivalent and independently interesting formulations in operator theory and beyond. In particular, MIP* = RE resolves the CEP by resolving Tsirelson's problem, an equivalent formulation of CEP involving quantum correlation sets.
In this expository talk, I'll try to explain the connection between MIP* = RE and Connes' original problem directly, using the synchronous algebras of Helton, Meyer, Paulsen, and Satriano. I'll also explain how one of the remaining open problems on the algebraic side, the existence of a non-hyperlinear group, is related to the study of variants of MIP* with lower descriptive complexity.
This talk will be aimed primarily at physicists and computer scientists, although hopefully there will be something for everyone.
The derandomization of MA, the probabilistic version of NP, is a long-standing open question. In this work, we connect this problem to a variant of another major problem: the quantum PCP conjecture. Our connection goes through the surprising quantum characterization of MA by Bravyi and Terhal: they proved the MA-completeness of the problem of deciding whether the ground energy of a uniform stoquastic local Hamiltonian is zero or inverse-polynomial. We show that the gapped version of this problem, i.e. deciding if a given uniform stoquastic local Hamiltonian is frustration-free or has energy at least some constant ϵ, is in NP. Thus, if there exists a gap-amplification procedure for uniform stoquastic local Hamiltonians (in analogy to the gap amplification procedure for constraint satisfaction problems in the original PCP theorem), then MA = NP (and vice versa). Furthermore, if this gap-amplification procedure exhibits some additional (natural) properties, then P = RP. We feel this work opens up a rich set of new directions to explore, which might lead to progress on both quantum PCP and derandomization.
Joint work with Dorit Aharonov.
As of late March 2020, Covid-19 has already secured its status among the most expansive pandemics of the last century. Covid-19 is caused by a coronavirus, SARS-CoV-2, that produces severe respiratory disease in a fraction of those infected and is typified by several important features: the ability to infect cells of various kinds, contagiousness prior to the onset of symptoms, and a widely varying experience of disease across patient demographics.
In this seminar, I discuss the many lessons that the scientific community has learned from Covid-19, including insight from molecular evolution, cell biology, and epidemiology. I discuss the role of mathematical and computational modeling efforts in understanding the trajectory of the epidemic, and highlight modern findings and potential research questions at the interface of virology and materials science. I will also introduce areas of inquiry that might be of interest to the physics community.