CCA encryption in the QROM II
Ron Steinfeld, Monash University
We introduce a new technique called ‘Measure-Rewind-Measure’ (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the ‘double-sided’ quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous ‘impossibility result’ of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto (FO) transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
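To make the object of study concrete, here is a minimal runnable sketch of the generic FO shape (derandomize, encrypt, re-encrypt to check) in the implicit-rejection style. The PKE below is a toy, insecure placeholder standing in for a real IND-CPA scheme; nothing here reproduces the MRM-based proof or any particular NIST candidate.

```python
# Minimal sketch of the FO transform's wiring: an IND-CCA-style KEM from a
# public-key encryption scheme plus hashes G and H. The "PKE" here is a toy,
# INSECURE placeholder, included only so the sketch runs end to end.
import hashlib, os

def G(m): return hashlib.sha3_256(b"G" + m).digest()    # randomness from m
def H(m, c): return hashlib.sha3_256(b"H" + m + c).digest()  # session key

# --- toy placeholder PKE: deterministic given explicit randomness r ---
def pke_keygen():
    sk = os.urandom(32)
    return hashlib.sha3_256(sk).digest(), sk            # (pk, sk)

def pke_enc(pk, m, r):
    pad = hashlib.sha3_256(pk + r).digest()
    return r + bytes(x ^ y for x, y in zip(m, pad))

def pke_dec(sk, c):
    pk = hashlib.sha3_256(sk).digest()
    r, body = c[:32], c[32:]
    pad = hashlib.sha3_256(pk + r).digest()
    return bytes(x ^ y for x, y in zip(body, pad))

# --- the FO wiring itself ---
def encaps(pk):
    m = os.urandom(32)
    c = pke_enc(pk, m, G(m))          # derandomized encryption
    return c, H(m, c)                 # (ciphertext, session key)

def decaps(sk, pk, c, s=b"\x00" * 32):
    m = pke_dec(sk, c)
    if pke_enc(pk, m, G(m)) == c:     # re-encryption check
        return H(m, c)
    return H(s, c)                    # implicit rejection: pseudorandom key

pk, sk = pke_keygen()
c, K = encaps(pk)
assert decaps(sk, pk, c) == K
```

The re-encryption check is where the mild injectivity assumption mentioned in the abstract enters: decryption followed by deterministic re-encryption must pin down a unique valid ciphertext.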
CCA encryption in the QROM I
Kathrin Hövelmanns, RUB
In the context of the NIST competition, the last three years have seen a great deal of research invested in the construction of public-key primitives that remain actively secure even in the presence of quantum adversaries. All current NIST proposals follow the approach of achieving active security by first constructing a weaker primitive and then applying a variant of the Fujisaki-Okamoto transformation. The Fujisaki-Okamoto transformation and its variants turn any scheme with a weak security level into a scheme with the desired active security level, in a generic way. All of its variants, however, are constructed relative to hash functions, and quantum attackers might interact with these hash functions in a more sophisticated way than classical attackers are capable of. This possibility is reflected in the security bounds that have been proven for quantum adversaries: they are less tight than in the classical setting. In this context, a tight bound means that the derived scheme is as secure as the underlying building block, whereas less tight results relate the derived scheme's security to the weaker building block in a less immediate manner. To still achieve a sufficient level of security for the derived scheme, the underlying primitive's level of security would have to be scaled up, leading to less efficient schemes. Gradual progress towards tighter security bounds has been made within the last years, but it comes at the price of additional restrictions on the weaker building block. This talk offers a survey of what is known about how directly active security can be derived from the weaker building block when assuming quantum attackers.
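Schematically, and glossing over constants and the precise theorem statements (which vary across the works surveyed), the tightness gap the talk refers to looks like this, with q the number of (quantum) random-oracle queries:

```latex
% Schematic comparison only: exact statements and constants differ
% across the surveyed QROM results for FO-type transforms.
\[
  \mathrm{Adv}^{\mathrm{IND\text{-}CCA}}_{\mathrm{KEM}}(\mathcal{A})
    \;\le\; q \cdot \sqrt{\mathrm{Adv}^{\mathrm{IND\text{-}CPA}}_{\mathrm{PKE}}(\mathcal{B})}
  \quad\text{(square-root loss)}
  \qquad\text{vs.}\qquad
  \mathrm{Adv}^{\mathrm{IND\text{-}CCA}}_{\mathrm{KEM}}(\mathcal{A})
    \;\le\; c \cdot \mathrm{Adv}^{\mathrm{IND\text{-}CPA}}_{\mathrm{PKE}}(\mathcal{B})
  \quad\text{(tight)}
\]
```

Under the square-root bound, pushing the KEM to advantage $2^{-\lambda}$ forces the PKE to be instantiated at advantage roughly $2^{-2\lambda}$; this is the scaling-up and efficiency loss the abstract refers to.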
LWE with Side Information: Attacks and Concrete Security Estimation
Léo Ducas, CWI (joint work with Dana Dachman-Soled, Huijing Gong, and Mélissa Rossi; https://eprint.iacr.org/2020/292)
Abstract: We propose a framework for cryptanalysis of lattice-based schemes when side information, in the form of "hints" about the secret and/or error, is available. Our framework generalizes the so-called primal lattice reduction attack and allows the progressive integration of hints before running a final lattice reduction step. Our techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and/or altering the distribution of the secret vector. Our main contribution is to propose a toolbox and a methodology to integrate such hints into lattice reduction attacks and to predict the performance of those lattice attacks with side information. While initially designed for side-channel information, our framework can also be used in other cases: exploiting decryption failures, or simply exploiting constraints imposed by certain schemes (LAC, Round5, NTRU) that were previously not known to (slightly) benefit from lattice attacks. We implement a Sage 9.0 toolkit to actually mount such attacks with hints when computationally feasible, and to predict their performance on larger instances. We provide several end-to-end application examples, such as an improvement of a single-trace attack on Frodo by Bos et al. (SAC 2018). Contrary to ad-hoc practical attacks exploiting side-channel leakage, our work is a generic way to estimate security loss even given very little side-channel information.
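As a toy illustration of the geometry behind one hint type, the numpy sketch below (with a hypothetical helper apply_perfect_hint, not part of the authors' Sage toolkit) shows how a "perfect hint" <v, s> = l confines the secret to an affine hyperplane, so the search lattice can be projected onto it:

```python
# Toy illustration of integrating a perfect hint <v, s> = l: project the
# search lattice onto the hyperplane <v, x> = 0 and keep the affine shift.
# Simplified sketch only; the real toolkit tracks full attack instances
# (including secret covariance) and supports several other hint types.
import numpy as np

def apply_perfect_hint(B, v, l):
    """Rows of B span the search space; returns the basis projected
    orthogonally to v, plus a point satisfying <v, x> = l."""
    v = np.asarray(v, dtype=float)
    shift = (l / v.dot(v)) * v                         # <v, shift> = l
    P = np.eye(len(v)) - np.outer(v, v) / v.dot(v)     # orthogonal projector
    return B @ P, shift

# Example: 3-dimensional search space, hint <(1, 1, 0), s> = 2.
B = np.eye(3)
B_proj, t = apply_perfect_hint(B, [1, 1, 0], 2)
s_candidate = t + B_proj[2]           # any point of the shifted projection
assert abs(np.dot([1, 1, 0], s_candidate) - 2) < 1e-9
```

Each such hint removes one dimension from the search space, which is why progressively integrating hints before the final lattice reduction step cheapens the attack.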
New Techniques for Practical Lattice-Based Zero-Knowledge
Gregor Seiler, IBM Research - Zurich and ETH
No abstract available.
Signatures, Commitments, Zero-Knowledge, and Applications
Vadim Lyubashevsky, IBM Research - Zurich
No abstract available.
Efficient Learning of Pauli Channels
Steve Flammia (University of Sydney)
Pauli channels are ubiquitous in quantum information, both as a dominant noise source in many computing architectures and as a practical model for analyzing error correction and fault tolerance. Here we prove several results on efficiently learning Pauli channels, and more generally the Pauli projection of a quantum channel. We first derive a procedure for learning a Pauli channel on n qubits to a fixed relative precision with O(n 2^n) measurements. For a Pauli channel with only s nonzero error rates, we show how to learn it with only O(s n) measurements. Finally, we show that when the Pauli channel is given by a Markov field with at most k-local correlations, we can learn an entire n-qubit Pauli channel with only O(n^2 log n) measurements, which is efficient in the number of qubits. These results enable a host of applications beyond just characterizing noise in a large-scale quantum system: they pave the way to tailoring quantum codes, optimizing decoders, and customizing fault tolerance procedures to suit a particular device. Joint work with Robin Harper, Joel Wallman, and Wenjun Yu, arXiv:1907.13022, arXiv:1907.12976.
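For intuition, the identity that such learning procedures invert is a Walsh-Hadamard-type transform between Pauli error rates and Pauli fidelities (eigenvalues), with signs given by commutation. The brute-force toy below makes the relation explicit for tiny n; the talk's contribution is precisely avoiding this exponential enumeration by exploiting sparsity or Markov-field structure:

```python
# Toy sketch (not the paper's algorithm): Pauli fidelities lam_b and error
# rates p_a satisfy lam_b = sum_a p_a * (-1)^{<a,b>}, where <a,b> is the
# symplectic form (0 if Paulis a, b commute, 1 if they anticommute).
# Inverting is the same transform with a 4^{-n} normalization.
import itertools
import numpy as np

def symplectic(a, b, n):
    # Paulis encoded as 2n-bit tuples (x-part, z-part).
    return (np.dot(a[:n], b[n:]) + np.dot(a[n:], b[:n])) % 2

def rates_from_fidelities(lam, n):
    paulis = [np.array(t) for t in itertools.product([0, 1], repeat=2 * n)]
    return np.array([sum(lam[j] * (-1) ** symplectic(a, b, n)
                         for j, b in enumerate(paulis)) / 4 ** n
                     for a in paulis])

# Round trip on one qubit: a 10% X-error channel.
# Pauli order for n=1: (0,0)=I, (0,1)=Z, (1,0)=X, (1,1)=Y.
p = np.array([0.9, 0.0, 0.1, 0.0])
paulis = [np.array(t) for t in itertools.product([0, 1], repeat=2)]
lam = np.array([sum(p[i] * (-1) ** symplectic(a, b, 1)
                    for i, a in enumerate(paulis)) for b in paulis])
assert np.allclose(rates_from_fidelities(lam, 1), p)
```

Each fidelity lam_b is directly measurable by preparing and measuring eigenstates of Pauli b, which is why estimating a structured subset of them suffices when the rate vector is sparse or locally correlated.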
Not All Benchmarks Are Created Equal
Robin Blume-Kohout (Sandia National Labs)
Testbed-class quantum computers -- fully programmable 5-50 qubit systems -- have burst onto the scene in the past few years. The associated surge in funding, hype, and commercial activity has spurred interest in "benchmarks" for assessing their performance. Unsurprisingly, this has generated both a number of scientifically interesting ideas *and* a lot of confusion and kerfuffle. I will try to explain the state of play in this field -- known historically as "quantum characterization, verification, and validation (QCVV)" and more recently and generally as "quantum performance assessment" -- by briefly reviewing its history, explaining the different categories of benchmarks and characterization protocols, and identifying what they're good for. The overarching message of my talk will be that these are distinct tools in a diverse toolbox -- almost every known protocol and benchmark really measures a distinct and particular thing, and we probably need *more* of them, not fewer.
Cycle Benchmarking: The New Paradigm for Assessing All Relevant Errors and Error Correlations Under Specific Quantum Operation on Large-Scale Quantum Computers
Joseph Emerson (University of Waterloo)
Cycle benchmarking is a new approach for scalable, complete and efficient error diagnostics that will be essential to understanding and improving quantum computing performance from the NISQ era to fault-tolerance. Cycle benchmarking is born from ideas of randomized benchmarking and exposes tomographic methods as impractical and obsolete. When combined with randomized compiling, cycle benchmarking can identify the full impact of errors and error correlations for any (parallel) gate-combination of interest. I will show cycle benchmarking data from experimental implementations on multi-qubit superconducting and ion trap quantum computers revealing that: (1) in leading platforms, cross-talk and other error correlations can be much more severe than expected, even many orders of magnitude larger than expected based on independent error models; (2) these cross-talk errors induce errors on other qubits (e.g., idling qubits) that are an order of magnitude larger than the errors on the qubits in the domain of the gate operation; and thus (3) the notion of "elementary gate error rates" is not adequate for assessing quantum computing operations, and cycle benchmarking provides the tool for an accurate assessment. I will then discuss how the aggregate error rates measured under cycle benchmarking can be applied to provide a practical bound on the accuracy of applications in what I call the "quantum discovery regime", where quantum solutions can no longer be checked via HPCs.
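The abstract stays high-level, but the readout of a cycle-benchmarking-style experiment reduces to fitting an exponential decay of Pauli expectation values against the number of repetitions of the cycle. The sketch below shows only that fitting step, on synthetic numbers invented for illustration; it is not the full randomized-compiling protocol:

```python
# Toy sketch of the final fitting step in a cycle-benchmarking-style run:
# after interleaving random Pauli layers with the cycle of interest, a Pauli
# expectation measured at several sequence lengths m decays as f(m) = A*p^m,
# and p (SPAM-free) characterizes the composite error of the cycle.
import numpy as np
from scipy.optimize import curve_fit

def decay(m, A, p):
    return A * p ** m

lengths = np.array([2, 4, 8, 16, 32])
signal = 0.98 * 0.995 ** lengths        # synthetic data: A=0.98, p=0.995
signal += np.random.default_rng(0).normal(0, 0.002, len(lengths))

(A, p), _ = curve_fit(decay, lengths, signal, p0=[1.0, 0.99])
print(f"estimated decay p = {p:.4f} -> composite error rate ~ {1 - p:.4f}")
```

Because state preparation and measurement errors are absorbed into A, the fitted p isolates the error of the repeated cycle itself, which is what makes the aggregate rates usable for the accuracy bounds mentioned at the end of the abstract.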
Cross-Platform Verification of Intermediate Scale Quantum Devices with Randomized Measurements
Andreas Elben (Austrian Academy of Sciences)
Recently, protocols based on statistical correlations of randomized measurements were developed for probing and verifying engineered quantum many-body systems. After a general introduction in the context of Rényi entropies, I focus in this talk on the cross-platform verification of quantum computers and simulators by means of fidelity measurements. I show how to measure the overlap between (reduced) density matrices, and thus a (mixed-state) fidelity of two quantum states prepared on separate experimental platforms. The protocol requires only local measurements in randomized product bases and classical communication between the devices. As a proof-of-principle, I present the measurement of experiment-theory fidelities for entangled 10-qubit quantum states in a trapped ion quantum simulator. To conclude, I will present further applications of randomized measurements for probing quantities beyond standard observables, such as out-of-time-ordered correlation functions.
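A hedged sketch of the estimator behind such protocols: with both devices measuring in the same random product basis, the overlap Tr(rho1 rho2) is recovered as an average, over random bases, of a Hamming-distance-weighted sum of bitstring probabilities. The helper below computes the single-basis term from exact probabilities; real experiments average over many random bases and finitely many shots, and normalize by the two purities to obtain the mixed-state fidelity:

```python
# Toy numpy sketch of the randomized-measurement overlap estimator:
# Tr(rho1 rho2) = 2^n * E_U[ sum_{s,s'} (-2)^{-D(s,s')} P1_U(s) P2_U(s') ],
# with D the Hamming distance and U a shared random product basis.
# This function returns the term for ONE basis U; only the average over
# many random bases is an unbiased estimate of the overlap.
import numpy as np
from itertools import product

def overlap_term_single_basis(P1, P2, n):
    """P1, P2: length-2^n arrays of bitstring probabilities measured on the
    two platforms under one shared random product basis."""
    strings = list(product([0, 1], repeat=n))
    term = 0.0
    for i, s in enumerate(strings):
        for j, t in enumerate(strings):
            D = sum(a != b for a, b in zip(s, t))
            term += (-2.0) ** (-D) * P1[i] * P2[j]
    return 2 ** n * term

# Sanity check: two copies of the maximally mixed qubit give purity 1/2
# in any basis (this state is basis-invariant, so no averaging is needed).
assert np.isclose(overlap_term_single_basis([0.5, 0.5], [0.5, 0.5], 1), 0.5)
```

Crucially, only classical bitstring statistics and the shared basis choices cross between the platforms, which is what makes the scheme usable between physically separate devices.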
MIP* = RE: Putting Everything Together
Zhengfeng Ji (University of Technology Sydney)
In this talk, we will discuss the overall framework for combining many of the techniques established in previous talks of this workshop, including the quantum low-degree test, question reduction by introspection, and answer reduction by the PCP technique. Building upon these techniques, we will construct a recursive gap-preserving compression procedure for quantum two-prover interactive proofs in the normal form and use it to give a proof of MIP* = RE.
MIP* = RE Part 2: PCPs and Introspection
John Wright (Caltech)
Continuing in the line of MIP* = RE talks, I will discuss two of the tools involved in the result, introspection and PCP composition, which are used to compress large MIP* protocols into small MIP* protocols. I will introduce these tools in the context of prior work with Anand Natarajan showing that MIP* contains NEEXP. Joint work with Zhengfeng Ji, Anand Natarajan, Thomas Vidick, and Henry Yuen.
The Algebraic Side of MIP* = RE
William Slofstra (University of Waterloo)
One of the most exciting consequences of the recent MIP* = RE result by Ji, Natarajan, Vidick, Wright, and Yuen is the resolution of Connes' embedding problem (CEP). Although this problem started out as a casual question about embeddings of von Neumann algebras, it has gained prominence due to its many equivalent and independently interesting formulations in operator theory and beyond. In particular, MIP* = RE resolves the CEP by resolving Tsirelson's problem, an equivalent formulation of CEP involving quantum correlation sets. In this expository talk, I'll try to explain the connection between MIP* = RE and Connes' original problem directly, using the synchronous algebras of Helton, Meyer, Paulsen, and Satriano. I'll also explain how one of the remaining open problems on the algebraic side, the existence of a non-hyperlinear group, is related to the study of variants of MIP* with lower descriptive complexity. This talk will be aimed primarily at physicists and computer scientists, although hopefully there will be something for everyone.