We comment on the recently introduced Gauss-Bonnet gravity in four dimensions. We argue that it does not make sense to consider this theory to be defined by a set of $D \to 4$ solutions of the higher-dimensional Gauss-Bonnet gravity. We show that a well-defined $D \to 4$ limit of Gauss-Bonnet gravity is obtained by generalizing a method employed by Mann and Ross to obtain a $D \to 2$ limit of Einstein gravity. The result is a scalar-tensor theory of the Horndeski type, obtained by dimensional reduction methods. By considering simple spacetimes beyond spherical symmetry (Taub-NUT spaces), we show that the naive limit of the higher-dimensional theory to four dimensions is not well defined, and we contrast the resulting metrics with the actual solutions of the new theory. The theory and its solutions in lower dimensions will also be briefly discussed.
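For orientation, the four-dimensional theory under discussion is obtained by rescaling the Gauss-Bonnet coupling before taking the limit; schematically (standard notation, not quoted from the talk),

$$S = \frac{1}{16\pi G}\int d^{D}x\,\sqrt{-g}\,\Big[R + \frac{\alpha}{D-4}\,\mathcal{G}\Big],\qquad \mathcal{G}=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma},$$

with the $D \to 4$ limit originally taken only at the level of particular solutions, which is precisely the step argued here to be ill defined.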
In this talk, I will present recent progress on the design and implementation of Homomorphic Encryption for Arithmetic of Approximate Numbers (HEAAN), which supports encrypted computation with errors.
I will also describe the advantages of HEAAN in practice as well as its bootstrapping method to achieve approximate fully homomorphic encryption.
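As a rough illustration of the approximate-arithmetic idea, here is a plaintext-level sketch of the fixed-point scaling that HEAAN-style encodings rely on (our own toy code; real HEAAN operates on encrypted polynomials, and the scaling factor below is an assumed parameter):

```python
# Minimal plaintext-level sketch of HEAAN/CKKS-style fixed-point scaling.
# Only the encode/multiply/rescale arithmetic is shown; in the real scheme
# these operations happen on encrypted polynomials and the rounding error
# is what makes the computation "approximate".

DELTA = 2**40  # scaling factor (assumed parameter)

def encode(x: float) -> int:
    """Encode a real number as a scaled integer; rounding adds a small error."""
    return round(x * DELTA)

def decode(m: int) -> float:
    return m / DELTA

def mul_and_rescale(m1: int, m2: int) -> int:
    """Multiply two encodings (scale DELTA^2), then rescale back to DELTA."""
    return (m1 * m2 + DELTA // 2) // DELTA

a, b = encode(3.14159), encode(2.71828)
prod = mul_and_rescale(a, b)
print(decode(prod))  # approximately 3.14159 * 2.71828, up to rounding error
```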
The talk will be split into two parts: the first will be a general introduction to Fully Homomorphic Encryption (FHE), while the second will focus on the description of the TFHE scheme (https://eprint.iacr.org/2018/421.pdf).
In this talk, I will present an extension of the Ducas, Lyubashevsky and Prest instantiation of the Gentry, Peikert and Vaikuntanathan (GPV) framework. More precisely, I will describe a larger class of trapdoored NTRU lattices that can be used to extend the practical parameter sets for some cryptographic schemes. Indeed, as shown by NIST candidates such as Kyber or Dilithium, relying on module lattices and the relevant hard problems can allow for some meaningful trade-offs between security and efficiency.
I will explain the regime of parameters needed to generate "almost optimal" (in an asymptotic sense) yet practical trapdoored NTRU modules. In particular, I will discuss the notion of hardness underlying this instantiation, and highlight some new results giving strong support for the computational and decisional hardness assumptions behind these trapdoors. On the more practical side, I will briefly compare the performance of a new signature scheme relying on these trapdoors to that of some of the NIST Round 2 candidates.
Based on joint work with Chitchanok Chuengsatiansup, Thomas Prest, Damien Stehlé and Keita Xagawa.
NTRU lattices use bases with a compact representation as polynomials: elements of a lattice of dimension $2n$ are encoded as pairs of polynomials with integer coefficients, taken modulo a given irreducible polynomial of degree $n$, usually $X^n+1$ (with $n$ a power of two). The private key is the knowledge of two polynomials $(f,g)$ with small coefficients. Some cryptographic algorithms based on NTRU lattices, in particular NTRUSign and Falcon, require a complete basis, i.e. two extra polynomials $F$ and $G$, also with small coefficients, that fulfill the NTRU equation $fG - gF = q$ for a given small integer $q$. While generating $f$ and $g$ is simple, computing a matching $F$ and $G$ requires considerable CPU and RAM resources, making key pair generation impractical on small, constrained systems.
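To make the NTRU equation concrete, here is a toy check in the ring $\mathbb{Z}[X]/(X^n+1)$ (a minimal sketch of our own; parameters are hand-picked for illustration, whereas real Falcon keys use $n = 512$ or $1024$, $q = 12289$, and $(F,G)$ produced by key generation):

```python
# Toy check of the NTRU equation f*G - g*F = q in Z[X]/(X^n + 1).
# Coefficient lists are low-degree first; sizes here are illustrative only.

def polymul(a, b, n):
    """Multiply modulo X^n + 1 (X^n wraps around to -1)."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] += ai * bj
            else:
                res[i + j - n] -= ai * bj
    return res

n, q = 2, 7
f, g = [1, 2], [1, 1]        # private key polynomials
F, G = [14, -14], [7, -14]   # a matching completion, found by hand here
lhs = [x - y for x, y in zip(polymul(f, G, n), polymul(g, F, n))]
assert lhs == [q, 0], lhs    # f*G - g*F = q holds
```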
In this presentation, we will show how the NTRU equation can be solved much more efficiently by using the field norm over a tower of fields. We first show a novel method for computing the resultant of a small polynomial with the modulus $X^n+1$; we then draw on that method to solve the NTRU equation in a resource-efficient way. When applied to the Falcon signature scheme, this reduces the RAM cost of key pair generation by a factor of more than 100 (from about 3 MB down to less than 30 kB) and the CPU cost by a similar factor (from about 2 seconds to 21 milliseconds). This allows the whole of Falcon, including key pair generation, to be implemented on embedded systems with constant-time code (free of timing-based side channels).
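The field-norm recursion itself fits in a few lines. Below is a minimal sketch (our own illustration, not the actual Falcon code) computing $\mathrm{Res}(f, X^n+1)$ by repeatedly mapping $f$ to its field norm $N(f)$, where $N(f)(X^2) = f(X)\,f(-X)$, halving the degree at every step:

```python
# Sketch of the field-norm recursion for Res(f, X^n + 1), n a power of two:
# Res(f, X^n + 1) = Res(N(f), Y^(n/2) + 1), down to a degree-0 polynomial.

def polymul(a, b, n):  # multiplication modulo X^n + 1, as in the sketch above
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] += ai * bj
            else:
                res[i + j - n] -= ai * bj
    return res

def field_norm(f):
    """Map f in Z[X]/(X^n+1) to N(f) in Z[Y]/(Y^(n/2)+1), with Y = X^2."""
    n = len(f)
    f_neg = [c if i % 2 == 0 else -c for i, c in enumerate(f)]  # f(-X)
    prod = polymul(f, f_neg, n)  # only even-degree coefficients survive
    return prod[0::2]

def resultant(f):
    """Res(f, X^n + 1) via repeated field norms."""
    while len(f) > 1:
        f = field_norm(f)
    return f[0]

print(resultant([1, 2]))  # Res(1 + 2X, X^2 + 1) = 1^2 + 2^2 = 5
```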
The Fiat-Shamir transformation is used to build secure signatures in the random oracle model. Unfortunately, existing proof techniques are incapable of proving the security of Fiat-Shamir in the quantum setting. The problem stems from (1) the difficulty of quantum rewinding, and (2) the inability of current techniques to adaptively program random oracles in the quantum setting. In this work, we show how to overcome these limitations. As an application, we show that existing lattice signatures based on Fiat-Shamir are secure without any modifications.
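As a reminder of what the transformation does, here is a minimal classical sketch of Fiat-Shamir applied to Schnorr identification (toy parameters of our own choosing, not a production scheme); the hash call implementing the challenge is exactly the random-oracle step that is delicate to reprogram against quantum adversaries:

```python
# Minimal Fiat-Shamir/Schnorr signature sketch (toy parameters, no security
# claims). The challenge e = H(r, m) replaces the interactive verifier.
import hashlib, secrets

p = 2**255 - 19      # toy prime modulus (illustrative choice)
q = (p - 1) // 2     # exponents are reduced mod q; g^q = 1 since g is a square
g = 4                # toy group element

def H(*parts) -> int:
    h = hashlib.sha256()
    for part in parts:
        h.update(str(part).encode())
    return int.from_bytes(h.digest(), "big") % q

def keygen():
    x = secrets.randbelow(q)       # secret key
    return x, pow(g, x, p)         # (sk, pk)

def sign(x, m):
    k = secrets.randbelow(q)       # commitment randomness
    r = pow(g, k, p)               # commitment
    e = H(r, m)                    # Fiat-Shamir challenge (random oracle)
    s = (k + e * x) % q            # response
    return r, s

def verify(P, m, sig):
    r, s = sig
    e = H(r, m)
    return pow(g, s, p) == (r * pow(P, e, p)) % p

x, P = keygen()
assert verify(P, "hello", sign(x, "hello"))
```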
Light dark photons are subject to various plasma effects, such as Debye screening and resonant oscillations, which can lead to a more complex cosmological evolution than is experienced by conventional cold dark matter candidates. Maintaining a consistent history of dark photon dark matter requires ensuring that the super-thermal abundance present in the early Universe (i) does not deviate significantly after the formation of the CMB, and (ii) does not excessively leak into the Standard Model plasma after BBN. In this talk, I will clarify the roles of resonant and non-resonant absorption and address some implications of plasma inhomogeneities on resonant transitions.
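For orientation (standard dark-photon phenomenology rather than a result specific to this talk): in a plasma the photon acquires an effective mass equal to the plasma frequency, so photon-dark photon conversion becomes resonant wherever that frequency crosses the dark photon mass,

$$\omega_p \simeq m_{A'},\qquad \omega_p^{2}=\frac{4\pi\alpha\,n_e}{m_e}\ \text{(natural units)},$$

with $n_e$ the free-electron density; inhomogeneities in $n_e$ therefore change where and how often these resonant transitions occur.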
We introduce a new technique called ‘Measure-Rewind-Measure’ (MRM) to achieve tighter security proofs in the quantum random oracle model (QROM). We first apply our MRM technique to derive a new security proof for a variant of the ‘double-sided’ quantum One-Way to Hiding Lemma (O2H) of Bindel et al. [TCC 2019] which, for the first time, avoids the square-root advantage loss in the security proof. In particular, it bypasses a previous ‘impossibility result’ of Jiang, Zhang and Ma [IACR eprint 2019]. We then apply our new O2H Lemma to give a new tighter security proof for the Fujisaki-Okamoto (FO) transform for constructing a strong (IND-CCA) Key Encapsulation Mechanism (KEM) from a weak (IND-CPA) public-key encryption scheme satisfying a mild injectivity assumption.
In the context of the NIST competition, the last three years have seen a great deal of research invested in the construction of public-key primitives that remain secure against active attacks even in the presence of quantum adversaries. All current NIST proposals follow the same approach: achieve active security by first constructing a weaker primitive, and then applying a variant of the Fujisaki-Okamoto transformation.
The Fujisaki-Okamoto transformation and its variants turn any scheme with a weak security level into a scheme with the desired active security level, in a generic way. All of its variants, however, are constructed relative to hash functions, and quantum attackers might interact with these hash functions in more sophisticated ways than classical attackers are capable of. This possibility is reflected in the security bounds that have been proven for quantum adversaries: they are less tight than in the classical setting.
In this context, a tight bound means that the derived scheme is as secure as the underlying building block, whereas less tight results relate the derived scheme's security to that of the building block in a less immediate manner. To still achieve a sufficient level of security for the derived scheme, the underlying primitive's level of security would have to be scaled up, leading to less efficient schemes. Gradual progress towards tighter security bounds has been made within the last years, but it comes at the price of additional restrictions on the weaker building block. This talk offers a survey of what is known about how directly active security can be derived from the weaker building block when the attacker is assumed to be quantum.
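For concreteness, here is a minimal sketch of one common FO-style variant (derandomization plus a re-encryption check); the toy "PKE" and all names are our own illustration, not any specific NIST candidate:

```python
# Sketch of an FO-style transform turning an IND-CPA PKE into a KEM via
# derandomization and a re-encryption check. G and H model random oracles.
# The "PKE" below is a do-nothing toy (pk == sk) just to run the flow.
import hashlib, secrets

def G(m): return hashlib.sha256(b"G" + m).digest()          # enc. randomness
def H(m, c): return hashlib.sha256(b"H" + m + c).digest()   # session key

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))
def prf(k, r): return hashlib.sha256(k + r).digest()

def toy_enc(pk, m, r):            # deterministic once r is fixed
    return r + xor(m, prf(pk, r))

def toy_dec(sk, c):
    return xor(c[32:], prf(sk, c[:32]))

def encaps(pk):
    m = secrets.token_bytes(32)
    c = toy_enc(pk, m, G(m))      # derandomized: randomness is G(m)
    return c, H(m, c)

def decaps(sk, pk, c):
    m = toy_dec(sk, c)
    if toy_enc(pk, m, G(m)) != c:  # re-encryption check
        return None                # real schemes use implicit rejection
    return H(m, c)

key = secrets.token_bytes(32)      # toy: pk and sk are the same key
c, k = encaps(key)
assert decaps(key, key, c) == k
```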
https://eprint.iacr.org/2020/292
Dana Dachman-Soled, Léo Ducas, Huijing Gong and Mélissa Rossi
We propose a framework for the cryptanalysis of lattice-based schemes when side information, in the form of "hints" about the secret and/or error, is available. Our framework generalizes the so-called primal lattice reduction attack and allows the progressive integration of hints before running a final lattice reduction step. Our techniques for integrating hints include sparsifying the lattice, projecting onto and intersecting with hyperplanes, and altering the distribution of the secret vector. Our main contribution is a toolbox and a methodology for integrating such hints into lattice reduction attacks and for predicting the performance of those attacks with side information.
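As one illustration of the flavor of these techniques, a "perfect hint" $\langle v, s\rangle = \ell$ on a secret modeled as a Gaussian can be integrated by standard conditioning (a minimal sketch in the spirit of the paper's toolbox; variable names are ours, and the actual Sage toolkit also tracks the lattice and volume bookkeeping):

```python
# Integrating a "perfect hint" <v, s> = l into a Gaussian model of the
# secret (mean mu, covariance Sigma) by conditioning on the hyperplane.
import numpy as np

def integrate_perfect_hint(mu, Sigma, v, l):
    """Condition N(mu, Sigma) on the linear equation <v, s> = l."""
    Sv = Sigma @ v
    d = v @ Sv                              # scalar v^T Sigma v
    mu2 = mu + (l - v @ mu) / d * Sv        # shift the mean onto the hyperplane
    Sigma2 = Sigma - np.outer(Sv, Sv) / d   # remove uncertainty along v
    return mu2, Sigma2

# Toy example: 3-dim secret, hint s[0] + s[1] = 1.
mu, Sigma = np.zeros(3), np.eye(3)
v = np.array([1.0, 1.0, 0.0])
mu2, Sigma2 = integrate_perfect_hint(mu, Sigma, v, 1.0)
print(mu2)         # [0.5, 0.5, 0.0] -- the hint shifts the mean
print(Sigma2 @ v)  # ~0 vector: no remaining uncertainty along v
```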
While initially designed for side-channel information, our framework can also be used in other contexts: exploiting decryption failures, or simply exploiting constraints imposed by certain schemes (LAC, Round5, NTRU) that were previously not known to (slightly) improve lattice attacks.
We implement a Sage 9.0 toolkit to actually mount such attacks with hints when computationally feasible, and to predict their performance on larger instances. We provide several end-to-end application examples, such as an improvement of the single-trace attack on Frodo by Bos et al. (SAC 2018). In contrast to ad hoc practical attacks exploiting side-channel leakage, our work offers a generic way to estimate the security loss even given very little side-channel information.