Programme

TENTATIVE PROGRAMME (subject to change)

18/03 (Monday) 
 
09:15-09:30: Welcome
09:30-10:30: Luca De Feo -- Isogeny graphs in cryptography (1)
11:00-12:00: Luca De Feo -- Isogeny graphs in cryptography (2)
14:00-17:00: Damien Vergnaud -- Introduction to public-key cryptography
18:00-19:00: Luca De Feo -- Isogeny graphs in cryptography (3)

19/03 (Tuesday)
08:45-10:15: Damien Stehlé -- Introduction to lattice-based cryptography
10:45-12:15: Pierrick Gaudry -- Analysis of discrete logarithm algorithms
17:30-18:30: Chloe Martindale -- Cryptographic applications of isogeny graphs of genus 2 and 3 curves
18:45-19:45: Zvika Brakerski -- Worst-case hardness for LPN and cryptographic hashing via code smoothing

20/03 (Wednesday)
08:45-10:15: Léo Ducas -- Lattice algorithms
10:45-12:15: Pierrick Gaudry -- Analysis of discrete logarithm algorithms
17:30-18:30: Christophe Petit -- Isogeny-based cryptography: cryptanalysis results
18:45-19:45: Benjamin Wesolowski -- The discrete logarithm problem in finite fields of small characteristic
 
21/03 (Thursday)
08:45-10:15: Damien Stehlé -- Introduction to lattice-based cryptography
10:45-12:15: Léo Ducas -- Lattice algorithms 
17:30-18:30: Thijs Laarhoven -- SVP algorithms
 
22/03 (Friday) 
09:30-10:30: Wouter Castryck -- Structured variants of LWE
11:00-12:00: Shweta Agrawal -- Mathematical assumptions underlying code obfuscation

 

ABSTRACTS

Analysis of discrete logarithm algorithms: arithmetic, analytic and geometric tools (mini-course, Pierrick Gaudry).

The discrete logarithm problem in finite fields is an important algorithmic question upon which the security of many cryptographic constructions relies. All known efficient methods rely, to varying degrees, on the notion of smoothness of integers and polynomials and on the idea of combining congruences. Unfortunately, the best estimates for the running times are currently unproven. After recalling the general setting, we will focus on a few selected topics. The emphasis is put on situations where mathematical tools are involved in current attempts to prove that the algorithms indeed behave as expected.
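For illustration, here is a minimal sketch (in Python, with made-up toy parameters p = 1019, g = 2 and a tiny factor base; not one of the algorithms analysed in the course) of the "combining congruences" idea: random powers of the generator that happen to be smooth each yield a linear congruence among the unknown logarithms of the factor-base primes, and enough such relations can then be combined by linear algebra modulo p-1 (omitted here).

    # Toy relation-collection phase of index calculus in GF(p)^*.
    # Each B-smooth value of g^k mod p gives a congruence
    #   k = sum_i e_i * log_g(p_i)  (mod p-1)
    # among the logs of the factor-base primes.
    import random

    def factor_over_base(n, base):
        """Return the exponent vector of n over `base`, or None if n is not smooth."""
        exps = []
        for prime in base:
            e = 0
            while n % prime == 0:
                n //= prime
                e += 1
            exps.append(e)
        return exps if n == 1 else None

    def collect_relations(p, g, base, count):
        """Collect `count` congruences k = sum e_i*log_g(base_i) (mod p-1)."""
        relations = []
        while len(relations) < count:
            k = random.randrange(1, p - 1)
            exps = factor_over_base(pow(g, k, p), base)
            if exps is not None:
                relations.append((k, exps))
        return relations

    if __name__ == "__main__":
        p, g = 1019, 2                      # toy prime and generator
        base = [2, 3, 5, 7, 11, 13]         # factor base of small primes
        for k, exps in collect_relations(p, g, base, 8):
            terms = " + ".join(f"{e}*log({pr})" for pr, e in zip(base, exps) if e)
            print(f"{k} = {terms or '0'}  (mod {p - 1})")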

Isogeny graphs in cryptography (mini-course, Luca De Feo).

Elliptic curves have been at the heart of public-key cryptography for more than 30 years. However, most of what can currently be achieved with elliptic curves will be rendered useless by the advent of large-scale quantum computers.
Isogeny graphs of elliptic curves were proposed as a foundational tool for public-key cryptography some 20 years ago, but have only come to the forefront in the last 10 years, as a credible candidate for post-quantum key exchange and encryption.
Recent trends in research on isogeny-based cryptography include post-quantum signatures, alternative security assumptions, better quantum cryptanalyses, and new (non-post-quantum) primitives with applications to cryptocurrencies.
In this course I will present the basic theory of isogeny graphs, illustrate their algebraic properties, and introduce the simpler cryptosystems.
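As a small illustration of the kind of object studied in the course, the following sketch (toy prime and brute-force search; not a method discussed in the lectures) builds the graph of 2-isogenies between j-invariants in GF(p), using the fact that the classical modular polynomial Phi_2(X, Y) vanishes exactly on pairs of 2-isogenous j-invariants (up to the usual caveats at j = 0, j = 1728 and very small p).

    def phi2(x, y, p):
        """Classical modular polynomial of level 2, evaluated mod p."""
        return (x**3 + y**3 - x**2 * y**2
                + 1488 * (x**2 * y + x * y**2)
                - 162000 * (x**2 + y**2)
                + 40773375 * x * y
                + 8748000000 * (x + y)
                - 157464000000000) % p

    def isogeny_graph(p):
        """Adjacency lists of the 2-isogeny graph on j-invariants in GF(p)."""
        edges = {j: [] for j in range(p)}
        for j1 in range(p):
            for j2 in range(p):
                if phi2(j1, j2, p) == 0:
                    edges[j1].append(j2)
        return edges

    if __name__ == "__main__":
        graph = isogeny_graph(83)          # small toy prime
        for j, nbrs in graph.items():
            if nbrs:
                print(f"j = {j:3d} is 2-isogenous to {nbrs}")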

Introduction to lattice-based cryptography (mini-course, Damien Stehlé).

Lattice-based cryptography is one of the most promising approaches to obtaining efficient cryptographic primitives conjectured to resist quantum attacks. For instance, in the second round of the NIST standardization process for post-quantum cryptography, 12 out of the 26 remaining candidates rely on lattices. In the area of lattice-based cryptography, the Learning With Errors problem (LWE), introduced by Regev in 2005, plays a central role.
In this course, I will present the Learning With Errors problem and its connection to Euclidean lattices. We will see how to construct a public-key encryption scheme and a digital signature scheme, with security relying on the LWE hardness assumption. I will also introduce variants of LWE over algebraic number fields, and the NTRU assumption. These allow for more efficient cryptographic constructions.
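As a pointer to the kind of construction covered in the course, here is a minimal sketch (toy parameters with no concrete security; the course may present a different formulation) of Regev-style public-key encryption of a single bit from LWE: the public key is (A, b = As + e mod q), a ciphertext is (u, v) = (A^T r, b^T r + m*floor(q/2)), and decryption recovers m from v - <s, u>, which equals m*floor(q/2) plus small noise.

    import numpy as np

    q, n, nsamples = 8192, 8, 40            # toy modulus, dimension, number of samples

    def keygen(rng):
        A = rng.integers(0, q, size=(nsamples, n))
        s = rng.integers(0, q, size=n)
        e = rng.integers(-2, 3, size=nsamples)        # small error
        b = (A @ s + e) % q
        return (A, b), s

    def encrypt(pk, bit, rng):
        A, b = pk
        r = rng.integers(0, 2, size=nsamples)         # random binary combination
        u = (A.T @ r) % q
        v = (b @ r + bit * (q // 2)) % q
        return u, v

    def decrypt(sk, ct):
        u, v = ct
        d = (v - sk @ u) % q                          # = bit*floor(q/2) + small noise
        return int(min(d, q - d) > q // 4)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pk, sk = keygen(rng)
        for bit in (0, 1):
            assert decrypt(sk, encrypt(pk, bit, rng)) == bit
        print("toy Regev encryption round-trips on both bits")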

Reduction algorithms for lattices and ideal lattices (mini-course, Léo Ducas).

Lattice reduction is the task of finding a short basis of a lattice. It is an important tool in computational number theory and in cryptanalysis. In the first part of this course, I will review the theory of lattice reduction (Hermite, Minkowski, Lagrange) and present the associated algorithms LLL and BKZ. The second part of the course focuses on the special case of lattices arising as ideals of cyclotomic number fields. I will present recent algorithms that outperform generic ones in these special cases.
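As a taste of the first part, the following sketch implements Lagrange's reduction algorithm for two-dimensional lattices, the dimension-2 ancestor of LLL (toy integer inputs; the floating-point rounding used here is fine at this scale but not for large entries).

    def lagrange_reduce(u, v):
        """Return a reduced basis of the 2-dimensional integer lattice spanned by u, v."""
        norm = lambda w: w[0] * w[0] + w[1] * w[1]
        dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
        while True:
            if norm(v) < norm(u):
                u, v = v, u                      # keep u the shorter vector
            m = round(dot(u, v) / norm(u))       # integer minimizing |v - m*u|
            if m == 0:
                return u, v                      # |<u,v>| <= |u|^2 / 2: basis is reduced
            v = (v[0] - m * u[0], v[1] - m * u[1])

    if __name__ == "__main__":
        print(lagrange_reduce((1, 0), (1000000, 1)))   # skewed basis of Z^2
        print(lagrange_reduce((101, 33), (57, 19)))    # arbitrary toy basis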

Mathematical assumptions underlying code obfuscation (Shweta Agrawal).

In recent times, there has been significant interest in constructing the cryptographic primitive of "indistinguishability obfuscation". Standard cryptographic hardness assumptions appear insufficient for this task, and we now have a variety of new mathematical conjectures to fill the gap. I will define the notion of indistinguishability obfuscation, briefly describe its importance, and discuss the new mathematical conjectures, hard distributions, known attacks and open problems.
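For orientation, a standard formulation of the notion (the talk may use a different but equivalent one):

    An efficient algorithm $iO$ is an indistinguishability obfuscator for a class of
    circuits if (i) functionality: $iO(C)(x) = C(x)$ for every circuit $C$ in the class
    and every input $x$, and (ii) indistinguishability: for all equal-size circuits
    $C_0, C_1$ in the class with $C_0(x) = C_1(x)$ for all inputs $x$, the distributions
    $iO(C_0)$ and $iO(C_1)$ are computationally indistinguishable.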

Cryptographic applications of isogeny graphs of genus 2 and 3 curves (Chloe Martindale).

We will introduce some background theory on genus 2 and 3 curves and look at the structures of the isogeny graphs that occur for some cases of these curves. We will show how knowledge of the graph structure can be used for both pre- and post-quantum cryptographic applications. This will cover some current research topics as well as some open questions in the area.

The discrete logarithm problem in finite fields of small characteristic (Benjamin Wesolowski). 

The presumed difficulty of computing discrete logarithms in certain groups is essential for the security of a number of communication protocols deployed today. One of the most classic choices for the underlying group is the multiplicative group of a finite field. However, when the characteristic of the field is small, recent algorithms make it possible to compute logarithms efficiently. These methods are only heuristic: they seem to always work, yet we do not know how to prove it. We will discuss the gap between the fastest provable and heuristic algorithms, and recent directions towards a fast provable algorithm.

Structured variants of LWE (Wouter Castryck).

The celebrated LWE problem is about solving perturbed linear systems of equations over finite fields. During the last decade, several variants of this problem have been proposed in which the linear part carries extra structure, the main goal being to make the cryptographic applications faster and more compact without (conjecturally) compromising security. Along with these proposals came various suggestions for the error distribution from which the perturbation is to be sampled. We will give an overview of the existing "structured" variants of LWE, discuss their similarities and differences, and point out some bad choices that can be made, which do affect security.
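To make the notion of "extra structure" concrete, here is a minimal sketch (toy parameters, and a crude uniform "small" error rather than the carefully chosen distributions the talk discusses) of a Ring-LWE sample over Z_q[x]/(x^n + 1): a single ring element a expands to an n x n negacyclic matrix, so one structured sample packs n correlated ordinary LWE samples.

    import numpy as np

    n, q = 8, 257                            # toy ring degree and modulus

    def negacyclic_matrix(a):
        """Matrix of multiplication by a(x) in Z_q[x]/(x^n + 1)."""
        M = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(n):
                k = i + j                    # x^(i+j) reduces to -x^(i+j-n) when i+j >= n
                M[k % n, j] = (a[i] if k < n else -a[i]) % q
        return M

    def ring_lwe_sample(s, rng):
        """One structured sample (a, b = a*s + e) in Z_q[x]/(x^n + 1)."""
        a = rng.integers(0, q, size=n)
        e = rng.integers(-2, 3, size=n)              # small error coefficients
        b = (negacyclic_matrix(a) @ s + e) % q
        return a, b

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        s = rng.integers(-2, 3, size=n)              # small secret
        a, b = ring_lwe_sample(s, rng)
        print("a =", a)
        print("b =", b)
        print("rot(a) =\n", negacyclic_matrix(a))    # the structured matrix hidden in a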

Isogeny-based cryptography: cryptanalysis results (Christophe Petit).

In this talk we will survey known results on the security of isogeny-based protocols, focusing in particular on existing cryptanalysis approaches. We also aim to highlight which problems in this area are easy and which remain hard.

Worst-case hardness for LPN and cryptographic hashing via code smoothing (Zvika Brakerski).

We present a worst-case decoding problem whose hardness reduces to that of solving the Learning Parity with Noise (LPN) problem, in some parameter regime. Prior to this work, no worst-case hardness result was known for LPN (as opposed to syntactically similar problems such as Learning with Errors). The caveat is that this worst-case problem is only mildly hard and in particular admits a quasi-polynomial time algorithm, whereas the LPN variant used in the reduction requires an extremely high noise rate of $1/2 - 1/\mathrm{poly}(n)$. Thus we can only show that "very hard" LPN is harder than some "very mildly hard" worst-case problem. We note that LPN with noise $1/2 - 1/\mathrm{poly}(n)$ already implies symmetric cryptography.
 
Specifically, we consider the $(n,m,w)$-nearest codeword problem ($(n,m,w)$-NCP) which takes as input a generating matrix for a binary linear code in $m$ dimensions and rank $n$, and a target vector which is very close to the code (Hamming distance at most $w$), and asks to find the codeword nearest to the target vector. We show that for balanced (unbiased) codes and for relative error $w/m \approx \log^2(n)/n$, $(n,m,w)$-NCP can be solved given oracle access to an LPN distinguisher with noise ratio $1/2 - 1/\mathrm{poly}(n)$.
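For concreteness, a minimal sketch (toy sizes, plain random matrices rather than the balanced codes and the parameter regime of the abstract) of how an instance of the $(n,m,w)$-nearest codeword problem can be sampled with a planted solution:

    import numpy as np

    def sample_ncp_instance(n, m, w, rng):
        """Return (G, target, planted x) with target = x G + e over GF(2), wt(e) = w."""
        G = rng.integers(0, 2, size=(n, m))           # rank-n code in m dimensions (w.h.p.)
        x = rng.integers(0, 2, size=n)                # planted message
        e = np.zeros(m, dtype=int)
        e[rng.choice(m, size=w, replace=False)] = 1   # error of Hamming weight exactly w
        target = (x @ G + e) % 2
        return G, target, x

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        n, m, w = 16, 64, 3
        G, t, x = sample_ncp_instance(n, m, w, rng)
        # the planted codeword is within distance w of the target, as promised
        print("distance to planted codeword:", int(np.sum((t - x @ G) % 2)))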
 
Our proof relies on a smoothing lemma for codes which we show to have further implications: We show that $(n,m,w)$-NCP with the aforementioned parameters lies in the complexity class $SearchBPP^{SZK}$ (i.e. reducible to a problem that has a statistical zero knowledge protocol), implying that it is unlikely to be $NP$-hard. We then show that the hardness of LPN with very low noise rate $\log^2(n)/n$ implies the existence of collision-resistant hash functions (our aforementioned result implies that in this parameter regime LPN is also in $BPP^{SZK}$).
 
Joint work with Vadim Lyubashevsky, Vinod Vaikuntanathan and Daniel Wichs.

Finding a Nash Equilibrium is No Easier than Breaking Fiat-Shamir (Alon Rosen).

The Fiat-Shamir heuristic transforms a public-coin interactive proof into a non-interactive argument, by replacing the verifier with a cryptographic hash function that is applied to the protocol's transcript. Constructing hash functions for which this transformation is sound is a central and long-standing open question in cryptography.
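To fix ideas, here is the transformation illustrated on the much simpler Schnorr identification protocol (toy group parameters and SHA-256 as the hash; the talk applies Fiat-Shamir to the sumcheck protocol, not to Schnorr): the verifier's random challenge is replaced by a hash of the transcript, yielding a non-interactive proof of knowledge of x with y = g^x mod p.

    import hashlib, random

    # toy safe-prime group: p = 2q + 1, g generates the subgroup of prime order q
    p, q, g = 1019, 509, 4

    def fiat_shamir_challenge(*parts):
        """Replace the verifier's random challenge by a hash of the transcript."""
        data = "|".join(str(x) for x in parts).encode()
        return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

    def prove(x):
        """Non-interactive proof of knowledge of x such that y = g^x mod p."""
        y = pow(g, x, p)
        k = random.randrange(q)             # prover's commitment randomness
        a = pow(g, k, p)                    # first message (commitment)
        c = fiat_shamir_challenge(g, y, a)  # challenge = hash of the transcript so far
        z = (k + c * x) % q                 # response
        return y, (a, z)

    def verify(y, proof):
        a, z = proof
        c = fiat_shamir_challenge(g, y, a)  # verifier recomputes the same challenge
        return pow(g, z, p) == (a * pow(y, c, p)) % p

    if __name__ == "__main__":
        y, proof = prove(x=123)
        print("proof verifies:", verify(y, proof))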

We show that solving the END-OF-METERED-LINE problem is no easier than breaking the soundness of the Fiat-Shamir transformation when applied to the sumcheck protocol. In particular, if the transformed protocol is sound, then any hard problem in #P gives rise to a hard distribution in the class CLS, which is contained in PPAD. Our result opens up the possibility of sampling moderately-sized games for which it is hard to find a Nash equilibrium, by reducing the inversion of appropriately chosen one-way functions to #SAT.

Our main technical contribution is a stateful incrementally verifiable procedure that, given a SAT instance over n variables, counts the number of satisfying assignments. This is accomplished via an exponential sequence of small steps, each computable in time poly(n). Incremental verifiability means that each intermediate state includes a sumcheck-based proof of its correctness, and the proof can be updated and verified in time poly(n).

Joint work with Arka Rai Choudhuri, Pavel Hubacek, Chethan Kamath, Krzysztof Pietrzak, and Guy Rothblum.

Evolutionary lattice algorithms (Thijs Laarhoven).

The security of lattice-based cryptography relies on the hardness of lattice problems such as the shortest vector problem, and on our inability to solve these problems efficiently with state-of-the-art lattice algorithms. To accurately assess the security of these schemes, it is important to understand the underlying cryptanalytic algorithms well, and to see how they relate to methods from other research areas. In this talk we will look at lattice problems from an Artificial Intelligence point of view, and discuss how various techniques and tools from evolutionary algorithms naturally appear in the current best methods for solving lattice problems in practice.
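As a minimal illustration of the flavour of such methods (a toy heuristic with no performance or correctness guarantees, not one of the algorithms covered in the talk): keep a population of lattice vectors and repeatedly replace the longer of a random pair by their sum or difference whenever that is shorter, in the spirit of sieving and evolutionary search.

    import numpy as np

    def short_vector_heuristic(B, pop_size=60, rounds=200, seed=0):
        """Search for a short nonzero vector in the lattice spanned by the rows of B."""
        rng = np.random.default_rng(seed)
        # population of random small integer combinations of the basis rows
        pop = [rng.integers(-3, 4, size=B.shape[0]) @ B for _ in range(pop_size)]
        pop = [v for v in pop if np.any(v)]
        for _ in range(rounds):
            i, j = rng.choice(len(pop), size=2, replace=False)
            v, w = pop[i], pop[j]
            for cand in (v - w, v + w):
                if np.any(cand) and cand @ cand < max(v @ v, w @ w):
                    # replace the longer of the pair by the shorter combination
                    k = i if v @ v >= w @ w else j
                    pop[k] = cand
                    break
        return min(pop, key=lambda v: int(v @ v))

    if __name__ == "__main__":
        B = np.array([[19, 2, 3, 1],
                      [5, 23, 1, 4],
                      [2, 7, 31, 6],
                      [1, 3, 5, 29]])               # arbitrary toy basis
        v = short_vector_heuristic(B)
        print("short vector found:", v, "squared norm:", int(v @ v))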
