Abstract

In previous work, the security results of decorrelation theory were based on the matrix norm associated with the infinity norm. This makes it possible to prove that decorrelation provides security against non-adaptive iterated attacks. In this paper we define a new matrix norm dedicated to adaptive chosen plaintext attacks. Similarly, we construct another matrix norm dedicated to chosen plaintext and ciphertext attacks. The decorrelation formalism makes the notion of best advantage for distinguishers so easy to manipulate that we obtain, as a straightforward consequence, a rather intuitive theorem: the best advantage for distinguishing a random product cipher from a truly random permutation decreases exponentially with the number of terms. We show that several of the previous decorrelation results extend to these new norms. In particular, we show that the Peanut construction (for instance the DFC algorithm) provides security against adaptive iterated chosen plaintext attacks with unchanged bounds, and security against adaptive iterated chosen plaintext and ciphertext attacks with other bounds, which shows that it is actually super-pseudorandom. We also generalize the Peanut construction from the Feistel scheme to arbitrary schemes. We show that an analog of Luby-Rackoff's Lemma is all that is required in order to obtain decorrelation upper bounds.
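A sketch of the statement behind the exponential-decrease claim, using standard decorrelation notation that is assumed here rather than taken from the abstract (the d-wise distribution matrix $[C]^d$ of a cipher $C$, the perfect cipher $C^*$, and a submultiplicative matrix norm $\|\cdot\|$ such as the adaptive norm introduced in the paper):

\[
\bigl\| [C_1 \circ \cdots \circ C_r]^d - [C^*]^d \bigr\|
\;\le\; \prod_{i=1}^{r} \bigl\| [C_i]^d - [C^*]^d \bigr\| ,
\]

for independent ciphers $C_1,\dots,C_r$, since $[C_1 \circ \cdots \circ C_r]^d = [C_1]^d \times \cdots \times [C_r]^d$ and $[C^*]^d$ absorbs the remaining factors. Hence, if each term has decorrelation bias at most $\varepsilon < 1$ in the relevant norm, the product cipher has bias at most $\varepsilon^r$, and the best advantage of a $d$-limited distinguisher (half the bias in the corresponding norm) is at most $\varepsilon^r/2$, i.e. it decreases exponentially with the number of terms $r$.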
