What is randomness? The interplay between alpha-entropies, total variation and guessing
Abstract
In many areas of computer science, it is of primary importance to assess the randomness of a certain variable X. Many different criteria can be used to evaluate randomness, possibly after observing some disclosed data. A "sufficiently random" X is often described as "entropic". Indeed, Shannon's entropy is known to provide a resistance criterion against modeling attacks. More generally, one may consider the Rényi α-entropy, which recovers Shannon's entropy, collision entropy and min-entropy as the particular cases α = 1, 2 and +∞, respectively. Guesswork, or guessing entropy, is also of great interest in relation to α-entropy.
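For reference, these quantities admit the following standard definitions (the notation here is ours and may differ from the body of the paper). For X with distribution p over a finite alphabet, the Rényi α-entropy is
\[ H_\alpha(X) = \frac{1}{1-\alpha}\,\log \sum_x p(x)^\alpha \qquad (\alpha \neq 1), \]
with the limiting and particular cases
\[ H_1(X) = -\sum_x p(x)\log p(x), \qquad H_2(X) = -\log \sum_x p(x)^2, \qquad H_\infty(X) = -\log \max_x p(x), \]
that is, Shannon entropy, collision entropy and min-entropy, respectively. The guessing entropy is the minimum expected number of queries "is X = x?" needed to identify X, attained by guessing values in decreasing order of probability:
\[ G(X) = \sum_{i \ge 1} i\, p_{(i)}, \qquad p_{(1)} \ge p_{(2)} \ge \cdots \]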
On the other hand, many applications rely instead on the "statistical distance", a.k.a. the total variation distance, to the uniform distribution. This criterion is particularly important because a very small distance ensures that no statistical test can effectively distinguish the actual distribution from the uniform one.
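In the usual notation (again ours, not necessarily the paper's), the total variation distance of p to the uniform distribution u over an alphabet of size M is
\[ \Delta(X) = \frac{1}{2}\sum_x \Bigl| p(x) - \frac{1}{M} \Bigr|, \]
and any statistical test distinguishing p from u succeeds with advantage at most Δ(X), which is why a small Δ(X) certifies near-uniformity.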
We establish optimal lower and upper bounds between α-entropy and guessing entropy on the one hand, and error probability and total variation distance to the uniform distribution on the other. In this context, it turns out that the best known "Pinsker inequality" and recent "reverse Pinsker inequalities" are not necessarily optimal. We recover or improve previous Fano-type and Pinsker-type inequalities used in several applications.
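As a point of comparison (a standard form of the bound, not the optimal bounds established in the paper), Pinsker's inequality specialized to the uniform distribution u reads, with entropies in nats,
\[ D(p \,\|\, u) = \log M - H_1(X) \ \ge\ 2\,\Delta(X)^2, \]
so that H_1(X) ≥ log M − ε guarantees Δ(X) ≤ \sqrt{ε/2}. "Reverse Pinsker" inequalities bound D(p‖u) from above in terms of Δ(X); the contribution here is to sharpen both directions for α-entropies and guessing entropy.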