Recently, I’ve been investigating computational notions of entropy and had to use the Leftover Hash Lemma (a variant of which is stated below). I first encountered the lemma several years ago but never had to use it for anything… until now!
The lemma is attributed to Impagliazzo, Levin, and Luby [1]. A corollary of the lemma is that one can convert a source with high enough Rényi entropy into a distribution that is uniform (or close to uniform). Before stating the lemma, I’ll discuss a few different notions of entropy, including the classic Shannon entropy, min-entropy, max-entropy, and so on. See [2] for many applications of these entropy measures.

Entropy Measures
Consider a random variable $X$. We use $x \leftarrow X$ to denote that the element $x$ is randomly drawn according to $X$. Denote its support by $\mathrm{Supp}(X)$. Define the sample entropy of $x$ with respect to $X$ as

$$H_X(x) = \log \frac{1}{\Pr[X = x]},$$

where logarithms are base 2. The sample entropy measures how much randomness is present in the sample $x$ when generated according to the law/density function of $X$. Also, let $H_X(x) = \infty$ when $x \notin \mathrm{Supp}(X)$. Then we can state the entropy measures in terms of the sample entropy:
- Shannon Entropy: $H(X) = \mathbb{E}_{x \leftarrow X}[H_X(x)]$
- Min-Entropy: $H_\infty(X) = \min_{x \in \mathrm{Supp}(X)} H_X(x)$
- Rényi Entropy: $H_2(X) = \log \frac{1}{\Pr_{x, x' \leftarrow X}[x = x']}$ (the order-2 Rényi entropy, also known as collision entropy)
- Max-Entropy: $H_0(X) = \log |\mathrm{Supp}(X)|$
How should we interpret these measures? The min-entropy can be seen as a worst-case measure of how “random” a random variable is. The Rényi entropy, intuitively, measures how “collision-resistant” a random variable is (i.e., think hash functions). In my opinion, max-entropy does not give much information, except for how large the support of a random variable is. These entropy measures are related by this chain of inequalities:

$$H_\infty(X) \le H_2(X) \le H(X) \le H_0(X).$$
The inequalities above are tight, all holding with equality, if and only if $X$ is uniformly distributed over its support.
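To make the measures concrete, here is a small Python sketch (my own illustration, not code from [1] or [2]) that computes all four for an explicit distribution and checks the chain of inequalities:

```python
import math

# A small biased distribution over three outcomes.
probs = [0.5, 0.25, 0.25]

def sample_entropy(p):
    """Sample entropy H_X(x) = log2(1 / Pr[X = x]) of an outcome with probability p."""
    return math.log2(1.0 / p)

# Shannon entropy: the expected sample entropy.
shannon = sum(p * sample_entropy(p) for p in probs)

# Min-entropy: the worst-case (smallest) sample entropy.
min_entropy = min(sample_entropy(p) for p in probs)

# Rényi (collision) entropy: -log2 of the probability that two i.i.d. samples collide.
renyi = math.log2(1.0 / sum(p * p for p in probs))

# Max-entropy: log2 of the support size.
max_entropy = math.log2(len(probs))

print(f"min-entropy   = {min_entropy:.4f}")  # 1.0000
print(f"Rényi entropy = {renyi:.4f}")        # ≈ 1.4150
print(f"Shannon       = {shannon:.4f}")      # 1.5000
print(f"max-entropy   = {max_entropy:.4f}")  # ≈ 1.5850
assert min_entropy <= renyi <= shannon <= max_entropy
```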
The statement of the lemma below uses two-universal hash functions. Here is a definition: a function family $\mathcal{F} = \{f : D \to R\}$ is two-universal if for all distinct $x, x' \in D$, the following holds:

$$\Pr_{f \leftarrow \mathcal{F}}[f(x) = f(x')] \le \frac{1}{|R|}.$$
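A classic example of such a family (my example, chosen because it reappears in the simulation further below) is random matrix multiplication over GF(2): for a uniformly random $m \times n$ binary matrix $M$, set $f_M(x) = Mx$. For $x \ne x'$ we have $Mx = Mx'$ exactly when $M(x \oplus x') = 0$, which happens with probability $2^{-m} = 1/|R|$. A quick empirical check:

```python
import random

def random_gf2_matrix(m, n):
    """A uniformly random m-by-n matrix over GF(2)."""
    return [[random.randrange(2) for _ in range(n)] for _ in range(m)]

def hash_gf2(matrix, x_bits):
    """f_M(x) = M x over GF(2): each output bit is a row-input inner product mod 2."""
    return tuple(sum(r * b for r, b in zip(row, x_bits)) % 2 for row in matrix)

# Estimate the collision probability of two fixed distinct inputs over the
# random choice of the hash function; it should be close to 2^{-m}.
n, m, trials = 8, 3, 20000
x  = [1, 0, 1, 1, 0, 0, 1, 0]
xp = [0, 1, 1, 0, 0, 1, 0, 1]
collisions = sum(
    hash_gf2(M, x) == hash_gf2(M, xp)
    for M in (random_gf2_matrix(m, n) for _ in range(trials))
)
print(f"empirical Pr[f(x) = f(x')] ≈ {collisions / trials:.4f}  (target 1/|R| = {2**-m})")
```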
The Lemma
Statement: Let $X$ be a random variable over $\{0,1\}^n$ with $H_2(X) \ge k$. Consider a two-universal function family $\mathcal{F} = \{f : \{0,1\}^n \to \{0,1\}^m\}$, and let $U_m$ denote the uniform distribution over $\{0,1\}^m$. Then for any $\varepsilon > 0$ such that $m \le k - 2\log(1/\varepsilon)$, the statistical distance between $(f, f(X))$ and $(f, U_m)$, over a uniformly random $f \leftarrow \mathcal{F}$, is at most $\varepsilon$; equivalently, the distance is at most $\frac{1}{2} \cdot 2^{(m-k)/2}$.
One can interpret the statement above as saying that hashing with a randomly chosen two-universal function converts a random variable with high enough Rényi entropy into one that is statistically close to uniform, and this holds even given the hash function itself.
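As a sanity check, here is a small simulation (my own sketch, with arbitrarily chosen parameters, re-implementing the GF(2) matrix family from above on integer inputs) that hashes a skewed source and compares the exact statistical distance, averaged over random hash functions, against the bound $\frac{1}{2} \cdot 2^{(m-k)/2}$:

```python
import itertools
import math
import random

def random_gf2_matrix(m, n):
    """A uniformly random m-by-n matrix over GF(2)."""
    return [[random.randrange(2) for _ in range(n)] for _ in range(m)]

def hash_gf2(matrix, x, n):
    """f_M(x) = M x over GF(2), with the integer x read as an n-bit vector."""
    bits = [(x >> i) & 1 for i in range(n)]
    return tuple(sum(r * b for r, b in zip(row, bits)) % 2 for row in matrix)

n, m = 8, 2
N = 2 ** n
# A skewed source X over {0,1}^n: one heavy outcome, the rest equally likely.
probs = {x: (0.1 if x == 0 else 0.9 / (N - 1)) for x in range(N)}

# Rényi entropy k = H_2(X) = -log2(collision probability) of the source.
k = -math.log2(sum(p * p for p in probs.values()))

# The statistical distance of (f, f(X)) from (f, U_m) equals the average, over
# a random f, of the exact distance of the pushforward f(X) from uniform.
trials, total = 2000, 0.0
for _ in range(trials):
    M = random_gf2_matrix(m, n)
    pushforward = {y: 0.0 for y in itertools.product((0, 1), repeat=m)}
    for x, p in probs.items():
        pushforward[hash_gf2(M, x, n)] += p
    total += 0.5 * sum(abs(q - 2 ** -m) for q in pushforward.values())

bound = 0.5 * 2 ** ((m - k) / 2)
print(f"H_2(X) = {k:.3f} bits")
print(f"empirical distance ≈ {total / trials:.5f}, LHL bound = {bound:.5f}")
```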
References
[1] Russell Impagliazzo, Leonid A. Levin, and Michael Luby. Pseudo-random generation from one-way functions. In Proceedings of the 21st Annual ACM Symposium on Theory of Computing (STOC), 1989.
[2] Iftach Haitner and Salil Vadhan. The many entropies in one-way functions. In Tutorials on the Foundations of Cryptography, pages 159–217. Springer International Publishing, 2017.