I try to always consider the classical alternative to any quantum computation or quantum information-theoretic primitive. This is a deliberate choice. I am not a pure quantum theorist in the sense of studying quantum models in isolation, nor am I interested in quantum advantage as an article of faith. Rather, my goal is to delineate (as precisely as possible) the boundary between what classical and quantum theories can guarantee, especially when privacy guarantees are composed over time, across mechanisms, or between interacting systems.
In the context of privacy, composition is where theory meets reality: real systems are never single-shot. They involve repeated interactions, adaptive adversaries, and layered mechanisms. Quantum information introduces new phenomena (entanglement, non-commutativity, and measurement disturbance) that complicate classical intuitions about composition. At the same time, classical privacy theory has developed remarkably robust tools that often remain surprisingly competitive, even when quantum resources are allowed.
The guiding question of this post is therefore not “What can quantum systems do that classical ones cannot?” but rather:
When privacy guarantees are composed, what genuinely changes in the transition from classical to quantum, and what does not?
By keeping classical alternatives explicitly in view, we can better understand which privacy phenomena are inherently quantum, which are artifacts of modeling choices, and which reflect deeper structural principles that transcend the classical vs. quantum divide.
Classical Composition of Differential Privacy
Recall the definition of differential privacy:
Approximate Differential Privacy. Let $\mathcal{X}$ denote the data universe and let $\mathcal{X}^n$ be the set of datasets. Two datasets $x, x' \in \mathcal{X}^n$ are called neighbors, denoted $x \sim x'$, if they differ in the data of exactly one individual.
A (possibly randomized) algorithm $M : \mathcal{X}^n \to \mathcal{Y}$ is said to be $(\varepsilon, \delta)$-differentially private if for all neighboring datasets $x \sim x'$ and all measurable events $S \subseteq \mathcal{Y}$,
$$\Pr[M(x) \in S] \le e^{\varepsilon} \Pr[M(x') \in S] + \delta.$$
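To make the definition concrete, here is a minimal sketch of the Laplace mechanism, the textbook way to achieve $(\varepsilon, 0)$-DP for a query whose value changes by at most a bounded sensitivity between neighboring datasets (illustrative code, not part of the definition above):

```python
import math
import random

def laplace_mechanism(value, eps, sensitivity=1.0):
    """Release `value` with Laplace(sensitivity/eps) noise.

    Calibrating the noise scale to sensitivity/eps yields (eps, 0)-DP
    for a query that changes by at most `sensitivity` between neighbors.
    """
    u = random.random() - 0.5                      # uniform on [-0.5, 0.5)
    scale = sensitivity / eps
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)
    return value + noise

# Release a counting query (sensitivity 1) under eps = 0.5.
noisy_count = laplace_mechanism(value=100, eps=0.5)
```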
It has been shown in a few references/textbooks that basic composition holds for differential privacy. We recall the statement:
Theorem (Basic sequential composition for approximate differential privacy). Fix $k \in \mathbb{N}$. For each $i \in \{1, \dots, k\}$, let $M_i$ be a (possibly randomized) algorithm that, on input a dataset $x \in \mathcal{X}^n$, outputs a random variable in some measurable output space $\mathcal{Y}_i$. Assume that for every $i$, $M_i$ is $(\varepsilon_i, \delta_i)$-differentially private.
Define the $k$-round interactive (sequential) mechanism $M$ as follows: on input $x$, for $i = 1, \dots, k$, it outputs $Y_i = M_i(x)$, where $M_i$ denotes the $i$th mechanism, possibly chosen adaptively as a (measurable) function of the past transcript $(Y_1, \dots, Y_{i-1})$. Let $Y = (Y_1, \dots, Y_k)$ denote the full transcript in the product space $\mathcal{Y}_1 \times \cdots \times \mathcal{Y}_k$.
Then $M$ is $\left(\sum_{i=1}^{k} \varepsilon_i, \sum_{i=1}^{k} \delta_i\right)$-differentially private.
In particular, if $\varepsilon_i = \varepsilon$ and $\delta_i = \delta$ for all $i$, then $M$ is $(k\varepsilon, k\delta)$-differentially private.
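The theorem can be read as a trivially simple privacy accountant: per-round parameters just add. A quick sketch (the function name is mine, not standard library code):

```python
def basic_composition(params):
    """Basic sequential composition: per-round (eps_i, delta_i) guarantees
    compose to (sum of eps_i, sum of delta_i) for the full transcript."""
    eps = sum(e for e, _ in params)
    delta = sum(d for _, d in params)
    return eps, delta

# Five adaptive rounds, each (0.1, 1e-6)-DP, give a (0.5, 5e-6)-DP pipeline.
total_eps, total_delta = basic_composition([(0.1, 1e-6)] * 5)
```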
What happens in the quantum setting?
Composition of Quantum Differential Privacy
A central "classical DP intuition" we have already set up is: once you have per-step privacy bounds, you can stack them, and in the simplest form the parameters add, e.g., $(\varepsilon_i, \delta_i)$ guarantees sum to $(\sum_i \varepsilon_i, \sum_i \delta_i)$ across rounds. In the quantum world, however, DP is commonly defined operationally against arbitrary measurements, and this makes the usual classical composition proofs, which rely on a scalar privacy-loss random variable, no longer directly applicable.
In a recent work, Theshani Nuradha and I show two complementary points, one negative (a barrier) and one positive:
Composition can fail in full generality for approximate QDP (POVM-based). We show that if you allow correlated joint implementations when combining mechanisms/channels, then “classical-style” composition need not hold: even channels that are “individually perfectly private” can lose privacy drastically when composed in this fully general way.
Composition can be restored under explicit structural assumptions. We identify a regime where you can recover clean composition statements: tensor-product channels acting on product neighboring inputs. In that regime, we propose a quantum moments accountant built from an operator-valued notion of privacy loss and a matrix moment-generating function (MGF).
How we get operational guarantees (despite a key obstacle). A subtlety we highlight: the Rényi-type divergence we consider for the moments accountant does not satisfy a data-processing inequality. Nevertheless, we prove that controlling appropriate moments is still enough to upper bound measured Rényi divergence, which does correspond to operational privacy against arbitrary measurements.
End result: advanced-composition-style behavior (in the right setting). Under those structural assumptions, the paper obtains advanced-composition-style bounds with the same leading-order behavior as in classical DP. That is, you can once again reason modularly about long pipelines, but only after carefully stating what "composition" means physically/operationally in the quantum setting (joint, tensor-product, or factorized).
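A small classical illustration of why the tensor-product regime is the friendly one (a toy calculation of my own, not the paper's quantum construction): Rényi divergences are additive across independent copies, which is exactly the kind of linear accumulation a moments accountant exploits. For commuting (diagonal) density operators, the quantum quantity reduces to the classical one computed here.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Classical Rényi divergence D_alpha(p || q), for alpha != 1."""
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

p = np.array([0.6, 0.4])
q = np.array([0.5, 0.5])

# Three independent copies: the divergence of the product distribution
# is exactly 3x the single-copy divergence (additivity under products).
p3 = np.kron(np.kron(p, p), p)
q3 = np.kron(np.kron(q, q), q)
single = renyi_divergence(p, q, alpha=2.0)
triple = renyi_divergence(p3, q3, alpha=2.0)
```

Correlated joint implementations break exactly this product structure, which is why fully general composition can fail.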
Check out the paper. Feedback/comments are welcome!
How would you prove you’ve solved a Sudoku puzzle without revealing the solution? You can construct a zero-knowledge proof showing the grid satisfies Sudoku rules (unique numbers per row, column, and box).
This semester, one of my projects involves zero-knowledge proofs. I’ll try to explain what I’ve learned about this amazing concept and its variants (with particular attention to statistical zero-knowledge). Shout out to Boaz’s cryptography class. Zero-knowledge proofs have found profound use in blockchain technology, authentication, privacy, and so on.
Definition
Intuition: Imagine someone wants to prove they know the solution to a complex problem (e.g., a puzzle or how Trump was going to win the election) without revealing the solution. They use a process that convinces the verifier they have the solution without showing it.
A zero-knowledge proof (ZKP) is a method by which one party (the prover) can demonstrate to another party (the verifier) that a specific statement is true without revealing any additional information about the statement itself.
Key Properties of Zero-Knowledge Proofs
There are a few variants of zero-knowledge; here is one definition, stated via three key properties:
Completeness: If the statement is true, an honest prover can convince the verifier of its truth.
If $x \in L$ and the prover knows a valid witness $w$, then the honest verifier $V$ will accept the proof with probability at least $1 - \mathrm{negl}(|x|)$:
$$\Pr[\langle P(w), V \rangle(x) = \mathrm{accept}] \ge 1 - \mathrm{negl}(|x|).$$
Soundness: If the statement is false, no dishonest prover can convince the verifier that it is true (except with an extremely small probability).
If $x \notin L$, then no cheating prover $P^*$ can convince the honest verifier to accept, except with negligible probability:
$$\Pr[\langle P^*, V \rangle(x) = \mathrm{accept}] \le \mathrm{negl}(|x|),$$
where $\mathrm{negl}$ is a negligible function.
Zero-Knowledge: The verifier learns nothing other than the fact that the statement is true. No information about how or why the statement is true is revealed.
For every polynomial-time verifier $V^*$, there exists a polynomial-time simulator $S$ such that the output of $S(x)$ is computationally indistinguishable from the interaction between $P$ and $V^*$ on input $x \in L$:
$$\{\langle P, V^* \rangle(x)\} \approx_c \{S(x)\}.$$
| Type | Definition | Guarantee |
| --- | --- | --- |
| Perfect Zero-Knowledge | Real and simulated distributions are identical. | Holds even against computationally unbounded verifiers. |
| Statistical Zero-Knowledge | Real and simulated distributions are statistically close (negligible difference). | Holds against computationally unbounded verifiers. |
| Computational Zero-Knowledge | Real and simulated distributions are computationally indistinguishable for polynomial-time verifiers. | Holds only against computationally bounded verifiers. |
Interactive Zero-Knowledge Proofs
These involve a back-and-forth interaction between the prover and the verifier.
Graph Isomorphism: Prove that two graphs are isomorphic (structurally identical) without revealing the isomorphism itself. Alice proves to Bob that she knows a way to relabel the nodes of graph $G_1$ to match graph $G_2$.
Hamiltonian Cycle Problem: Prove that a graph contains a Hamiltonian cycle (a path visiting every vertex exactly once) without revealing the actual cycle.
Non-Interactive Zero-Knowledge Proofs (NIZKs)
These eliminate the need for interaction, enabling the prover to generate a single proof that can be verified multiple times.
zk-SNARKs (Succinct Non-Interactive Arguments of Knowledge): Widely used in blockchain systems like Zcash to validate transactions while keeping them private. Example: Prove that a transaction is valid (inputs equal outputs) without disclosing amounts or participants.
zk-STARKs (Scalable Transparent Arguments of Knowledge): A transparent alternative to zk-SNARKs that avoids the need for trusted setups and is more scalable. Example: Used in Ethereum Layer-2 solutions like StarkNet to bundle transaction proofs.
The Fiat–Shamir Heuristic: a technique to convert interactive proofs into non-interactive ones using cryptographic hash functions.
Schnorr Protocol: A proof that you know a discrete logarithm of a number without revealing the logarithm itself. Example: Prove ownership of a private key without exposing it (used in Schnorr signatures).
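The Schnorr protocol combined with Fiat–Shamir can be sketched in a few lines. The group below ($g = 2$ of order $q = 11$ in $\mathbb{Z}_{23}^*$) is a toy I chose for readability; real deployments need a standardized large prime-order group.

```python
import hashlib
import secrets

# Toy group: g = 2 generates a subgroup of prime order q = 11 in Z_23^*.
p, q, g = 23, 11, 2

def prove(x):
    """Prove knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)             # ephemeral nonce
    t = pow(g, k, p)                     # commitment
    # Fiat-Shamir: the challenge is a hash of the transcript so far.
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    s = (k + c * x) % q                  # response
    return y, t, s

def verify(y, t, s):
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    # Accept iff g^s = t * y^c (mod p).
    return pow(g, s, p) == t * pow(y, c, p) % p
```

Replacing the verifier's random challenge by a hash of the commitment is exactly what makes the proof non-interactive and publicly verifiable.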
Example Use Cases
Zero-knowledge proofs (ZKPs) come in different forms, with specific examples being applied across theoretical and practical scenarios. Below are some notable examples:
1. Commit-and-Prove Protocols
Combine commitments (binding and hiding data) with zero-knowledge proofs (for example, Pedersen commitments). Prove that you committed to a value $v$ without revealing $v$, but can later open the commitment to verify $v$.
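A Pedersen-style commitment fits in a few lines. The parameters below are illustrative toys of my choosing (in a real system the group is large and $h$ must be a generator whose discrete log with respect to $g$ is unknown, otherwise binding fails):

```python
# Toy Pedersen commitment C = g^v * h^r mod p.
p = 2**61 - 1       # a Mersenne prime; far too small for real use
g, h = 3, 7         # illustrative bases; binding needs log_g(h) unknown

def commit(v, r):
    """Hiding comes from the random r; binding from hardness of discrete log."""
    return pow(g, v, p) * pow(h, r, p) % p

def open_commitment(C, v, r):
    """Verify an opening (v, r) against a commitment C."""
    return C == commit(v, r)
```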
2. Bulletproofs
Efficient range proofs that demonstrate a value lies within a specific range without revealing the value. Example: Used in Monero to ensure transaction amounts are positive without disclosing the actual amounts.
3. Proofs in Cloud Computing
Proof of Retrievability: Prove a cloud provider stores your data without downloading it. Example: Used in decentralized storage systems like Filecoin.
Proof of Computation: Demonstrate the correctness of outsourced computation without revealing inputs or outputs.
4. Secure Voting Protocols
Homomorphic Encryption-Based Proofs: Prove a vote is valid (e.g., within a candidate set) without revealing the voter’s choice.
5. Knowledge of a Password
Example: Authenticate to a server by proving knowledge of a password without transmitting it. SRP Protocol (Secure Remote Password): Verifies a user knows a password without sending the password itself.
Perfect Zero-Knowledge
Perfect Zero-Knowledge is a stronger version of zero-knowledge where the verifier cannot distinguish between the interaction with the actual prover and the simulated interaction, even with unlimited computational power. In other words, the simulator’s output is statistically identical to the real interaction transcript, not just computationally indistinguishable.
Formal Definition
Let $(P, V)$ be a proof system for a language $L$. The proof system is perfect zero-knowledge if for every polynomial-time verifier $V^*$, there exists a probabilistic polynomial-time simulator $S$ such that for every $x \in L$:
$$\mathrm{View}_{V^*}(P, V^*)(x) \equiv S(x),$$
where:
$\mathrm{View}_{V^*}(P, V^*)(x)$ is the transcript of the interaction between $P$ and $V^*$ on input $x$,
$S(x)$ is the simulated transcript generated by $S$ for the same input $x$.
This implies that the probability distributions of the transcripts from the real interaction and the simulated interaction are exactly the same.
Key Features of Perfect Zero-Knowledge
Perfect Indistinguishability: The output of the simulator has exactly the same distribution as the real transcript, meaning the statistical distance between the two distributions is exactly zero.
Stronger Privacy Guarantees: Since the guarantee holds even against verifiers with infinite computational power, it is stronger than computational zero-knowledge, where the indistinguishability only holds for polynomial-time adversaries.
Example of Perfect Zero-Knowledge
The classic Graph Isomorphism Zero-Knowledge Protocol is a perfect zero-knowledge protocol:
A prover shows two graphs are isomorphic without revealing the actual isomorphism.
The verifier cannot distinguish between a genuine interaction and a simulated one, even with infinite computational power, making it perfect zero-knowledge.
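One round of the graph isomorphism protocol is easy to write down. The sketch below checks completeness only on a toy 4-vertex instance of my own making (no commitments, no networking, no soundness amplification):

```python
import random

def permute(graph, perm):
    """Apply a vertex relabeling to an undirected edge set."""
    return {frozenset((perm[u], perm[v])) for u, v in (tuple(e) for e in graph)}

# Toy instance: G2 = pi(G1) for a secret isomorphism pi (the witness).
G1 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
pi = {0: 2, 1: 0, 2: 3, 3: 1}
G2 = permute(G1, pi)

def round_of_protocol():
    # Prover: send H = sigma(G1) for a fresh random permutation sigma.
    sigma = dict(enumerate(random.sample(range(4), 4)))
    H = permute(G1, sigma)
    # Verifier: random challenge bit b; prover reveals a map from G_b to H.
    b = random.randint(0, 1)
    if b == 0:
        opening = sigma                                # maps G1 -> H
        return permute(G1, opening) == H
    opening = {pi[v]: sigma[v] for v in sigma}         # maps G2 -> H
    return permute(G2, opening) == H
```

Because the revealed map is a uniformly random permutation either way, a simulator can produce identically distributed transcripts without the witness, which is exactly why the protocol is perfect zero-knowledge.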
Computational Zero-Knowledge
Computational Zero-Knowledge is a type of zero-knowledge proof where the verifier cannot distinguish between the actual interaction with the prover and the output of a simulator, provided the verifier has limited (polynomial-time) computational power.
This means that the zero-knowledge property relies on the computational infeasibility of distinguishing between the two scenarios, often based on cryptographic hardness assumptions (e.g., the difficulty of factoring large numbers or solving discrete logarithms).
Formal Definition
Let $(P, V)$ be a proof system for a language $L$. The system is computational zero-knowledge if for every probabilistic polynomial-time (PPT) verifier $V^*$, there exists a PPT simulator $S$ such that for every $x \in L$, the distributions $\{\mathrm{View}_{V^*}(P, V^*)(x)\}$ and $\{S(x)\}$ are computationally indistinguishable. That is, no polynomial-time distinguisher can tell apart the real interaction and the simulated interaction with non-negligible probability.
Key Features of Computational Zero-Knowledge
Computational Indistinguishability: The zero-knowledge property holds against adversaries with limited computational power (polynomial-time distinguishers). If the verifier were computationally unbounded, they might be able to differentiate the two distributions.
Cryptographic Assumptions: Computational zero-knowledge often relies on assumptions like:
The infeasibility of factoring large integers.
The hardness of the discrete logarithm problem.
Other complexity-theoretic assumptions.
Relaxed Privacy Guarantees: Unlike perfect zero-knowledge, where the simulated and real distributions are statistically identical, computational zero-knowledge only guarantees privacy against computationally bounded adversaries.
Examples of Computational Zero-Knowledge
zk-SNARKs: Used in blockchain protocols like Zcash to ensure transaction validity without revealing sensitive details. The zero-knowledge property here relies on computational assumptions.
Interactive Proofs with Commitment Schemes: Many zero-knowledge protocols use cryptographic commitments (e.g., Pedersen commitments) to hide information during the proof, ensuring the verifier cannot extract more data computationally.
Real-World Importance
Computational zero-knowledge is widely used in practical applications, such as:
Cryptocurrencies (e.g., Zcash, zkRollups).
Authentication protocols.
Privacy-preserving identity verification.
It strikes a balance between strong privacy guarantees and computational efficiency, making it suitable for real-world cryptographic systems.
Statistical Zero-Knowledge
Statistical Zero-Knowledge (SZK) is a type of zero-knowledge proof where the verifier cannot distinguish between the real interaction with the prover and the output of a simulator, even with unlimited computational power. The key difference from perfect zero-knowledge is that the two distributions (real and simulated) are not identical but are statistically close, meaning the difference between them is negligible.
Formal Definition
Let $(P, V)$ be a proof system for a language $L$. The system is statistical zero-knowledge if for every probabilistic polynomial-time (PPT) verifier $V^*$, there exists a PPT simulator $S$ such that for every $x \in L$, the output distributions $\{\mathrm{View}_{V^*}(P, V^*)(x)\}$ and $\{S(x)\}$ are statistically indistinguishable. This means the statistical distance (or total variation distance) between the two distributions is negligible:
$$\Delta\bigl(\mathrm{View}_{V^*}(P, V^*)(x),\, S(x)\bigr) \le \mathrm{negl}(|x|),$$
where $\mathrm{negl}(\cdot)$ is a negligible function of the input size $|x|$.
Key Features of Statistical Zero-Knowledge
Statistical Indistinguishability: The difference between the real and simulated transcripts is negligibly small, even for verifiers with unlimited computational power.
Weaker than Perfect Zero-Knowledge: Perfect zero-knowledge requires the distributions to be exactly identical, while statistical zero-knowledge allows for a negligible difference.
Stronger than Computational Zero-Knowledge: Computational zero-knowledge only guarantees indistinguishability for polynomial-time adversaries, whereas statistical zero-knowledge holds against adversaries with unlimited computational power.
No Dependence on Cryptographic Assumptions: SZK is typically not reliant on computational hardness assumptions, unlike computational zero-knowledge.
Examples of Statistical Zero-Knowledge
Quadratic Residuosity Problem: Prove that a number $y$ is a quadratic residue modulo $N$ (a composite number) without revealing the factorization of $N$. The simulator can generate transcripts statistically indistinguishable from those produced during the real interaction.
Graph Isomorphism Problem: Prove that two graphs $G_1$ and $G_2$ are isomorphic without revealing the isomorphism. The verifier's view of the interaction can be statistically simulated.
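One round of the classic quadratic residuosity protocol fits in a few lines. The modulus below is a toy of my own choosing (a real instance uses a large RSA modulus), and only completeness is checked:

```python
import math
import random

N = 7 * 11            # toy composite modulus
x = 9                 # secret square root (the witness)
y = x * x % N         # public claim: y is a quadratic residue mod N

def round_of_protocol():
    # Prover: commit to a = r^2 for a random unit r mod N.
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    a = r * r % N
    # Verifier: random challenge bit b; prover responds z = r * x^b.
    b = random.randint(0, 1)
    z = r * pow(x, b, N) % N
    # Verifier accepts iff z^2 = a * y^b (mod N).
    return z * z % N == a * pow(y, b, N) % N
```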
Real-World Applications
While SZK is less common in practical applications compared to computational zero-knowledge, it has theoretical importance in cryptographic protocol design and scenarios where absolute guarantees against powerful adversaries are required.
Recently, I’ve been investigating computational notions of entropy and had to use the Leftover Hash Lemma (a variant of which is stated below). I first encountered the lemma several years ago but didn’t have to use it for anything… until now!
The lemma is attributed to Impagliazzo, Levin, and Luby [1]. A corollary of the lemma is that one can convert a source of (high-enough) Rényi entropy into a distribution that is uniform (or close to uniform). Before stating the Lemma, I’ll discuss a few different notions of entropy, including the classic Shannon Entropy, min-entropy, max-entropy, and so on. See [2] for many different applications for the entropy measures.
Entropy Measures
Consider a random variable $X$. We use $x \leftarrow X$ to denote that the element $x$ is randomly drawn from $X$. Denote its support by $\mathrm{Supp}(X)$. Define the sample entropy of $x$ with respect to $X$ as $H_X(x) = \log_2 \frac{1}{\Pr[X = x]}$. The sample entropy measures how much randomness is present in the sample $x$ when generated according to the law/density function of $X$. Also, let $H_X(x) = \infty$ when $x \notin \mathrm{Supp}(X)$. Then we can state the entropy measures in terms of the sample entropy:
Shannon Entropy: $H(X) = \mathbb{E}_{x \leftarrow X}[H_X(x)]$
Min-Entropy: $H_{\min}(X) = \min_{x \in \mathrm{Supp}(X)} H_X(x)$
Rényi Entropy (of order 2, a.k.a. collision entropy): $H_2(X) = \log_2 \frac{1}{\Pr_{x, x' \leftarrow X}[x = x']}$
Max-Entropy: $H_{\max}(X) = \log_2 |\mathrm{Supp}(X)|$
How should we interpret these measures? The min-entropy can be seen as a worst-case measure of how “random” a random variable is. The Rényi entropy measure, intuitively, measures how “collision-resistant” a random variable is (i.e., think hash functions). In my opinion, max-entropy does not give much information, except for how large the support of a random variable is. These entropy measures are related by this inequality:
$$H_{\min}(X) \le H_2(X) \le H(X) \le H_{\max}(X).$$
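The chain of inequalities is easy to verify numerically for a small non-uniform distribution (a quick sketch):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # an example distribution

shannon = -(p * np.log2(p)).sum()          # H(X): expected sample entropy
min_ent = -np.log2(p.max())                # H_min(X): worst-case sample entropy
renyi2  = -np.log2((p ** 2).sum())         # H_2(X): -log of collision probability
max_ent = np.log2(len(p))                  # H_max(X): log of support size

# H_min <= H_2 <= H <= H_max, strict here since p is not uniform.
assert min_ent <= renyi2 <= shannon <= max_ent
```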
The inequality above is tight if and only if $X$ is uniformly distributed on its support. The statement of the lemma below uses universal hash functions. Here is a definition:
A function family $\mathcal{F} = \{f : \{0,1\}^n \to \{0,1\}^m\}$ is two-universal if for all $x \ne x' \in \{0,1\}^n$, the following holds: $\Pr_{f \leftarrow \mathcal{F}}[f(x) = f(x')] \le 2^{-m}$.
The Lemma
Statement: Let $X$ be a random variable over $\{0,1\}^n$ with $H_2(X) \ge k$. Consider a two-universal function family $\mathcal{F} = \{f : \{0,1\}^n \to \{0,1\}^m\}$. Then for any output length $m$, with $f \leftarrow \mathcal{F}$ drawn uniformly and independently of $X$, the statistical distance between $(f, f(X))$ and $(f, U_m)$ is at most $\frac{1}{2}\sqrt{2^{m-k}}$, where $U_m$ is uniform on $\{0,1\}^m$.
One can interpret the statement above as saying that you can convert a random variable with high-enough Rényi entropy into a random variable that is very close to uniform.
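The lemma can be checked exhaustively on a tiny example. Random linear maps over $\mathrm{GF}(2)$ form a two-universal family, so the sketch below enumerates every such map for $n = 4$, $m = 2$ against a source with $H_2(X) = 3$ and compares the exact average statistical distance with the bound $\frac{1}{2}\sqrt{2^{m-k}}$:

```python
import itertools
import numpy as np

n, m = 4, 2
support = list(range(8))   # X uniform on 8 of the 16 strings -> H_2(X) = k = 3
k = 3

def apply_matrix(A, x):
    """y_i = <row_i, x> over GF(2); rows and inputs packed as n-bit ints."""
    return sum(((bin(A[i] & x).count("1") & 1) << i) for i in range(m))

# Average, over all 2^(n*m) matrices A, of SD(A(X), uniform); this equals
# the statistical distance between (A, A(X)) and (A, U_m) for uniform A.
total = 0.0
matrices = list(itertools.product(range(2 ** n), repeat=m))
for A in matrices:
    counts = np.zeros(2 ** m)
    for x in support:
        counts[apply_matrix(A, x)] += 1 / len(support)
    total += 0.5 * np.abs(counts - 1 / 2 ** m).sum()
avg_sd = total / len(matrices)

bound = 0.5 * 2 ** ((m - k) / 2)   # Leftover Hash Lemma bound, about 0.354 here
```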
References
[1] R. Impagliazzo, L. A. Levin, and M. Luby. Pseudo-random generation from one-way functions. STOC 1989.
[2] I. Haitner and S. Vadhan. The Many Entropies in One-Way Functions. Pages 159–217, Springer International Publishing, 2017.