Composition of Privacy Guarantees: Classical and Quantum

I try to always consider the classical alternative to any quantum computation or quantum information-theoretic primitive. This is a deliberate choice. I am not a pure quantum theorist in the sense of studying quantum models in isolation, nor am I interested in quantum advantage as an article of faith. Rather, my goal is to delineate (as precisely as possible) the boundary between what classical and quantum theories can guarantee, especially when privacy guarantees are composed over time, across mechanisms, or between interacting systems.

In the context of privacy, composition is where theory meets reality: real systems are never single-shot. They involve repeated interactions, adaptive adversaries, and layered mechanisms. Quantum information introduces new phenomena (entanglement, non-commutativity, and measurement disturbance) that complicate classical intuitions about composition. At the same time, classical privacy theory has developed remarkably robust tools that often remain surprisingly competitive, even when quantum resources are allowed.

The guiding question of this post is therefore not “What can quantum systems do that classical ones cannot?” but rather:

When privacy guarantees are composed, what genuinely changes in the transition from classical to quantum, and what does not?

By keeping classical alternatives explicitly in view, we can better understand which privacy phenomena are inherently quantum, which are artifacts of modeling choices, and which reflect deeper structural principles that transcend the classical vs. quantum divide.

Classical Composition of Differential Privacy

Recall the definition of differential privacy:

Approximate Differential Privacy
Let \mathcal{X} denote the data universe and let \mathcal{D} \subseteq \mathcal{X}^n be the set of datasets.
Two datasets D,D'\in\mathcal{D} are called neighbors, denoted D\sim D', if they differ in the data of exactly one individual.

A (possibly randomized) algorithm \mathcal{M} : \mathcal{D} \to (\mathcal{Y},\mathcal{F}) is said to be
(\varepsilon,\delta)-differentially private if for all neighboring datasets D\sim D' and all measurable events
S \in \mathcal{F},
\Pr[\mathcal{M}(D)\in S] \;\le\; e^{\varepsilon}\Pr[\mathcal{M}(D')\in S] + \delta.
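
To make the definition concrete, here is a minimal, illustrative sketch (not tied to any particular library, and not from the paper) of the Gaussian mechanism, which satisfies (\varepsilon,\delta)-differential privacy for \varepsilon\in(0,1) when the noise scale is calibrated to the query's \ell_2-sensitivity. The function name and the example query are hypothetical.

    import numpy as np

    def gaussian_mechanism(dataset, query, sensitivity, epsilon, delta, rng=None):
        """Release query(dataset) with Gaussian noise calibrated to (epsilon, delta)-DP.

        Assumes query has L2-sensitivity at most `sensitivity` over neighboring
        datasets and that 0 < epsilon < 1 (the regime of the standard analysis).
        """
        rng = np.random.default_rng() if rng is None else rng
        # Standard calibration: sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / epsilon.
        sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        true_answer = np.asarray(query(dataset), dtype=float)
        return true_answer + rng.normal(0.0, sigma, size=true_answer.shape)

    # Example: a counting query (how many records exceed a threshold), sensitivity 1.
    data = np.array([3.2, 7.9, 5.1, 6.4, 2.0])
    noisy_count = gaussian_mechanism(data, lambda d: np.sum(d > 5.0),
                                     sensitivity=1.0, epsilon=0.5, delta=1e-5)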

Basic composition for differential privacy is a standard result, proved in several textbooks and references. We recall the statement:

Theorem (Basic sequential composition for approximate differential privacy)
Fix k\in\mathbb{N}. For each i\in\{1,\ldots,k\} let \mathcal{M}_i be a (possibly randomized) algorithm that, on input a dataset D, outputs a random variable in some measurable output space (\mathcal{Y}_i,\mathcal{F}_i).
Assume that for every i, \mathcal{M}_i is (\varepsilon_i,\delta_i)-differentially private.

Define the k-round interactive (sequential) mechanism \mathcal{M} as follows: on input D, for i=1,\ldots,k, it outputs Y_i \leftarrow \mathcal{M}_i (D; Y_1,\ldots,Y_{i-1}),
where \mathcal{M}_i(\cdot; y_{<i}) denotes the ith mechanism possibly chosen adaptively as a (measurable) function of the past transcript y_{<i}=(y_1,\ldots,y_{i-1}).
Let Y=(Y_1,\ldots,Y_k) denote the full transcript in the product space
(\mathcal{Y},\mathcal{F}) := \prod_{i=1}^k (\mathcal{Y}_i,\mathcal{F}_i).

Then \mathcal{M} is \left(\sum_{i=1}^k \varepsilon_i,\ \sum_{i=1}^k \delta_i\right)-differentially private.

In particular, if \varepsilon_i=\varepsilon and \delta_i=\delta for all i, then \mathcal{M} is (k\varepsilon, k\delta)-differentially private.
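
In code, the theorem amounts to simple bookkeeping: a privacy "accountant" that adds the per-round parameters. A minimal sketch (the class and method names are illustrative, not a production accountant):

    class BasicCompositionAccountant:
        """Tracks cumulative (epsilon, delta) under basic sequential composition."""

        def __init__(self):
            self.epsilon = 0.0
            self.delta = 0.0

        def spend(self, epsilon_i, delta_i):
            # Basic composition: parameters add across (possibly adaptive) rounds.
            self.epsilon += epsilon_i
            self.delta += delta_i

        def budget(self):
            return self.epsilon, self.delta

    # Ten adaptive rounds, each (0.1, 1e-6)-DP, compose to (1.0, 1e-5)-DP overall.
    accountant = BasicCompositionAccountant()
    for _ in range(10):
        accountant.spend(0.1, 1e-6)
    print(accountant.budget())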

What happens in the quantum setting?

Composition of Quantum Differential Privacy

A central "classical DP intuition" we have already set up is that once you have per-step privacy bounds, you can stack them, and in the simplest form the parameters add: (\varepsilon, \delta) pairs sum across rounds. In the quantum world, however, DP is commonly defined operationally, against arbitrary measurements, and this makes the usual classical composition proofs, which rely on a scalar privacy-loss random variable, no longer directly applicable.
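
To recall the classical object (standard material, not specific to our paper): for discrete outputs, the privacy-loss random variable on a neighboring pair D\sim D' is
\mathcal{L}_{D,D'}(y) \;=\; \ln\frac{\Pr[\mathcal{M}(D)=y]}{\Pr[\mathcal{M}(D')=y]}, \qquad y\sim\mathcal{M}(D),
and classical composition theorems control how these scalar losses accumulate across rounds. In the quantum setting the outputs are states rather than samples, and the relevant "loss" depends on which measurement is ultimately applied, so there is no single scalar random variable to sum.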

In a recent work, Theshani Nuradha and I establish two complementary sets of results, one negative (a barrier) and one positive, summarized in the four points below:

  1. Composition can fail in full generality for approximate QDP (POVM-based).
    We show that if you allow correlated joint implementations when combining mechanisms/channels, then “classical-style” composition need not hold: even channels that are “individually perfectly private” can lose privacy drastically when composed in this fully general way.
  2. Composition can be restored under explicit structural assumptions.
    We identify a regime in which clean composition statements can be recovered: tensor-product channels acting on product neighboring inputs. In that regime, we propose a quantum moments accountant built from an operator-valued notion of privacy loss and a matrix moment-generating function (MGF); a sketch of the classical accountant idea it generalizes appears after this list.
  3. How we get operational guarantees (despite a key obstacle).
    A subtlety we highlight: the Rényi-type divergence we consider for the moments accountant does not satisfy a data-processing inequality. Nevertheless, we prove that controlling appropriate moments is still enough to upper bound measured Rényi divergence, which does correspond to operational privacy against arbitrary measurements.
  4. End result: advanced-composition-style behavior (in the right setting).
    Under those structural assumptions, the paper obtains advanced-composition-style bounds with the same leading-order behavior as in classical DP. In other words, you can once again reason modularly about long pipelines, but only after carefully stating what "composition" means (joint, tensor-product, or factorized) physically and operationally in the quantum setting.
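
For context on what a moments accountant does, here is a minimal classical sketch in the style of Abadi et al.'s accountant; it is the scalar analogue of the operator-valued construction mentioned in point 2, and all names and numeric parameters are illustrative. Per round, one bounds the log moment-generating function \alpha(\lambda) of the privacy-loss random variable; these bounds add under composition, and a Chernoff-style tail bound converts the total into an (\varepsilon,\delta) guarantee.

    import numpy as np

    class ClassicalMomentsAccountant:
        """Track per-round bounds on the log-MGF alpha(lambda) of the privacy loss;
        compose by summing, then convert to (epsilon, delta) via a tail bound."""

        def __init__(self, lambdas):
            self.lambdas = np.asarray(lambdas, dtype=float)
            self.log_mgf = np.zeros_like(self.lambdas)

        def compose(self, per_round_log_mgf):
            # Log-MGF bounds add across composed rounds (for each fixed lambda).
            self.log_mgf += np.asarray(per_round_log_mgf, dtype=float)

        def delta_for_epsilon(self, epsilon):
            # Chernoff/Markov: delta <= min_lambda exp(alpha(lambda) - lambda * epsilon).
            return float(np.min(np.exp(self.log_mgf - self.lambdas * epsilon)))

    # The Gaussian mechanism with noise multiplier sigma (sensitivity 1) satisfies
    # alpha(lambda) <= lambda * (lambda + 1) / (2 * sigma^2) per round.
    sigma, rounds = 20.0, 100
    lambdas = np.arange(1, 64)
    acct = ClassicalMomentsAccountant(lambdas)
    for _ in range(rounds):
        acct.compose(lambdas * (lambdas + 1) / (2.0 * sigma**2))
    print(acct.delta_for_epsilon(epsilon=2.5))  # delta achievable at this epsilon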

Check out the paper. Feedback/comments are welcome!