Secure Non-Interactive Reductions

A recurring theme in cryptography is that correlation is a resource. Alice and Bob may start with a source correlation C, say noisy copies of a bit or erasures. The hope is to convert it into a more useful target correlation D. What changes dramatically is whether they are allowed to talk, and whether we only care about correctness or also about security. The recent SNIR/SNIS line of work makes that distinction precise and surprisingly rigid.

This post explains that landscape in a way I hope is useful. The main message/tl;dr is:

  • ordinary non-secure non-interactive reductions ask only whether local maps can reproduce the right joint law;
  • secure non-interactive reductions ask for that plus privacy against each party;
  • interactive reductions are much more powerful because communication can reshape the parties’ views before the final local-output step.

Recently, I have also been working with colleagues in ECE and Physics on quantum secure non-interactive reductions, which aim to understand when one quantum correlation resource can be transformed into another quantum (or classical) resource without communication and while preserving the appropriate notion of security against each party. In the quantum setting, the picture is even more delicate: the relevant resources may be shared quantum states, measurements may disturb systems, side information may be entangled, and the right privacy conditions must account for quantum adversaries and quantum views rather than just classical random variables. Our framework for thinking about that problem relies heavily on the formalization of the classical or non-quantum version, which is the subject of this post. Put differently, before asking what secure non-interactive reducibility should mean for shared entangled states or quantum correlations, it is essential to first understand the classical theory: what ordinary non-interactive reductions allow, what secure non-interactive reductions forbid, and why interaction fundamentally changes the reducibility landscape. I will also prove a simple lemma showing that secure non-interactive reducibility is strictly stronger than ordinary non-interactive reducibility.

The three models

Fix a source correlation C = (X,Y) and a target correlation D = (U,V).

Non-secure non-interactive reduction (NIR / NIS)

Alice gets X^n, Bob gets Y^n, they do no communication, and they output U' = f(X^n), V' = g(Y^n), or randomized versions using local coins. The goal is for (U',V') to be close in statistical (total variation) distance to (U,V).

That is the classical non-interactive simulation perspective: only correctness of the output law matters. The SNIS paper explicitly contrasts this with SNIS, noting that NIS “only considers correctness (not security).”
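As a toy illustration of this correctness-only criterion (my own example, not drawn from the papers), here is a sketch in which a perfectly correlated source over {0,1,2,3} is converted to a perfectly correlated uniform bit purely by local maps, and the output law is compared to the target in total variation distance:

```python
# Toy NIR check (illustrative example): source C = (X, Y) with X = Y uniform
# on {0,1,2,3}; target D = (U, V) with U = V a uniform bit. Each party just
# keeps the low-order bit of its view; no communication happens.

def source_law():
    # X and Y are perfectly correlated, uniform over {0,1,2,3}.
    return {(x, x): 0.25 for x in range(4)}

def target_law():
    # U and V are perfectly correlated uniform bits.
    return {(u, u): 0.5 for u in range(2)}

f = lambda x: x & 1  # Alice's local map
g = lambda y: y & 1  # Bob's local map

def induced_law(src, f, g):
    # Push the source law through the local maps.
    out = {}
    for (x, y), p in src.items():
        key = (f(x), g(y))
        out[key] = out.get(key, 0.0) + p
    return out

def tv_distance(p, q):
    # Statistical (total variation) distance between two finite laws.
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

print(tv_distance(induced_law(source_law(), f, g), target_law()))  # 0.0
```

Here the reduction is perfect: the induced law equals the target law exactly, which is all NIR asks for.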

Secure non-interactive reduction (SNIR / SNIS)

Now Alice and Bob still do not communicate, but in addition to correctness, the reduction must hide whatever the target correlation is supposed to hide. In the formulation used in the SNIS paper, a reduction from (U,V) to (X,Y)^{\otimes n} must satisfy:

  1. Correctness: (U',V') is close (in statistical distance) to (U,V).
  2. Security against corrupt Alice: conditioned on the final outputs (u,v), Alice's view should be close to depending only on u, not on the extra value v.
  3. Security against corrupt Bob: symmetrically, Bob's view should be close to depending only on v, not on the extra value u.

This is the key strengthening. In NIR, a party may silently carry “too much knowledge” about the other party’s output, as long as the final joint distribution looks right. In SNIR, that is disallowed.

Interactive secure reductions

Here Alice and Bob may exchange messages before producing outputs. An interactive secure reduction can be seen as having two phases. First, an interaction phase transforms the parties’ views. Second, a local derivation phase maps those final views to outputs, and the security condition applies to this derivation step.

That decomposition is conceptually important: SNIR isolates the “final local secure derivation” part while forbidding the interaction that might create the right intermediate views in the first place. This is why interactive reductions can be much stronger.

Why SNIR is the right cryptographic strengthening

NIR asks “can I simulate the right distribution?”
SNIR asks “can I simulate the right distribution without the wrong leakage?”

The SNIS paper states this very plainly: in NIS, “erasing information from parties’ views… is permissible,” but that may not be cryptographically secure. SNIS was introduced precisely to capture non-interactive secure conversion of one correlation into another. This matters because many correlations that are “equivalent enough” from a purely distributional viewpoint stop being interchangeable once one insists on simulation-based privacy.

SNIR is strictly stronger than non-secure NIR

Here is a simple lemma/example that captures the basic separation.

Lemma/example

There exist correlations C and D such that:

  1. D has a perfect non-secure non-interactive reduction from C, but
  2. D has no perfect deterministic secure non-interactive reduction from C.

(Reductions here are deterministic, as in the SNIR formalization; if private local randomness is allowed, a target with independent components is trivially securely samplable from local coins alone, so no separation of this kind is possible.)

Proof

Let S and T be independent uniform bits.

Define the source correlation C = ((S,T), T).

So Alice receives the pair (S,T), while Bob receives only T.

Define the target correlation D = (S,T).

Step 1: There is a perfect non-secure non-interactive reduction

Alice outputs U' := S,

and Bob outputs V' := T.

No communication is used. Since S and T are independent uniform bits, the joint distribution of (U',V') is exactly that of (S,T), which is exactly the target correlation D.

So D has a perfect NIR from C.

Step 2: There is no perfect secure non-interactive reduction

Assume for contradiction that there were a perfect deterministic SNIR from C to D; for concreteness, work with a single copy of C (the argument extends to n copies, since Alice's view then contains all of T^n).

In the target correlation D=(S,T), Alice's output is U=S and Bob's output is V=T. Since S and T are independent, a secure realization of D must not let Alice's view reveal Bob's output beyond what is implied by her own output.

In any deterministic reduction, Alice outputs U' = f(S,T) and Bob outputs V' = g(T). Perfect correctness forces V' = g(T) to be a uniform bit, so g is a bijection on {0,1}: either g(T)=T or g(T)=1-T.

But in the source C, Alice's view is X=(S,T), and conditioned on the target outputs (U',V')=(u,v), the value T = g^{-1}(v) in her view is determined by v.

Hence the conditional law of Alice's view depends on v, i.e., on Bob's output, not merely on Alice's own output u.

This violates the security-against-Alice requirement for SNIR.

Therefore no perfect deterministic SNIR from C to D exists.

What this lemma implies

This toy example is simple, but it isolates the exact distinction.

  • In NIR, it is fine that Alice initially knows both S and T; all we asked for was the right final law.
  • In SNIR, that extra knowledge is fatal, because Alice learns Bob's target output in a way the target correlation itself does not permit.
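The separation can be verified by exhaustive enumeration. A minimal sketch (my own, using exact rational arithmetic) checks both halves for the canonical reduction U' = S, V' = T:

```python
# Exhaustive check of the lemma: Alice sees X = (S, T), Bob sees Y = T, with
# S, T independent uniform bits; the reduction outputs U' = S and V' = T.

from fractions import Fraction
from itertools import product

quarter = Fraction(1, 4)

# Joint law of (Alice's view, U', V') under the reduction.
joint = {((s, t), s, t): quarter for s, t in product(range(2), repeat=2)}

# Correctness: (U', V') is exactly uniform on {0,1}^2, i.e., equals D = (S, T).
out_law = {}
for (view, u, v), pr in joint.items():
    out_law[(u, v)] = out_law.get((u, v), Fraction(0)) + pr
assert all(out_law[(u, v)] == quarter for u, v in product(range(2), repeat=2))

# Security against Alice fails: given (U', V') = (u, v), her view is
# deterministically (u, v), so its conditional law depends on v, not only u.
def alice_view_given(u, v):
    total = sum(pr for (view, uu, vv), pr in joint.items() if (uu, vv) == (u, v))
    return {view: pr / total
            for (view, uu, vv), pr in joint.items() if (uu, vv) == (u, v)}

assert alice_view_given(0, 0) != alice_view_given(0, 1)  # leaks Bob's output
```

The final assertion is exactly the failure of the security-against-Alice condition.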

Ordinary non-interactive reduction

“Can I locally compress, discard, relabel, or threshold my view so that the joint output law matches the target?”

This is fundamentally an information-theoretic simulation question.

Secure non-interactive reduction

“Can I do that without retaining forbidden side information about the other party’s target output?”

This is a simulation-based-privacy question.

Interactive reduction

“Can I first reengineer the two views using communication, and only then derive the target securely?”

This is why interaction is stronger.


References

[1] Agarwal, Narayanan, Pathak, Prabhakaran, Prabhakaran, Rehan, “Secure Non-Interactive Reduction and Spectral Analysis of Correlations” (Eurocrypt 2022 / ePrint 2022/262), especially the spectral criterion, mirroring perspective, and incompleteness results.

[2] Bhushan, Misra, Narayanan, Prabhakaran, “Secure Non-Interactive Reducibility is Decidable” (TCC 2022 / ePrint 2022/1457), for decidability of SNIR feasibility.

[3] Khorasgani, Maji, Nguyen, “Secure Non-interactive Simulation: Feasibility and Rate” (Eurocrypt 2022), for the simulation-based definition, comparison with NIS and OWSC, and exact rate/feasibility characterizations for BSS/BES families.

(Scalable) Multiterminal Key Agreement

In cryptography, secret key agreement is usually framed as a two-party story: Alice and Bob want to agree on a shared secret while Eve listens. But many real systems are not two-party systems. They involve teams of devices, servers, users, or sensors that need to establish the same secret key, often in the presence of public discussion and potentially powerful adversaries. A recent paper of mine, with Benjamin Kim (lead student author) and Lav Varshney, asks a simple question:

Can ideas from coding theory and secret sharing give us a clean, scalable way to do multiterminal secret key agreement?

Our answer is yes! The paper develops a simple multiterminal key agreement scheme built from maximum distance separable (MDS) codes (specifically Reed-Solomon codes) and then analyzes its secrecy and rate through the lens of secret key capacity and multivariate mutual information (MMI). The key point is that the same threshold structure that makes Reed-Solomon codes powerful for secret sharing also makes them surprisingly natural for multiterminal key agreement.

The Big Picture

The paper sits at the intersection of three ideas.

First, there is secret sharing. In a threshold secret sharing scheme, a secret is split into n shares so that any k shares can reconstruct the secret, but any fewer than k reveal nothing about it. Shamir's secret sharing scheme, which can be viewed through the lens of Reed-Solomon codes, is the classic construction realizing this threshold property.
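As a toy illustration of the threshold property (my own sketch with illustrative parameters, not the paper's construction), Shamir sharing over GF(5) with threshold k = 2 is small enough to enumerate completely:

```python
# Toy enumeration of Shamir sharing over GF(5), threshold k = 2: one share
# reveals nothing about the secret; any two shares determine it.

from itertools import product

P, n = 5, 4  # field size and number of shares; threshold k = 2

def shares(secret, coeff):
    # Degree-1 polynomial f(x) = secret + coeff * x, evaluated at x = 1..n.
    return tuple((secret + coeff * x) % P for x in range(1, n + 1))

# Secrecy below threshold: over a uniform random coefficient, the first
# share is uniform on GF(5) regardless of the secret.
for secret in range(P):
    assert sorted(shares(secret, c)[0] for c in range(P)) == list(range(P))

# Reconstruction at threshold: the pair (share 1, share 2) is distinct for
# every (secret, coeff) pair, so any two shares pin down the secret.
seen = {}
for secret, coeff in product(range(P), repeat=2):
    pair = shares(secret, coeff)[:2]
    assert pair not in seen
    seen[pair] = secret
```

The same two facts, uniformity below the threshold and unique reconstruction at it, are what the MDS structure provides at scale.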

Second, there is secret key agreement (SKA). In the multiterminal SKA setting, multiple parties start with correlated private observations, publicly discuss information over an authenticated public channel, and aim to end with the same shared key while leaking essentially nothing useful to an eavesdropper. The information-theoretic performance limit is measured by the secret key capacity.

Third, there is coding theory. Error-correcting codes already encode redundancy in a structured way. MDS codes are optimal in a precise sense: they achieve the maximum possible distance for their block length and dimension. That is, they give the strongest threshold-reconstruction behavior for a given amount of redundancy.

What our paper does is connect these three threads in a particularly direct way: use an MDS codeword as the object being partially distributed across terminals, then exploit the threshold property to guarantee recoverability for the legitimate users and secrecy against the wiretapper.

Why this Connection is Interesting

At a high level, secret sharing and multiterminal key agreement share many goals and techniques.

In secret sharing, you start with a secret and want to distribute it safely across many users.

In multiterminal key agreement, you start with distributed private information and want users to end up with a shared secret key.

These are not identical tasks, but they share the same core geometry: some subsets of information should be enough to reconstruct, and smaller subsets should reveal nothing. The paper’s main conceptual contribution is to push this analogy far enough to get an explicit multiterminal SKA protocol out of it. (The abstract states this clearly: the work explores the connection between secret sharing and secret key agreement, yielding “a simple and scalable multiterminal key agreement protocol” based on Reed-Solomon codes with threshold reconstruction.)

The threshold semantics of MDS codes are exactly the right structure for a scalable multiterminal key agreement design.

The Model

The paper works in the standard multiterminal communication setup. There is a set of terminals V, a subset A \subseteq V of active users who must recover the final key, and possibly helpers in V \setminus A. Each terminal i has access to a private correlated source Z_i, and everyone can participate in public discussion, which an eavesdropper also sees. The key must satisfy two conditions:

  • Secrecy: the public discussion should reveal essentially no information about the key.
  • Recoverability: every active user should be able to reconstruct the key from their private source and the public discussion.

The paper also recalls the standard definition of secret key capacity, the maximum achievable key rate over all allowed protocols in this model.

The Coding Idea

Now for the protocol idea.

Suppose we take a secret and pad it with additional uniform randomness, forming a vector K=(s, u_1, \ldots, u_{k-1}),

where s is the actual secret and the u_i‘s are random pads. Then we encode K using an (n, k) Reed-Solomon code with generator matrix G, obtaining a codeword Z=KG. Each terminal receives one symbol z_i​ of this codeword, and then the protocol publicly reveals exactly k-1 additional symbols. Since any k symbols of an (n,k) MDS code suffice to reconstruct the message, each legitimate terminal can combine its private symbol with the k-1 public symbols and recover the full encoded message K.
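A minimal sketch of this encode-and-reconstruct step, assuming the Reed-Solomon code is realized as polynomial evaluation over a small prime field (the prime 257 and the (n, k) = (6, 3) parameters are illustrative choices of mine, not the paper's):

```python
# Encode K = (s, u_1, ..., u_{k-1}) as coefficients of a degree-(k-1)
# polynomial over GF(257); the codeword symbol z_i is the evaluation at i.

import random

P = 257
n, k = 6, 3

def encode(secret):
    # Secret plus uniform padding, used as polynomial coefficients.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in range(1, n + 1)]  # z_1, ..., z_n

def reconstruct_secret(points):
    # Lagrange interpolation at 0 from any k pairs (x_i, z_i) recovers s.
    total = 0
    for xi, zi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + zi * num * pow(den, P - 2, P)) % P
    return total

s = 42
z = encode(s)
# One private symbol plus k - 1 = 2 public symbols suffice to recover K.
assert reconstruct_secret([(1, z[0]), (3, z[2]), (6, z[5])]) == s
```

Any k of the n symbols work in the last step, which is exactly the MDS threshold property.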

This is the reconstruction side.

On the secrecy side, the protocol masks the privately delivered shares using independent random values r_i. In the “secret-sharing-inspired” version of the scheme, terminal 1 shares a distinct random value r_i with each terminal i and broadcasts e_i = z_i + r_i. Each intended recipient can subtract its own mask r_i and recover the corresponding code symbol, but the eavesdropper only sees the masked version.
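The masking step is one-time-pad addition over the code's field (a sketch with an illustrative prime, matching the toy alphabet above):

```python
# e_i = z_i + r_i is broadcast; since the pad r_i is uniform and independent
# of z_i, the broadcast e_i alone is uniform, revealing nothing about z_i.

import random

P = 257

def mask(z_i, r_i):
    return (z_i + r_i) % P   # broadcast publicly as e_i

def unmask(e_i, r_i):
    return (e_i - r_i) % P   # only the holder of the pad r_i can do this

z_i = 123
r_i = random.randrange(P)    # pre-shared pad between terminal 1 and terminal i
e_i = mask(z_i, r_i)
assert unmask(e_i, r_i) == z_i
```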

So the core protocol can be summarized as follows:

  1. Encode the secret-plus-padding with an MDS code,
  2. Privately give each user one share (masked),
  3. Publicly reveal exactly k-1 additional shares,
  4. Let each legitimate user reconstruct,
  5. Rely on the threshold property to ensure that the eavesdropper still stays below the reconstruction threshold.

The Main Security Statements

In the paper, we show that the eavesdropper learns nothing about the secret from the entire visible transcript.

The first main theorem captures the information-theoretic secrecy of this setup. There is also a computational variant. If one uses an IND-CPA-secure public-key encryption scheme to distribute the shares to terminals, then the protocol becomes computationally secure against PPT adversaries rather than information-theoretically secure against unbounded ones. That computational extension is practically important because it shows the scheme is not tied to idealized pre-shared masks. If you are willing to step down from unconditional secrecy to computational secrecy, you can distribute shares using standard encrypted channels.

If I had to summarize the paper in one sentence, I would say this:

It shows that MDS codes are not just a convenient implementation tool for multiterminal key agreement; they are a structural bridge between secret sharing, error correction, and SKA capacity.

For further details, take a look at the paper. Feedback/comments are welcome!

Composition of Privacy Guarantees: Classical and Quantum

I try to always consider the classical alternative to any quantum computation or quantum information-theoretic primitive. This is a deliberate choice. I am not a pure quantum theorist in the sense of studying quantum models in isolation, nor am I interested in quantum advantage as an article of faith. Rather, my goal is to delineate (as precisely as possible) the boundary between what classical and quantum theories can guarantee, especially when privacy guarantees are composed over time, across mechanisms, or between interacting systems.

In the context of privacy, composition is where theory meets reality: real systems are never single-shot. They involve repeated interactions, adaptive adversaries, and layered mechanisms. Quantum information introduces new phenomena (entanglement, non-commutativity, and measurement disturbance) that complicate classical intuitions about composition. At the same time, classical privacy theory has developed remarkably robust tools that often remain surprisingly competitive, even when quantum resources are allowed.

The guiding question of this post is therefore not “What can quantum systems do that classical ones cannot?” but rather:

When privacy guarantees are composed, what genuinely changes in the transition from classical to quantum, and what does not?

By keeping classical alternatives explicitly in view, we can better understand which privacy phenomena are inherently quantum, which are artifacts of modeling choices, and which reflect deeper structural principles that transcend the classical vs. quantum divide.

Classical Composition of Differential Privacy

Recall the definition of differential privacy:

Approximate Differential Privacy
Let \mathcal{X} denote the data universe and let \mathcal{D} \subseteq \mathcal{X}^n be the set of datasets.
Two datasets D,D'\in\mathcal{D} are called neighbors, denoted D\sim D', if they differ in the data of exactly one individual.

A (possibly randomized) algorithm \mathcal{M} : \mathcal{D} \to (\mathcal{Y},\mathcal{F}) is said to be
(\varepsilon,\delta)-differentially private if for all neighboring datasets D\sim D' and all measurable events
S \in \mathcal{F},
\Pr[\mathcal{M}(D)\in S] \;\le\; e^{\varepsilon}\Pr[\mathcal{M}(D')\in S] + \delta.
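As a concrete sanity check (a standard toy mechanism, with eps = ln 3 as my illustrative choice), randomized response on a single bit satisfies this definition with \delta = 0, and the inequality can be verified exactly by enumerating every event:

```python
# Randomized response: report the true bit with probability
# e^eps / (1 + e^eps), else flip it. Check the (eps, 0)-DP inequality
# for all neighboring one-bit datasets and all measurable events.

import math

eps = math.log(3)                        # truthful with probability 3/4
p_true = math.exp(eps) / (1 + math.exp(eps))

def law(bit):
    # Output distribution of the mechanism on a one-bit "dataset".
    return {bit: p_true, 1 - bit: 1 - p_true}

for d, d_prime in [(0, 1), (1, 0)]:      # the two neighboring datasets
    for event in [set(), {0}, {1}, {0, 1}]:
        pr = sum(law(d)[y] for y in event)
        pr_prime = sum(law(d_prime)[y] for y in event)
        assert pr <= math.exp(eps) * pr_prime + 1e-12
```

The singleton event on the true bit makes the inequality tight, which is why this value of p_true is the natural calibration.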

Basic composition for approximate differential privacy is a standard result, covered in several references and textbooks. We recall the statement:

Theorem (Basic sequential composition for approximate differential privacy)
Fix k\in\mathbb{N}. For each i\in\{1,\ldots,k\} let \mathcal{M}_i be a (possibly randomized) algorithm that, on input a dataset D, outputs a random variable in some measurable output space (\mathcal{Y}_i,\mathcal{F}_i).
Assume that for every i, \mathcal{M}_i is (\varepsilon_i,\delta_i)-differentially private.

Define the k-round interactive (sequential) mechanism \mathcal{M} as follows: on input D, for i=1,\ldots,k, it outputs Y_i \leftarrow \mathcal{M}_i (D; Y_1,\ldots,Y_{i-1}),
where \mathcal{M}_i(\cdot; y_{<i}) denotes the ith mechanism possibly chosen adaptively as a (measurable) function of the past transcript y_{<i}=(y_1,\ldots,y_{i-1}).
Let Y=(Y_1,\ldots,Y_k) denote the full transcript in the product space
(\mathcal{Y},\mathcal{F}) := \prod_{i=1}^k (\mathcal{Y}_i,\mathcal{F}_i).

Then \mathcal{M} is \left(\sum_{i=1}^k \varepsilon_i,\ \sum_{i=1}^k \delta_i\right)-differentially private.

In particular, if \varepsilon_i=\varepsilon and \delta_i=\delta for all i, then \mathcal{M} is (k\varepsilon, k\delta)-differentially private.
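The k-fold special case can be checked numerically on the randomized-response toy mechanism (my own sketch): composing k independent copies of \varepsilon-DP randomized response yields worst-case privacy loss exactly k\varepsilon over the transcripts.

```python
# Numerical check of basic composition: run k independent rounds of eps-DP
# randomized response on the same bit and bound the worst-case likelihood
# ratio over all 2^k transcripts by e^{k * eps}.

import math
from itertools import product

eps = math.log(3)
p = math.exp(eps) / (1 + math.exp(eps))  # per-round truthful probability
k = 4

def transcript_prob(bit, transcript):
    # Rounds are independent, each reporting `bit` truthfully with prob p.
    return math.prod(p if y == bit else 1 - p for y in transcript)

worst_ratio = max(
    transcript_prob(0, t) / transcript_prob(1, t)
    for t in product(range(2), repeat=k)
)
# Basic composition: the composed mechanism is (k*eps, 0)-DP; for this
# mechanism the bound is tight (the all-zeros transcript attains it).
assert math.log(worst_ratio) <= k * eps + 1e-9
```

The proof behind the theorem is essentially this computation done in general: the scalar privacy-loss random variable adds across rounds.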

What happens in the quantum setting?

Composition of Quantum Differential Privacy

A central classical DP intuition, set up above, is: once you have per-step privacy bounds, you can stack them, and in the simplest form the parameters add, e.g., (\varepsilon, \delta) across rounds. In the quantum world, however, DP is commonly defined operationally against arbitrary measurements, and this makes the usual classical composition proofs, which rely on a scalar privacy-loss random variable, no longer directly applicable.

In a recent work, Theshani Nuradha and I show two complementary points, one negative (a barrier) and one positive:

  1. Composition can fail in full generality for approximate QDP (POVM-based).
    We show that if you allow correlated joint implementations when combining mechanisms/channels, then “classical-style” composition need not hold: even channels that are “individually perfectly private” can lose privacy drastically when composed in this fully general way.
  2. Composition can be restored under explicit structural assumptions.
    Then we identify a regime where you can recover clean composition statements: tensor-product channels acting on product neighboring inputs. In that regime, we propose a quantum moments accountant built from an operator-valued notion of privacy loss and a matrix moment-generating function (MGF).
  3. How we get operational guarantees (despite a key obstacle).
    A subtlety we highlight: the Rényi-type divergence we consider for the moments accountant does not satisfy a data-processing inequality. Nevertheless, we prove that controlling appropriate moments is still enough to upper bound measured Rényi divergence, which does correspond to operational privacy against arbitrary measurements.
  4. End result: advanced-composition-style behavior (in the right setting).
    Under those structural assumptions, the paper obtains advanced-composition-style bounds with the same leading-order behavior as in classical DP, i.e., you can once again reason modularly about long pipelines, but only after carefully stating what “composition” means (joint, tensor-product, or factorized) physically and operationally in the quantum setting.

Check out the paper. Feedback/comments are welcome!