Composition of Privacy Guarantees: Classical and Quantum

I try to always consider the classical alternative to any quantum computation or quantum information-theoretic primitive. This is a deliberate choice. I am not a pure quantum theorist in the sense of studying quantum models in isolation, nor am I interested in quantum advantage as an article of faith. Rather, my goal is to delineate (as precisely as possible) the boundary between what classical and quantum theories can guarantee, especially when privacy guarantees are composed over time, across mechanisms, or between interacting systems.

In the context of privacy, composition is where theory meets reality: real systems are never single-shot. They involve repeated interactions, adaptive adversaries, and layered mechanisms. Quantum information introduces new phenomena (entanglement, non-commutativity, and measurement disturbance) that complicate classical intuitions about composition. At the same time, classical privacy theory has developed remarkably robust tools that often remain surprisingly competitive, even when quantum resources are allowed.

The guiding question of this post is therefore not “What can quantum systems do that classical ones cannot?” but rather:

When privacy guarantees are composed, what genuinely changes in the transition from classical to quantum, and what does not?

By keeping classical alternatives explicitly in view, we can better understand which privacy phenomena are inherently quantum, which are artifacts of modeling choices, and which reflect deeper structural principles that transcend the classical vs. quantum divide.

Classical Composition of Differential Privacy

Recall the definition of differential privacy:

Approximate Differential Privacy
Let \mathcal{X} denote the data universe and let \mathcal{D} \subseteq \mathcal{X}^n be the set of datasets.
Two datasets D,D'\in\mathcal{D} are called neighbors, denoted D\sim D', if they differ in the data of exactly one individual.

A (possibly randomized) algorithm \mathcal{M} : \mathcal{D} \to (\mathcal{Y},\mathcal{F}) is said to be
(\varepsilon,\delta)-differentially private if for all neighboring datasets D\sim D' and all measurable events
S \in \mathcal{F},
\Pr[\mathcal{M}(D)\in S] \;\le\; e^{\varepsilon}\Pr[\mathcal{M}(D')\in S] + \delta.
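
For concreteness, here is a minimal illustration of the definition (my own sketch, not part of the formal development): the Laplace mechanism applied to a counting query. A counting query has sensitivity 1, so adding Laplace noise of scale 1/\varepsilon yields (\varepsilon, 0)-differential privacy.

```python
import numpy as np

# Minimal illustration of the definition above (a sketch, not a proof):
# a counting query has sensitivity 1, so adding Laplace(1/eps) noise
# gives (eps, 0)-differential privacy.

def laplace_count(dataset, predicate, eps: float, rng=None):
    """Release a noisy count of records satisfying `predicate` under eps-DP."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for row in dataset if predicate(row))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / eps)

# On neighboring datasets the true counts differ by at most 1, so the output
# densities differ by a factor of at most e^eps, matching the definition.
```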

Basic composition for differential privacy is a standard result, established in several references and textbooks. We recall the statement:

Theorem (Basic sequential composition for approximate differential privacy)
Fix k\in\mathbb{N}. For each i\in\{1,\ldots,k\} let \mathcal{M}_i be a (possibly randomized) algorithm that, on input a dataset D, outputs a random variable in some measurable output space (\mathcal{Y}_i,\mathcal{F}_i).
Assume that for every i, \mathcal{M}_i is (\varepsilon_i,\delta_i)-differentially private.

Define the k-round interactive (sequential) mechanism \mathcal{M} as follows: on input D, for i=1,\ldots,k, it outputs Y_i \leftarrow \mathcal{M}_i (D; Y_1,\ldots,Y_{i-1}),
where \mathcal{M}_i(\cdot; y_{<i}) denotes the ith mechanism possibly chosen adaptively as a (measurable) function of the past transcript y_{<i}=(y_1,\ldots,y_{i-1}).
Let Y=(Y_1,\ldots,Y_k) denote the full transcript in the product space
(\mathcal{Y},\mathcal{F}) := \prod_{i=1}^k (\mathcal{Y}_i,\mathcal{F}_i).

Then \mathcal{M} is \left(\sum_{i=1}^k \varepsilon_i,\ \sum_{i=1}^k \delta_i\right)-differentially private.

In particular, if \varepsilon_i=\varepsilon and \delta_i=\delta for all i, then \mathcal{M} is (k\varepsilon, k\delta)-differentially private.
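
As a sketch of the bookkeeping the theorem licenses (illustrative only, not a proof), one can run k adaptively chosen Laplace mechanisms and simply add the per-round budgets:

```python
import numpy as np

# Illustrative sketch of basic sequential composition (assumed setup: each
# round releases a sensitivity-1 query via the Laplace mechanism, and the
# query may be chosen adaptively from the transcript of earlier outputs).

def sequential_composition(dataset, choose_query, eps_per_round, rng=None):
    """Return the transcript and the total (basic-composition) epsilon."""
    rng = rng or np.random.default_rng()
    transcript, total_eps = [], 0.0
    for eps_i in eps_per_round:
        query = choose_query(transcript)          # adaptive choice from past outputs
        noisy = query(dataset) + rng.laplace(scale=1.0 / eps_i)
        transcript.append(noisy)
        total_eps += eps_i                        # basic composition: budgets add
    return transcript, total_eps
```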

What happens in the quantum setting?

Composition of Quantum Differential Privacy

A central classical-DP intuition we have already set up is that per-step privacy bounds can be stacked: in the simplest form the parameters simply add, e.g., (\varepsilon, \delta) across rounds. In the quantum world, however, DP is commonly defined operationally against arbitrary measurements, and this makes the usual classical composition proofs, which rely on a scalar privacy-loss random variable, no longer directly applicable.

In a recent work, Theshani Nuradha and I make the following complementary points, the first negative (a barrier) and the rest positive:

  1. Composition can fail in full generality for approximate QDP (POVM-based).
    We show that if you allow correlated joint implementations when combining mechanisms/channels, then “classical-style” composition need not hold: even channels that are “individually perfectly private” can lose privacy drastically when composed in this fully general way.
  2. Composition can be restored under explicit structural assumptions.
    We identify a regime where clean composition statements can be recovered: tensor-product channels acting on product neighboring inputs. In that regime, we propose a quantum moments accountant built from an operator-valued notion of privacy loss and a matrix moment-generating function (MGF); for intuition, a sketch of the classical scalar accountant it generalizes appears after this list.
  3. How we get operational guarantees (despite a key obstacle).
    A subtlety we highlight: the Rényi-type divergence we consider for the moments accountant does not satisfy a data-processing inequality. Nevertheless, we prove that controlling appropriate moments is still enough to upper bound measured Rényi divergence, which does correspond to operational privacy against arbitrary measurements.
  4. End result: advanced-composition-style behavior (in the right setting).
    Under those structural assumptions, the paper obtains advanced-composition-style bounds with the same leading-order behavior as in classical DP. That is, you can once again reason modularly about long pipelines, but only after carefully stating what “composition” means (joint, tensor-product, or factorized) physically and operationally in the quantum setting.
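
For comparison, here is the classical, scalar moments-accountant bookkeeping that the operator-valued construction generalizes. This is a minimal sketch for the Gaussian mechanism with its standard Rényi-DP parameters, not the quantum accountant from the paper: per-round Rényi bounds add under composition and are then converted into an (\varepsilon, \delta) guarantee.

```python
import math

# Classical (scalar) moments-accountant bookkeeping, sketched for the Gaussian
# mechanism (assumed: sensitivity-1 queries with noise scale sigma, so the
# Renyi-DP bound of one round at order alpha is alpha / (2 * sigma**2)).
# This is the scalar baseline that an operator-valued quantum accountant
# would generalize; it is not the quantum construction from the paper.

def gaussian_rdp(alpha: float, sigma: float) -> float:
    """Renyi-DP parameter of one Gaussian mechanism at order alpha."""
    return alpha / (2.0 * sigma ** 2)

def compose_and_convert(k: int, sigma: float, delta: float,
                        alphas=(1.5, 2, 4, 8, 16, 32, 64)) -> float:
    """Compose k rounds in RDP (bounds add), then convert to (eps, delta)-DP."""
    best_eps = float("inf")
    for alpha in alphas:
        rdp_total = k * gaussian_rdp(alpha, sigma)                 # moments add
        eps = rdp_total + math.log(1.0 / delta) / (alpha - 1.0)    # standard conversion
        best_eps = min(best_eps, eps)
    return best_eps

# e.g., 100 sensitivity-1 Gaussian releases with sigma = 4 and delta = 1e-5:
# print(compose_and_convert(k=100, sigma=4.0, delta=1e-5))
```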

Check out the paper. Feedback/comments are welcome!

The Usefulness of “Useless” Knowledge (and Why AI Makes Flexner Even More Right)

I just finished reading The Usefulness of Useless Knowledge again, this time with the perspective of living through a period of rapid technological acceleration driven by AI. On an earlier reading, Flexner’s defense of curiosity-driven inquiry felt aspirational and almost moral in tone, a principled argument for intellectual freedom. On rereading, it feels more diagnostic. Many of the tensions he identified (between short-term utility and long-term understanding, between institutional incentives and genuine discovery) now play out daily in how we fund, evaluate, and deploy AI research. What has changed is not the structure of his argument, but its urgency: in a world increasingly optimized for immediate outputs, Flexner’s insistence that transformative advances often arise from questions with no obvious application reads less like an idealistic manifesto and more like a practical warning.

In 1939, on the eve of a world war, Abraham Flexner published a slim, stubbornly optimistic essay with a mischievous title: The Usefulness of Useless Knowledge. His claim is not that practical work is bad. It’s that the deep engine of civilization is often curiosity that doesn’t start with an application in mind, and that trying to force every idea to justify itself immediately is a reliable way to stop the next revolution before it begins.

Robbert Dijkgraaf’s companion essay (and related pieces written from his vantage point at the Institute for Advanced Study) updates Flexner’s argument for a world that is now built out of microelectronics, networks, and software; this is exactly the substrate on which modern AI sits. Reading them together today feels like watching two people describe the same phenomenon across two eras: breakthroughs are usually the delayed interest on “useless” questions.

Below is a guided tour of their core ideas, with a detour through the current AI moment, where “useless” knowledge is quietly doing most of the work.


Flexner’s central paradox: curiosity first, usefulness later

Flexner’s essay is a defense of a particular kind of intellectual freedom: the right to pursue questions without writing an ROI memo first.

Dijkgraaf highlights one of Flexner’s most quoted lines (and the one that best captures the whole stance): “Curiosity… is probably the outstanding characteristic of modern thinking… and it must be absolutely unhampered.”

That “must” is doing a lot of work. Flexner isn’t saying that applications are optional. He’s saying the route to them is often non-linear and hard to predict. He even makes the institutional point: a research institute shouldn’t justify itself by promising inventions on a timeline. Instead: “We make ourselves no promises… [but] cherish the hope that the unobstructed pursuit of useless knowledge” will matter later.

Notice the subtlety: he hopes it will matter, but he refuses to make that the official rationale. Why? Because if you only fund what looks useful today, you’ll underproduce the ideas that define tomorrow.


The “Mississippi” model of discovery (and why it matters for AI)

Flexner is unusually modern in how he describes the innovation pipeline: not as single geniuses striking gold, but as a long chain of partial insights that only later “click.”

He writes: “Almost every discovery has a long and precarious history… Science… begins in a tiny rivulet… [and] is formed from countless sources.”

This is basically an antidote to the myth that research can be managed like a factory. You can optimize a pipeline once you know what the pipeline is. But when you’re still discovering what questions are even coherent, “efficiency” often means “premature narrowing.”

AI is a perfect example of the Mississippi model. Modern machine learning is not one idea; it’s a confluence:

  • mathematical statistics + linear algebra,
  • optimization + numerical computing,
  • information theory + coding,
  • neuroscience metaphors + cognitive science,
  • hardware advances + systems engineering,
  • and now massive-scale data and infrastructure.

Much of that was, at some point, “not obviously useful” until it suddenly was.


Flexner’s warning: the real enemy is forced conformity

Flexner’s defense of “useless knowledge” is not only about technology; it’s about human freedom. He’s writing in a period where universities were being pushed into ideological service, and he argues that the gravest threat is not wrong ideas, but the attempt to prevent minds from ranging freely.

One of his sharpest lines: “The real enemy… is the man who tries to mold the human spirit so that it will not dare to spread its wings.”

If you read that in 2025, it lands uncomfortably close to modern pressures on research:

  • “Only fund what’s immediately commercial.”
  • “Only publish what’s trendy.”
  • “Only study what aligns with the current institutional incentive gradient.”
  • “Only build what can be shipped next quarter.”

And in AI specifically:

  • “Only do work that scales.”
  • “Only do benchmarks.”
  • “Only do applied product wins.”

Flexner isn’t anti-application; he’s anti-premature closure.


Dijkgraaf’s update: society runs on knowledge it can’t fully see anymore

Dijkgraaf’s companion essay takes Flexner’s stance and says, essentially: look around, Flexner won. The modern world is built out of the long tail of basic research.

He gives a crisp late-20th-century example: the World Wide Web began as a collaboration tool for particle physicists at CERN (introduced in 1989, made public in 1993). He ties that to the evolution of grid and cloud computing developed to handle scientific data, technology that now undergirds everyday internet services. Then he makes a claim that matters a lot for AI policy debates: fundamental advances are public goods (i.e., they diffuse beyond any single lab or nation). That’s an especially relevant lens for AI, where:

  • open ideas (architectures, optimization tricks, safety methods) propagate fast,
  • but compute, data, and deployment concentrate power.

If knowledge is a public good, then a society that starves basic research is quietly selling off its future, even if it still “uses” plenty of science in the present.


AI as a case study in “useful uselessness”

Here’s a helpful way to read Flexner in the age of AI:

A) “Useless” questions that became AI infrastructure

Many of the questions that shaped AI looked abstract or niche before they became inevitable:

  • How do high-dimensional models generalize?
  • When does overparameterization help rather than hurt?
  • What is the geometry of optimization landscapes?
  • How can representation learning capture structure without labels?
  • What are the limits of compression, prediction, and inference?

These don’t sound like product requirements. They sound like “useless” theory, until you realize they govern whether your model trains at all, whether it’s robust, whether it leaks private data, whether it can be aligned, and whether it fails safely.

Flexner’s point isn’t that every abstract question pays off. It’s that you can’t pre-identify the ones that will, and trying to do so narrows the search too early.

B) “Tool-making” is often the hidden payoff

Dijkgraaf emphasizes that pathbreaking research yields tools and techniques in indirect ways.
AI progress has been exactly this: tool-making (optimizers, architectures, pretraining recipes, eval frameworks, interpretability methods, privacy-preserving techniques) that later becomes the platform everyone builds on.

C) The scary twist: usefulness for good and bad

Flexner also notes that discoveries can become instruments of destruction when repurposed. He uses chemical and aviation examples to make the point.

AI has the same dual-use character:

  • The same generative model family can draft medical summaries or automate phishing.
  • The same computer vision advances can improve accessibility or expand surveillance.
  • The same inference tools can find scientific patterns or extract sensitive attributes.

Flexner’s framework doesn’t solve dual-use, but it forces honesty: the ethical challenge isn’t a reason to stop curiosity; it’s a reason to pair curiosity with governance, norms, and safeguards.


A Flexnerian reading of the current AI funding wave

We’re currently living through a paradox that Flexner would recognize instantly:

  1. AI is showered with investment because it’s visibly useful now.
  2. That investment creates pressure to define “research” as whatever improves next quarter’s metrics.
  3. But the next conceptual leap in AI may come from areas that look “useless” relative to today’s dominant paradigm.

If you want better long-horizon AI outcomes (robustness, interpretability, privacy, security, alignment, and scientific discovery), Flexner would argue, you need institutions that protect inquiry that isn’t instantly legible as profitable.

Or in his words, you need “spiritual and intellectual freedom.”


What to do with this (three practical takeaways)

1) Keep a portfolio: fast product work + slow foundational work

Treat research like an ecosystem. If everything must justify itself immediately, you get brittle progress. Flexner’s “no promises” stance is a feature, not a bug.

2) Reward questions, not only answers

Benchmarks matter, but they can also overfit the field’s imagination. Some of the most important AI work right now is about re-framing the question (e.g., what counts as “understanding,” what counts as “alignment,” what counts as “privacy,” what counts as “truthfulness”).

3) Build institutions that protect intellectual risk

Flexner designed the Institute for Advanced Study around the idea that scholars “accomplish most when enabled” to pursue deep work with minimal distraction.
AI needs its own versions of that: spaces where the incentive is insight, not velocity.


AI is not an argument against Flexner (it’s his exhibit A)

If you hold a smartphone, use a search engine, or interact with modern AI systems, you’re touching the compounded returns of yesterday’s “useless” knowledge.

Flexner’s defense isn’t sentimental. It’s strategic: a society that wants transformative technology must also want the conditions that produce it, namely freedom, patience, and room for ideas that don’t yet know what they’re for. Or, as Dijkgraaf puts it in summarizing Flexner’s view, fundamental inquiry goes to the “headwaters,” and applications follow, slowly, steadily, and often surprisingly.


Main Source: https://www.ias.edu/ideas/2017/dijkgraaf-usefulness

From Postdoc Notes to a Full Textbook

During the 2024–2025 academic year, I decided to start writing detailed lecture notes on Topics in Information-Theoretic Cryptography (https://dacesresearch.org/infocrypto/). At the time, I was still thinking about research (e.g., in differential privacy, zero-knowledge, and information-theoretic security more broadly) while also preparing to transition into my faculty role at UIUC.

During that period, I started drafting early versions of the lecture notes that would eventually form the backbone of my Fall 2025 graduate course at UIUC. These weren’t intended to be a book (at least, not at first). They were simply my attempt to consolidate ideas I was using in my research (from fingerprinting lower bounds to statistical zero-knowledge to watermarking generative models) into a cohesive pedagogical narrative.

I experimented heavily with new ways to explain familiar concepts. I rewrote some proofs repeatedly. I paired classical topics (e.g., the One-Time Pad and Shannon entropy) with modern concerns such as data-market privacy risks, statistical attacks on machine learning models, and quantum-era cryptographic threats.

By the time I arrived at UIUC in Fall 2025, the notes had already grown into something far larger than a lecture packet. Teaching the course from these notes, and expanding them week after week, revealed that they could become more than supplementary material. Maybe these notes could become a full textbook?

This blog post is a reflection on that journey: how the material grew, what the book covers, and the many people and institutions who made it possible.

Reflections on the Process

1. Writing revealed connections I hadn’t noticed before.

Integrating ZK, DP, MPC, and quantum topics forced me to articulate the conceptual threads uniting them.

2. Student questions shaped the clarity of the exposition.

When multiple students struggled with the same definition, I rewrote it. Many of those improved explanations are now part of the book.

3. Compiling a textbook is a creative research act.

Several new lemmas, interpretations, and frameworks arose during the writing (simply from trying to explain concepts more cleanly).

Book Chapters

Compiling the textbook required reorganizing an entire semester’s worth of evolving lecture notes into a coherent structure that, I hope, could guide a reader from the basics of probability to the frontiers of modern security. Below is a thematic overview of how the chapters came together.

1. Foundations

The book opens with a modern introduction to cryptography, revisiting the motivations, core goals, and roles of secrecy, randomness, and adversaries. It then transitions through a detailed review of probability (expectation, independence, conditional distributions) and into essential tools from information theory.

I believe this foundation anchors the rest of the text and supports the many advanced topics that follow.

2. Attacks That Motivate the Theory

A distinctive early feature of the book is its chapter on attacks, including:

  • reconstruction attacks
  • chosen-plaintext and side-channel attacks
  • valuation attacks in data markets

These examples provide students with an intuitive understanding of what must be defended and why theory matters.

3. Differential Privacy: From Basics to RDP and Hypothesis Testing

DP occupies several chapters, covering:

  • Laplace and Gaussian mechanisms
  • composition theorems
  • Rényi DP
  • DP-SGD
  • framing DP through the lens of hypothesis testing

This was one of the most extensive parts of the rewriting process, as I attempted to unify multiple strands of the privacy literature into one narrative.

4. Lower Bounds in Differential Privacy

Another major contribution of the book is its treatment of lower bounds:

  • packing arguments
  • fingerprinting codes
  • mutual-information-based bounds
  • connections to group privacy

These tools help readers understand the inherent limitations of privacy guarantees.

5. Statistical Estimation, Testing, and Machine Learning Under DP

Later chapters connect DP mechanisms to classical statistical tasks:

  • mean/variance estimation
  • linear regression
  • hypothesis testing
  • utility tradeoffs

Each topic demonstrates how information-theoretic reasoning guides algorithm design.

6. Privacy in Distributed Systems: LDP, Shuffling, MPC, FL

This chapter weaves together local differential privacy and secure multiparty computation—two topics rarely unified in a single textbook:

  • randomized response and k-ary LDP
  • shuffle model and ESA
  • MPC definitions and protocols
  • secure summation
  • federated learning with DP

7–10. Zero-Knowledge Proofs and Information-Theoretic Proof Systems

These chapters form a complete narrative arc:

  • classical ZK protocols (3-coloring, GI)
  • statistical zero-knowledge and SZK-complete problems
  • multi-verifier SZK
  • ZK over secret-shared data
  • linear PCPs and IOPs
  • polynomial commitments and inner-product arguments

11. Multi-Party Differential Privacy

A modern and emerging topic, combining cryptographic and information-theoretic privacy:

  • adversary models
  • distributed noise-addition protocols
  • MPC-based DP
  • simulation and composition theorems

This chapter, in my opinion, is one of the most forward-looking in the book. (I have some active research projects in this space.)

12. Quantum Cryptography

A full chapter on quantum mechanics and its cryptographic implications, featuring:

  • the photon-polaroid experiment
  • superposition, entanglement, and measurement
  • Shor’s algorithm
  • QKD (BB84)
  • pure vs. mixed states

This chapter offers both intuitive and formal perspectives.

13. Watermarking, Steganography, and AI Content

The final chapter bridges classical information hiding with generative AI:

  • perceptual models and robustness
  • spread-spectrum and QIM watermarking
  • deep-learning-based steganography
  • watermarking of large generative models
  • pseudorandomness used for sampling

This connects the field’s classical roots to current and future security challenges.

Acknowledgements

I developed the bulk of the course materials for the accompanying course during my postdoc, while supported by a Simons Junior Fellowship from the Simons Foundation (965342, D.A.). I am deeply grateful for this support; it gave me the intellectual space to design the course, think deeply about its structure, and begin drafting what would become this book.

This book would not have been possible without the support of my colleagues at UIUC, especially in the Department of Electrical and Computer Engineering. Many colleagues provided helpful feedback while I was developing the materials, attended some class sessions where I tested parts of the exposition, or offered valuable insights on how to structure complex topics such as zero-knowledge proofs, differential privacy, and information-theoretic analyses. Their encouragement and technical discussions greatly shaped the final form of the text.

I will, most likely, update the textbook every time I teach a subset of the topics covered!