(Scalable) Multiterminal Key Agreement

In cryptography, secret key agreement is usually framed as a two-party story: Alice and Bob want to agree on a shared secret while Eve listens. But many real systems are not two-party systems. They involve teams of devices, servers, users, or sensors that need to establish the same secret key, often in the presence of public discussion and potentially powerful adversaries. A recent paper of mine, with Benjamin Kim (lead student author) and Lav Varshney, asks a simple question:

Can ideas from coding theory and secret sharing give us a clean, scalable way to do multiterminal secret key agreement?

Our answer is yes! The paper develops a simple multiterminal key agreement scheme built from maximum distance separable (MDS) codes (specifically Reed-Solomon codes) and then analyzes its secrecy and rate through the lens of secret key capacity and multivariate mutual information (MMI). The key point is that the same threshold structure that makes Reed-Solomon codes powerful for secret sharing also makes them surprisingly natural for multiterminal key agreement.

The Big Picture

The paper sits at the intersection of three ideas.

First, there is secret sharing. In a threshold secret sharing scheme, a secret is split into n shares so that any k shares can reconstruct the secret, but any fewer than k reveal nothing about it. Shamir's secret sharing scheme, which can be viewed as an application of Reed-Solomon codes, is the classic construction realizing this threshold property.
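As a concrete illustration, here is a minimal sketch of (k, n) Shamir secret sharing in Python over a small prime field. The field size and function names are my own choices for illustration, not a production implementation or the paper's notation:

```python
# Toy (k, n) Shamir secret sharing over a small prime field.
# PRIME, make_shares, and reconstruct are illustrative names.
import random

PRIME = 2**13 - 1  # 8191, a small prime chosen only for demonstration

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret
```

Any k of the n shares pin down the unique degree-(k - 1) polynomial, while k - 1 shares are consistent with every possible secret, which is exactly the threshold behavior described above.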

Second, there is secret key agreement (SKA). In the multiterminal SKA setting, multiple parties start with correlated private observations, publicly discuss information over an authenticated public channel, and aim to end with the same shared key while leaking essentially nothing useful to an eavesdropper. The information-theoretic performance limit is measured by the secret key capacity.

Third, there is coding theory. Error-correcting codes already encode redundancy in a structured way. MDS codes are optimal in a precise sense: they meet the Singleton bound, achieving the maximum possible minimum distance d = n - k + 1 for block length n and dimension k. That is, they give the strongest threshold-reconstruction behavior for a given amount of redundancy.

What our paper does is connect these three threads in a particularly direct way: use an MDS codeword as the object being partially distributed across terminals, then exploit the threshold property to guarantee recoverability for the legitimate users and secrecy against the wiretapper.

Why this Connection is Interesting

At a high level, secret sharing and multiterminal key agreement share many goals and techniques.

In secret sharing, you start with a secret and want to distribute it safely across many users.

In multiterminal key agreement, you start with distributed private information and want users to end up with a shared secret key.

These are not identical tasks, but they share the same core geometry: some subsets of information should be enough to reconstruct, and smaller subsets should reveal nothing. The paper’s main conceptual contribution is to push this analogy far enough to get an explicit multiterminal SKA protocol out of it. (The abstract states this clearly: the work explores the connection between secret sharing and secret key agreement, yielding “a simple and scalable multiterminal key agreement protocol” based on Reed-Solomon codes with threshold reconstruction.)

The threshold semantics of MDS codes are exactly the right structure for a scalable multiterminal key agreement design.

The Model

The paper works in the standard multiterminal communication setup. There is a set of terminals V, a subset A \subseteq V of active users who must recover the final key, and possibly helpers in V \setminus A. Each user has access to a private correlated source Z_i, and everyone can participate in public discussion, which an eavesdropper also sees. The key must satisfy two conditions:

  • Secrecy: the public discussion should reveal essentially no information about the key.
  • Recoverability: every active user should be able to reconstruct the key from their private source and the public discussion.
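These two requirements are usually formalized along the following lines (standard notation, not necessarily the paper's, with F denoting the public transcript and \hat{K}_i terminal i's estimate of the key):

```latex
% Secrecy: the public transcript F reveals essentially nothing about the key K.
I(K; F) \approx 0
% Recoverability: every active user i \in A decodes the key correctly
% from its private source Z_i together with the transcript F.
\Pr[\hat{K}_i = K] \approx 1 \quad \text{for all } i \in A
```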

The paper also recalls the standard definition of secret key capacity, the maximum achievable key rate over all allowed protocols in this model.

The Coding Idea

Now for the protocol idea.

Suppose we take a secret and pad it with additional uniform randomness, forming a vector K=(s, u_1, \ldots, u_{k-1}),

where s is the actual secret and the u_i's are random pads. Then we encode K using an (n, k) Reed-Solomon code with generator matrix G, obtaining a codeword Z = KG. Each terminal receives one symbol z_i of this codeword, and then the protocol publicly reveals exactly k-1 additional symbols. Since any k symbols of an (n, k) MDS code suffice to reconstruct the message, each legitimate terminal can combine its private symbol with the k-1 public symbols and recover the full encoded message K.

This is the reconstruction side.

On the secrecy side, the protocol masks the privately delivered shares using independent random values r_i. In the “secret-sharing-inspired” version of the scheme, terminal 1 shares a distinct random value r_i with each terminal i and broadcasts e_i = z_i + r_i. Each intended recipient can subtract its own mask r_i and recover the corresponding code symbol, while the eavesdropper sees only the masked version.

So the core protocol can be summarized as follows:

  1. Encode the secret-plus-padding with an MDS code,
  2. Privately give each user one share (masked),
  3. Publicly reveal exactly k-1 additional shares,
  4. Let each legitimate user reconstruct,
  5. Rely on the threshold property to ensure that the eavesdropper still stays below the reconstruction threshold.
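The steps above can be sketched end to end in Python over a small prime field, using polynomial-evaluation Reed-Solomon encoding. All names, parameters, and the choice of which symbols to reveal are illustrative assumptions of mine, and the pairwise masks r_i stand in for the pre-shared randomness; this is a toy sketch, not the paper's implementation:

```python
# Toy sketch of the MDS-based key agreement flow: encode, mask, reveal,
# reconstruct. PRIME, rs_encode, interpolate are illustrative names.
import random

PRIME = 2**13 - 1  # small prime field, chosen only for demonstration

def rs_encode(message, n):
    """Evaluate the message polynomial at points 1..n: an (n, k) RS codeword."""
    return [sum(c * pow(x, i, PRIME) for i, c in enumerate(message)) % PRIME
            for x in range(1, n + 1)]

def interpolate(points, x0=0):
    """Lagrange interpolation: evaluate the unique low-degree poly at x0."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x0 - xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# 1. Encode the secret plus k-1 uniform pads with an (n, k) MDS code.
n, k = 5, 3
s = 4242
K = [s] + [random.randrange(PRIME) for _ in range(k - 1)]
z = rs_encode(K, n)

# 2. Mask each privately delivered symbol with an independent pad r_i;
#    the eavesdropper sees only the masked values e_i.
r = [random.randrange(PRIME) for _ in range(n)]
e = [(zi + ri) % PRIME for zi, ri in zip(z, r)]

# 3. Publicly reveal exactly k-1 symbols in the clear (here, symbols 1..k-1).
public = [(x + 1, z[x]) for x in range(k - 1)]

# 4. A legitimate terminal unmasks its own symbol, giving it k points total,
#    and reconstructs the secret (the polynomial's constant term).
i = n - 1  # last terminal, whose symbol was not publicly revealed
own = (i + 1, (e[i] - r[i]) % PRIME)
recovered = interpolate(public + [own], x0=0)  # equals s
```

The eavesdropper holds only the k-1 public symbols plus masked values, so it stays below the threshold of k clean symbols; each legitimate terminal crosses that threshold with its one unmasked private symbol.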

The Main Security Statements

In the paper, we show that the eavesdropper learns nothing about the secret from the entire visible transcript.

The first main theorem captures the information-theoretic secrecy of this setup. There is also a computational variant. If one uses an IND-CPA-secure public-key encryption scheme to distribute the shares to terminals, then the protocol becomes computationally secure against PPT adversaries rather than information-theoretically secure against unbounded ones. That computational extension is practically important because it shows the scheme is not tied to idealized pre-shared masks. If you are willing to step down from unconditional secrecy to computational secrecy, you can distribute shares using standard encrypted channels.

If I had to summarize the paper in one sentence, I would say this:

It shows that MDS codes are not just a convenient implementation tool for multiterminal key agreement; they are a structural bridge between secret sharing, error correction, and SKA capacity.

For further details, take a look at the paper. Feedback/comments are welcome!

The Usefulness of “Useless” Knowledge (and Why AI Makes Flexner Even More Right)

I just finished reading The Usefulness of Useless Knowledge again, this time with the perspective of living through a period of rapid technological acceleration driven by AI. On an earlier reading, Flexner’s defense of curiosity-driven inquiry felt aspirational and almost moral in tone, a principled argument for intellectual freedom. On rereading, it feels more diagnostic. Many of the tensions he identified (e.g., between short-term utility and long-term understanding, between institutional incentives and genuine discovery) now play out daily in how we fund, evaluate, and deploy AI research. What has changed is not the structure of his argument, but its urgency: in a world increasingly optimized for immediate outputs, Flexner’s insistence that transformative advances often arise from questions with no obvious application reads less like an idealistic manifesto and more like a practical warning.

In 1939, on the eve of a world war, Abraham Flexner published a slim, stubbornly optimistic essay with a mischievous title: The Usefulness of Useless Knowledge. His claim is not that practical work is bad. It’s that the deep engine of civilization is often curiosity that doesn’t start with an application in mind, and that trying to force every idea to justify itself immediately is a reliable way to stop the next revolution before it begins.

Robbert Dijkgraaf’s companion essay (and related pieces written from his vantage point at the Institute for Advanced Study) updates Flexner’s argument for a world that is now built out of microelectronics, networks, and software; this is exactly the substrate on which modern AI sits. Reading them together today feels like watching two people describe the same phenomenon across two eras: breakthroughs are usually the delayed interest on “useless” questions.

Below is a guided tour of their core ideas, with a detour through the current AI moment, where “useless” knowledge is quietly doing most of the work.


Flexner’s central paradox: curiosity first, usefulness later

Flexner’s essay is a defense of a particular kind of intellectual freedom: the right to pursue questions without writing an ROI memo first.

Dijkgraaf highlights one of Flexner’s most quoted lines (and the one that best captures the whole stance): “Curiosity… is probably the outstanding characteristic of modern thinking… and it must be absolutely unhampered.”

That “must” is doing a lot of work. Flexner isn’t saying that applications are optional. He’s saying the route to them is often non-linear and hard to predict. He even makes the institutional point: a research institute shouldn’t justify itself by promising inventions on a timeline. Instead: “We make ourselves no promises… [but] cherish the hope that the unobstructed pursuit of useless knowledge” will matter later.

Notice the subtlety: he hopes it will matter, but he refuses to make that the official rationale. Why? Because if you only fund what looks useful today, you’ll underproduce the ideas that define tomorrow.


The “Mississippi” model of discovery (and why it matters for AI)

Flexner is unusually modern in how he describes the innovation pipeline: not as single geniuses striking gold, but as a long chain of partial insights that only later “click.”

He writes: “Almost every discovery has a long and precarious history… Science… begins in a tiny rivulet… [and] is formed from countless sources.”

This is basically an antidote to the myth that research can be managed like a factory. You can optimize a pipeline once you know what the pipeline is. But when you’re still discovering what questions are even coherent, “efficiency” often means “premature narrowing.”

AI is a perfect example of the Mississippi model. Modern machine learning is not one idea; it’s a confluence:

  • mathematical statistics + linear algebra,
  • optimization + numerical computing,
  • information theory + coding,
  • neuroscience metaphors + cognitive science,
  • hardware advances + systems engineering,
  • and now massive-scale data and infrastructure.

Much of that was, at some point, “not obviously useful” until it suddenly was.


Flexner’s warning: the real enemy is forced conformity

Flexner’s defense of “useless knowledge” is not only about technology; it’s about human freedom. He’s writing in a period where universities were being pushed into ideological service, and he argues that the gravest threat is not wrong ideas, but the attempt to prevent minds from ranging freely.

One of his sharpest lines: “The real enemy… is the man who tries to mold the human spirit so that it will not dare to spread its wings.”

If you read that in 2025, it lands uncomfortably close to modern pressures on research:

  • “Only fund what’s immediately commercial.”
  • “Only publish what’s trendy.”
  • “Only study what aligns with the current institutional incentive gradient.”
  • “Only build what can be shipped next quarter.”

And in AI specifically:

  • “Only do work that scales.”
  • “Only do benchmarks.”
  • “Only do applied product wins.”

Flexner isn’t anti-application; he’s anti-premature closure.


Dijkgraaf’s update: society runs on knowledge it can’t fully see anymore

Dijkgraaf’s companion essay takes Flexner’s stance and says, essentially: look around, Flexner won. The modern world is built out of the long tail of basic research.

He gives a crisp late-20th-century example: the World Wide Web began as a collaboration tool for particle physicists at CERN (introduced in 1989, made public in 1993). He ties that to the evolution of grid and cloud computing developed to handle scientific data, technology that now undergirds everyday internet services. Then he makes a claim that matters a lot for AI policy debates: fundamental advances are public goods (i.e., they diffuse beyond any single lab or nation). That’s an especially relevant lens for AI, where:

  • open ideas (architectures, optimization tricks, safety methods) propagate fast,
  • but compute, data, and deployment concentrate power.

If knowledge is a public good, then a society that starves basic research is quietly selling off its future, even if it still “uses” plenty of science in the present.


AI as a case study in “useful uselessness”

Here’s a helpful way to read Flexner in the age of AI:

A) “Useless” questions that became AI infrastructure

Many of the questions that shaped AI looked abstract or niche before they became inevitable:

  • How do high-dimensional models generalize?
  • When does overparameterization help rather than hurt?
  • What is the geometry of optimization landscapes?
  • How can representation learning capture structure without labels?
  • What are the limits of compression, prediction, and inference?

These don’t sound like product requirements. They sound like “useless” theory, until you realize they govern whether your model trains at all, whether it’s robust, whether it leaks private data, whether it can be aligned, and whether it fails safely.

Flexner’s point isn’t that every abstract question pays off. It’s that you can’t pre-identify the ones that will, and trying to do so narrows the search too early.

B) “Tool-making” is often the hidden payoff

Dijkgraaf emphasizes that pathbreaking research yields tools and techniques in indirect ways.
AI progress has been exactly this: tool-making (optimizers, architectures, pretraining recipes, eval frameworks, interpretability methods, privacy-preserving techniques) that later becomes the platform everyone builds on.

C) The scary twist: usefulness for good and bad

Flexner also notes that discoveries can become instruments of destruction when repurposed. He uses chemical and aviation examples to make the point.

AI has the same dual-use character:

  • The same generative model family can draft medical summaries or automate phishing.
  • The same computer vision advances can improve accessibility or expand surveillance.
  • The same inference tools can find scientific patterns or extract sensitive attributes.

Flexner’s framework doesn’t solve dual-use, but it forces honesty: the ethical challenge isn’t a reason to stop curiosity; it’s a reason to pair curiosity with governance, norms, and safeguards.


A Flexnerian reading of the current AI funding wave

We’re currently living through a paradox that Flexner would recognize instantly:

  1. AI is showered with investment because it’s visibly useful now.
  2. That investment creates pressure to define “research” as whatever improves next quarter’s metrics.
  3. But the next conceptual leap in AI may come from areas that look “useless” relative to today’s dominant paradigm.

If you want better long-horizon AI outcomes (e.g., robustness, interpretability, privacy, security, alignment, and scientific discovery), Flexner would argue you need institutions that protect inquiry that isn’t instantly legible as profitable.

Or in his words, you need “spiritual and intellectual freedom.”


What to do with this (three practical takeaways)

1) Keep a portfolio: fast product work + slow foundational work

Treat research like an ecosystem. If everything must justify itself immediately, you get brittle progress. Flexner’s “no promises” stance is a feature, not a bug.

2) Reward questions, not only answers

Benchmarks matter, but they can also overfit the field’s imagination. Some of the most important AI work right now is about re-framing the question (e.g., what counts as “understanding,” what counts as “alignment,” what counts as “privacy,” what counts as “truthfulness”).

3) Build institutions that protect intellectual risk

Flexner designed the Institute for Advanced Study around the idea that scholars “accomplish most when enabled” to pursue deep work with minimal distraction.
AI needs its own versions of that: spaces where the incentive is insight, not velocity.


AI is not an argument against Flexner (it’s his exhibit A)

If you hold a smartphone, use a search engine, or interact with modern AI systems, you’re touching the compounded returns of yesterday’s “useless” knowledge.

Flexner’s defense isn’t sentimental. It’s strategic: a society that wants transformative technology must also want the conditions that produce it: freedom, patience, and room for ideas that don’t yet know what they’re for. Or, as Dijkgraaf puts it in summarizing Flexner’s view: fundamental inquiry goes to the “headwaters,” and applications follow, slowly, steadily, and often surprisingly.


Main Source: https://www.ias.edu/ideas/2017/dijkgraaf-usefulness

Fall 2025: Topics in Information-Theoretic Cryptography

Since the beginning of this year, I have been developing a course on “Topics in Information-Theoretic Cryptography”. Recently, the course was approved for Fall 2025. I’m very excited to share some research with undergraduate/graduate students! Below, I list some relevant information for the proposed course.

Course Number and Title

ECE598DA: Topics in Information-Theoretic Cryptography

Description

In this course, we will study foundational and recent work on the use of information theory to design and analyze cryptographic protocols. We will begin by studying privacy attacks which motivate strong privacy and security definitions. Then, we will explore the basics of differential privacy and study some core works on zero-knowledge proofs. Finally, we will explore various applications, including watermarking of generative models.

Recommended Textbooks

  • Introduction to Cryptography with Coding Theory. By Wade Trappe, Lawrence C. Washington.
  • Tutorials on the Foundations of Cryptography. Edited by Yehuda Lindell.

Syllabus

Week 1: Introduction: motivations, one-time pad review, review of probability theory

Week 2: Attacks and Composition Theorems for Differential Privacy

Week 3: Standard Mechanisms for Differential Privacy

Week 4: Information-Theoretic Lower Bounds for Differential Privacy

Week 5: Differentially Private Statistical Estimation and Testing

Week 6: Zero-Knowledge Proofs

Week 7: Statistical Zero-Knowledge Proofs: Part I

Week 8: Statistical Zero-Knowledge Proofs: Part II

Week 9: Multi-Party Computation

Week 10: Multi-Party and Computational Differential Privacy

Week 11: Code-Based Cryptography: Part I

Week 12: Code-Based Cryptography: Part II

Week 13: More Applications

  • Watermarking of Generative Models
  • Proof Systems for Machine Learning
  • Bounded-Storage Cryptography
  • Quantum Cryptography

Week 14: Project Presentations