Statistical Zero-Knowledge Proofs

How would you prove you’ve solved a Sudoku puzzle without revealing the solution? You can construct a zero-knowledge proof that your filled-in grid satisfies the Sudoku rules (each digit appears exactly once in every row, column, and box) without exposing the grid itself.

This semester, one of my projects involves zero-knowledge proofs. I’ll try to explain what I’ve learned about this amazing concept and its variants, with particular attention to statistical zero-knowledge. Shout out to Boaz’s cryptography class. Zero-knowledge proofs have found widespread use in blockchain technology, authentication, privacy-preserving systems, and more.

Definition

Intuition: Imagine someone wants to prove they know the solution to a complex problem (e.g., a puzzle or how Trump was going to win the election) without revealing the solution. They use a process that convinces the verifier they have the solution without showing it.

A zero-knowledge proof (ZKP) is a method by which one party (the prover) can demonstrate to another party (the verifier) that a specific statement is true without revealing any additional information about the statement itself.

Key Properties of Zero-Knowledge Proofs

There are a few variants of zero-knowledge (compared below), but they all share the following three properties:

  1. Completeness:
    If the statement is true, an honest prover can convince the verifier of its truth:
    • If x \in L and the prover P knows a valid witness w, then the honest verifier V accepts the proof with probability at least 1 - \epsilon, where \epsilon is negligible: \Pr[V(x) = \text{accept} \mid x \in L, w \text{ valid}] \geq 1 - \epsilon.
  2. Soundness:
    If the statement is false, no dishonest prover can convince the verifier that it is true (except with an extremely small probability).
    • If x \not\in L, then no cheating prover P^* can convince the honest verifier V to accept, except with negligible probability: \Pr[V(x) = \text{accept} \mid x \not\in L] \leq \epsilon, where \epsilon is a negligible function.
  3. Zero-Knowledge:
    The verifier learns nothing other than the fact that the statement is true. No information about how or why the statement is true is revealed.
    • For every polynomial-time verifier V^*, there exists a polynomial-time simulator S such that the output of S(x) is computationally indistinguishable from the interaction between P and V^* on input x: \{S(x)\}_{x \in L} \approx \{\text{Transcript}(P \leftrightarrow V^*, x)\}_{x \in L}.

  • Perfect Zero-Knowledge: real and simulated distributions are identical. Guarantee: holds even against computationally unbounded verifiers.
  • Statistical Zero-Knowledge: real and simulated distributions are statistically close (negligible difference). Guarantee: holds against computationally unbounded verifiers.
  • Computational Zero-Knowledge: real and simulated distributions are computationally indistinguishable for polynomial-time verifiers. Guarantee: holds only against computationally bounded verifiers.

Interactive Zero-Knowledge Proofs

These involve a back-and-forth interaction between the prover and the verifier.

  • Graph Isomorphism:
    Prove that two graphs are isomorphic (structurally identical) without revealing the isomorphism itself. Alice proves to Bob that she knows a way to relabel the nodes of graph A to match graph B (a protocol sketch follows this list).
  • Hamiltonian Cycle Problem:
    Prove that a graph contains a Hamiltonian cycle (a path visiting every vertex exactly once) without revealing the actual cycle.
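
To make the graph-isomorphism protocol concrete, here is a minimal Python sketch of a single round. Graphs are sets of frozenset edges; the names permute, compose, gi_round, and the toy graphs are my own illustrative choices, and the random module is used only for readability, not as a cryptographic randomness source.

```python
import random

def permute(graph, perm):
    """Apply a vertex relabeling to a graph given as a set of frozenset edges."""
    return {frozenset({perm[u], perm[v]}) for u, v in (tuple(e) for e in graph)}

def compose(p, q):
    """Permutation composition: (p o q)[v] = p[q[v]]."""
    return {v: p[q[v]] for v in q}

def gi_round(g1, g2, secret_pi, n):
    """One round; secret_pi maps G1 to G2, i.e. permute(g1, secret_pi) == g2."""
    # Prover: publish a freshly relabeled copy H of G2.
    sigma = dict(enumerate(random.sample(range(n), n)))
    h = permute(g2, sigma)

    # Verifier: challenge the prover to relate H to either G1 or G2.
    b = random.choice([1, 2])

    # Prover: reveal a relabeling that maps G_b to H (the witness pi is never sent).
    answer = sigma if b == 2 else compose(sigma, secret_pi)

    # Verifier: accept iff the revealed relabeling really maps G_b to H.
    return permute(g1 if b == 1 else g2, answer) == h

# Toy usage: a 3-vertex path and a secret relabeling pi with permute(g1, pi) == g2.
pi = {0: 2, 1: 0, 2: 1}
g1 = {frozenset({0, 1}), frozenset({1, 2})}
g2 = permute(g1, pi)
assert all(gi_round(g1, g2, pi, 3) for _ in range(20))  # honest prover always convinces
```

Repeating the round k times drives a cheating prover's success probability down to 2^{-k}, while an honest prover who knows secret_pi passes every round.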

Non-Interactive Zero-Knowledge Proofs (NIZKs)

These eliminate the need for interaction: the prover generates a single proof that anyone can verify, as many times as needed, without further back-and-forth.

  • zk-SNARKs (Succinct Non-Interactive Arguments of Knowledge):
    Widely used in blockchain systems like Zcash to validate transactions while keeping them private. Example: Prove that a transaction is valid (inputs equal outputs) without disclosing amounts or participants.
  • zk-STARKs (Scalable Transparent Arguments of Knowledge):
    A transparent alternative to zk-SNARKs that avoids the need for trusted setups and is more scalable. Example: Used in Ethereum Layer-2 solutions like StarkNet to bundle transaction proofs.
  • Fiat-Shamir Heuristic:
    A technique for converting interactive proofs into non-interactive ones using cryptographic hash functions: the hash of the transcript so far plays the role of the verifier’s random challenge.
  • Schnorr Protocol:
    A proof that you know a discrete logarithm of a number without revealing the logarithm itself. Example: Prove ownership of a private key without exposing it (used in Schnorr signatures).
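
Putting the last two bullets together, here is a minimal sketch of a Schnorr proof of knowledge of a discrete logarithm made non-interactive with the Fiat-Shamir heuristic. The group parameters are toy-sized and insecure, chosen only so the arithmetic is easy to follow; prove, verify, and fiat_shamir_challenge are illustrative names, not a real library API.

```python
import hashlib
import secrets

# Toy group parameters, far too small to be secure: p is a safe prime,
# q = (p - 1) // 2, and g = 4 generates the order-q subgroup of Z_p^*.
p, q, g = 23, 11, 4

def fiat_shamir_challenge(*values):
    """Derive the verifier's challenge by hashing the transcript (Fiat-Shamir)."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Non-interactive proof of knowledge of x such that y = g^x mod p."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)            # fresh random nonce
    t = pow(g, k, p)                    # prover's commitment
    c = fiat_shamir_challenge(g, y, t)  # challenge, with no verifier interaction
    s = (k + c * x) % q                 # response
    return y, (t, s)

def verify(y, proof):
    """Check g^s == t * y^c (mod p) for the recomputed challenge c."""
    t, s = proof
    c = fiat_shamir_challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)                # the prover's secret exponent
y, proof = prove(x)
assert verify(y, proof)
```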

Example Use Cases

Zero-knowledge proofs (ZKPs) come in many forms and are applied in both theoretical and practical settings. Below are some notable examples:

1. Commit-and-Prove Protocols

Combine commitments (which bind you to data while hiding it), for example Pedersen commitments, with zero-knowledge proofs: you prove that you committed to a number x without revealing x, and can later open the commitment so that x can be verified.
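
As an illustration, here is a toy Pedersen commitment in Python. The tiny group parameters are insecure and only chosen for readability; commit and open_commitment are my own names.

```python
import secrets

# Toy parameters, far too small to be secure: p is a safe prime, q = (p - 1) // 2,
# and g, h generate the order-q subgroup with log_g(h) unknown to everyone
# (in practice h is derived verifiably at random).
p, q, g, h = 23, 11, 4, 9

def commit(x):
    """Commit to x: C = g^x * h^r mod p. Return the commitment and the opening r."""
    r = secrets.randbelow(q)
    return (pow(g, x, p) * pow(h, r, p)) % p, r

def open_commitment(c, x, r):
    """Check that (x, r) is a valid opening of the commitment c."""
    return c == (pow(g, x, p) * pow(h, r, p)) % p

c, r = commit(7)                  # hide the value 7
assert open_commitment(c, 7, r)   # later: reveal (7, r) and anyone can verify
assert not open_commitment(c, 8, r)
```

Pedersen commitments are perfectly hiding and computationally binding under the discrete-logarithm assumption, which is why they show up in both statistical and computational zero-knowledge constructions.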

2. Bulletproofs

Efficient range proofs that demonstrate a value lies within a specific range without revealing the value. Example: Used in Monero to ensure transaction amounts are positive without disclosing the actual amounts.

3. Proofs in Cloud Computing

  • Proof of Retrievability:
    Prove that a cloud provider stores your data without downloading it. Example: used in decentralized storage systems like Filecoin (a toy sketch of the underlying idea follows this list).
  • Proof of Computation:
    Demonstrate the correctness of outsourced computation without revealing inputs or outputs.
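
This is not Filecoin's actual protocol, but a toy sketch of the challenge-response idea behind proofs of retrievability: the client keeps only a Merkle root and periodically challenges the provider to return a randomly chosen block together with its authentication path. All names below are illustrative.

```python
import hashlib
import secrets

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(blocks):
    """All levels of a Merkle tree over the data blocks, leaves first, root last."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]   # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove_block(levels, i):
    """Provider: sibling hashes needed to recompute the root from leaf i."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((i % 2, level[i ^ 1]))  # (am I the right child?, sibling hash)
        i //= 2
    return path

def verify_block(root, block, path):
    """Client: recompute the root from the challenged block and its path."""
    node = h(block)
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Client stores only the root; the provider keeps the blocks and the tree.
blocks = [f"block {i}".encode() for i in range(10)]
levels = build_levels(blocks)
root = levels[-1][0]

i = secrets.randbelow(len(blocks))           # random audit challenge
assert verify_block(root, blocks[i], prove_block(levels, i))
```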

4. Secure Voting Protocols

  • Homomorphic Encryption-Based Proofs:
    Prove a vote is valid (e.g., within a candidate set) without revealing the voter’s choice.

5. Knowledge of a Password

  • Example: authenticate to a server by proving knowledge of a password without transmitting it, as in the Secure Remote Password (SRP) protocol, which verifies that a user knows a password without the password itself ever being sent.

Perfect Zero-Knowledge

Perfect Zero-Knowledge is a stronger version of zero-knowledge where the verifier cannot distinguish between the interaction with the actual prover and the simulated interaction, even with unlimited computational power. In other words, the simulator’s output is statistically identical to the real interaction transcript, not just computationally indistinguishable.

Formal Definition

Let (P, V) be a proof system for a language L. The proof system is perfect zero-knowledge if for every polynomial-time verifier V^*, there exists a probabilistic polynomial-time simulator S such that for every x \in L and every transcript t: \Pr[\text{Transcript}(P \leftrightarrow V^*, x) = t] = \Pr[S(x) = t],

where:

  • \text{Transcript}(P\leftrightarrow V^*,x) is the transcript of the interaction between P and V^* on input x,
  • S(x) is the simulated transcript generated by S for the same input x.

This implies that the probability distributions of the transcripts from the real interaction and the simulated interaction are exactly the same.

Key Features of Perfect Zero-Knowledge

  1. Perfect Indistinguishability:
    The simulator’s output distribution is identical to the real transcript distribution; the statistical distance between the two is exactly zero.
  2. Stronger Privacy Guarantees:
    Since the guarantee holds even against verifiers with infinite computational power, it is stronger than computational zero-knowledge, where the indistinguishability only holds for polynomial-time adversaries.

Example of Perfect Zero-Knowledge

The classic Graph Isomorphism Zero-Knowledge Protocol is a perfect zero-knowledge protocol:

  • A prover shows two graphs are isomorphic without revealing the actual isomorphism.
  • The verifier cannot distinguish between a genuine interaction and a simulated one, even with infinite computational power, making it perfect zero-knowledge (a simulator sketch follows this list).
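
Here is a minimal honest-verifier simulator for one round of the graph-isomorphism protocol (same graph representation and helper as in the earlier sketch; simulate_round is my own name). It samples the challenge first and never touches a witness; for a cheating verifier, the standard simulator instead guesses the challenge and rewinds on a wrong guess.

```python
import random

def permute(graph, perm):
    """Same helper as in the earlier sketch: relabel a graph's vertices."""
    return {frozenset({perm[u], perm[v]}) for u, v in (tuple(e) for e in graph)}

def simulate_round(g1, g2, n):
    """Honest-verifier simulator: sample the challenge first, then build a
    transcript without ever using the isomorphism between G1 and G2."""
    b = random.choice([1, 2])                 # the challenge, sampled up front
    sigma = dict(enumerate(random.sample(range(n), n)))
    h = permute(g1 if b == 1 else g2, sigma)  # plays the role of the prover's H
    return h, b, sigma                        # (first message, challenge, response)
```

For each challenge value, (H, response) here has exactly the same distribution as in a real round, which is what makes the protocol perfect, not merely statistical or computational, zero-knowledge.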

Computational Zero-Knowledge

Computational Zero-Knowledge is a type of zero-knowledge proof where the verifier cannot distinguish between the actual interaction with the prover and the output of a simulator, provided the verifier has limited (polynomial-time) computational power.

This means that the zero-knowledge property relies on the computational infeasibility of distinguishing between the two scenarios, often based on cryptographic hardness assumptions (e.g., the difficulty of factoring large numbers or solving discrete logarithms).

Formal Definition

Let (P,V) be a proof system for a language L. The system is computational zero-knowledge if for every probabilistic polynomial-time (PPT) verifier V^*, there exists a PPT simulator S such that the distribution ensembles \{\text{Transcript}(P \leftrightarrow V^*, x)\}_{x \in L} and \{S(x)\}_{x \in L} are computationally indistinguishable. That is, no polynomial-time distinguisher can tell apart the real interaction and the simulated interaction with non-negligible probability.

Key Features of Computational Zero-Knowledge

  1. Computational Indistinguishability:
    The zero-knowledge property holds against adversaries with limited computational power (polynomial-time distinguishers). If the verifier were computationally unbounded, they might be able to differentiate the two distributions.
  2. Cryptographic Assumptions:
    Computational zero-knowledge often relies on assumptions like:
    • The infeasibility of factoring large integers.
    • The hardness of the discrete logarithm problem.
    • Other complexity-theoretic assumptions.
  3. Relaxed Privacy Guarantees:
    Unlike perfect zero-knowledge, where the simulated and real distributions are statistically identical, computational zero-knowledge only guarantees privacy against computationally bounded adversaries.

Examples of Computational Zero-Knowledge

  1. zk-SNARKs:
    Used in blockchain protocols like Zcash to ensure transaction validity without revealing sensitive details. The zero-knowledge property here relies on computational assumptions.
  2. Interactive Proofs with Commitment Schemes:
    Many zero-knowledge protocols use cryptographic commitments (e.g., Pedersen commitments) to hide information during the proof, ensuring the verifier cannot extract more data computationally.

Real-World Importance

Computational zero-knowledge is widely used in practical applications, such as:

  • Cryptocurrencies (e.g., Zcash, zkRollups).
  • Authentication protocols.
  • Privacy-preserving identity verification.

It strikes a balance between strong privacy guarantees and computational efficiency, making it suitable for real-world cryptographic systems.

Statistical Zero-Knowledge

Statistical Zero-Knowledge (SZK) is a type of zero-knowledge proof where the verifier cannot distinguish between the real interaction with the prover and the output of a simulator, even with unlimited computational power. The key difference from perfect zero-knowledge is that the two distributions (real and simulated) are not identical but are statistically close, meaning the difference between them is negligible.

Formal Definition

Let (P,V) be a proof system for a language L. The system is statistical zero-knowledge if for every probabilistic polynomial-time (PPT) verifier V^*, there exists a PPT simulator S such that the distribution ensembles \{\text{Transcript}(P \leftrightarrow V^*, x)\}_{x \in L} and \{S(x)\}_{x \in L} are statistically indistinguishable. This means the statistical distance (or total variation distance) between the two distributions is negligible:

\Delta(\text{Transcript}(P \leftrightarrow V^*, x), S(x)) = \frac{1}{2} \sum_t \left| \Pr[\text{Transcript} = t] - \Pr[S(x) = t] \right| \leq \epsilon,

where \epsilon is a negligible function of the input size.
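
To make \Delta concrete, here is a small Python helper that computes the total variation distance between two finite distributions. The names statistical_distance and empirical are my own, and the two coin-flip samplers are just illustrative stand-ins for “real” and “simulated” transcript distributions.

```python
import random
from collections import Counter

def statistical_distance(p, q):
    """Total variation distance between two distributions given as dicts
    mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in support)

def empirical(sampler, n=100_000):
    """Estimate a distribution from n independent samples of sampler()."""
    counts = Counter(sampler() for _ in range(n))
    return {t: c / n for t, c in counts.items()}

# Two nearly identical coin flips standing in for real vs. simulated transcripts.
real = empirical(lambda: random.random() < 0.5000)
simulated = empirical(lambda: random.random() < 0.5001)
print(statistical_distance(real, simulated))   # close to 0, up to sampling noise
```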

Key Features of Statistical Zero-Knowledge

  1. Statistical Indistinguishability:
    The difference between the real and simulated transcripts is negligibly small, even for verifiers with unlimited computational power.
  2. Weaker than Perfect Zero-Knowledge:
    Perfect zero-knowledge requires the distributions to be exactly identical, while statistical zero-knowledge allows for a negligible difference.
  3. Stronger than Computational Zero-Knowledge:
    Computational zero-knowledge only guarantees indistinguishability for polynomial-time adversaries, whereas statistical zero-knowledge holds against adversaries with unlimited computational power.
  4. No Dependence on Cryptographic Assumptions:
    SZK is typically not reliant on computational hardness assumptions, unlike computational zero-knowledge.

Examples of Statistical Zero-Knowledge

  1. Quadratic Residuosity Problem:
    Prove that a number x is a quadratic residue modulo N (a composite number) without revealing the factorization of N. The simulator can generate transcripts statistically indistinguishable from those produced during the real interaction. (A protocol sketch follows this list.)
  2. Graph Isomorphism Problem:
    Prove that two graphs G_1 and G_2 are isomorphic without revealing the isomorphism. The verifier’s view of the interaction can be statistically simulated.
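
Here is a minimal sketch of one round of the classic quadratic residuosity protocol, where the witness is a square root w of x modulo n. The modulus is a toy value that is far too small to be secure, and qr_round and rand_unit are illustrative names.

```python
import math
import secrets

def rand_unit(n):
    """Uniform nonzero element of Z_n^* (coprime to n)."""
    while True:
        r = secrets.randbelow(n)
        if r != 0 and math.gcd(r, n) == 1:
            return r

def qr_round(n, x, w):
    """One round for the statement "x is a quadratic residue mod n",
    given a witness w with w*w % n == x."""
    # Prover: commit to a fresh random square.
    r = rand_unit(n)
    y = (r * r) % n

    # Verifier: one-bit challenge.
    b = secrets.randbelow(2)

    # Prover: reveal a square root of y (b = 0) or of y*x (b = 1); w is never sent.
    z = (r * pow(w, b, n)) % n

    # Verifier: accept iff z^2 == y * x^b (mod n).
    return pow(z, 2, n) == (y * pow(x, b, n)) % n

# Toy usage with a tiny modulus n = 7 * 11 and witness w = 9, so x = w^2 mod n.
n, w = 77, 9
x = (w * w) % n
assert all(qr_round(n, x, w) for _ in range(20))   # honest prover always convinces
```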

Real-World Applications

While SZK is less common in practical applications compared to computational zero-knowledge, it has theoretical importance in cryptographic protocol design and scenarios where absolute guarantees against powerful adversaries are required.


Electrical Engineering, Computer Engineering, Computer Science

tl;dr Computer Engineering \approx Electrical Engineering with Computer Science

Recently, I accepted an offer to join the department of Electrical and Computer Engineering (ECE) at the University of Illinois, Urbana-Champaign. While I am already an affiliate of the ECE department, I do not plan on teaching a course until Fall 2025. Tentatively, I am planning on teaching a topics course related to privacy/cryptography. (Will share more details on the planned topics course in a later post.)

I have gotten a few questions (especially from NaijaCoder students) concerning the differences between electrical engineering, computer science, and computer engineering. Let me try to explain what I think the key differences are.

Electrical Engineering (EE)

I see electrical engineering as the parent discipline of electronic and computing devices, just as physics is (probably) the parent discipline of EE. EE includes the study of analog and digital devices used for various forms of computation and communication.

Computer Engineering (CE) and Computer Science (CS)

Computer Engineering, to me, is about the computing aspects of electrical engineering. In fact, at Illinois, the major was called Electrical Engineering and Computer Science. This page notes that in 1969 “An electrical engineering/computer science curriculum [was] established within the department. It will be replaced in 1973 by the computer engineering curriculum.”

So, Computer Engineering (CE) and Computer Science (CS) are closely related fields.

1. Focus of Study

  • Computer Engineering (CE):
    • Combines electrical engineering with computer science.
    • Focuses on hardware and software: how computers and devices are physically constructed and how software interacts with hardware.
    • Includes designing and analyzing microprocessors, circuit boards, and networks, as well as embedded systems such as IoT (Internet of Things) devices.
  • Computer Science (CS):
    • Primarily focused on software.
    • Emphasizes programming languages, software development, artificial intelligence, and machine learning.

2. Core Curriculum

  • Computer Engineering:
    • Electrical circuits, digital design, microcontrollers, VLSI design (Very Large Scale Integration), robotics, signal processing, and computer architecture.
    • Emphasis on low-level programming that interacts with hardware.
  • Computer Science:
    • Software engineering, computer graphics, artificial intelligence, and machine learning.
    • Emphasis on high-level programming and the development of complex software systems.

3. Skills Developed

  • Computer Engineering:
    • Understanding of both hardware and software systems.
    • Skills in circuit design, chip fabrication, computer architecture, and low-level programming (like C, assembly).
  • Computer Science:
    • Problem-solving and analytical skills for software development.
    • Proficiency in programming languages (e.g., Python, Java, C++), software development life cycle, and working with large-scale systems.

Both fields share knowledge of programming and computer systems, and professionals in both disciplines work in industries like tech, telecommunications, healthcare, and aerospace. However, computer engineers have deeper knowledge of hardware, while computer scientists are more specialized in software development.

Transformers as a Computational Model

Recently, as part of the “Special Year on Large Language Models and Transformers”, the Simons Institute in Berkeley hosted a workshop on “Transformers as a Computational Model”. You can see the schedule here.

I watched (almost) all videos and learned a lot about the limitations and capabilities of transformers. The videos have inspired me to work on one or more projects involving transformers.

P.S. This is my shortest post in a while.