Formal Verification of ZK Constraint Systems

Sep 4th, 2022
Morgan Thomas and Ventali Tan

Thanks to Eric McCarthy from Kestrel Institute for helping with some sections of this document, and to Grigore Rosu (UIUC), Shashank Agrawal (Delendum), Daniel Lubarov (Delendum), Tim Carstens (Risc Zero), Bolton Bailey (UIUC) and Alessandro Coglio (Kestrel) for many helpful suggestions.

Leading Problem

How do we formally verify that an arithmetic circuit, as used by a zero knowledge proof system, has the desired characteristics, such as soundness, completeness and zero knowledge¹?

Soundness of a proving system means that it does not prove statements that are false. Similarly, soundness of the circuit used by a proof system means that the circuit is unsatisfiable when applied to a statement that is not true.

Completeness of a proving system means that it is able to prove all statements that are true. Similarly, completeness of the circuit used by a proof system means that the circuit is satisfiable when applied to a statement that is true.

Soundness and completeness of a circuit, taken together, are equivalent to the statement that the relation denoted by the circuit is the intended relation.
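
One way to state these two properties precisely (the notation here is ours, introduced for illustration): write C(x, w) for the circuit applied to a public statement x and a witness w, and T(x) for "x is true in the intended semantics".

    % Soundness: the circuit is unsatisfiable on false statements.
    \forall x .\; \neg T(x) \implies \neg \exists w .\; C(x, w)
    % Completeness: the circuit is satisfiable on true statements.
    \forall x .\; T(x) \implies \exists w .\; C(x, w)
    % Together: satisfiability of the circuit coincides with truth,
    % i.e., the relation denoted by the circuit is the intended one.
    \forall x .\; (\exists w .\; C(x, w)) \iff T(x)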

A proof system is zero knowledge if and only if a proof does not reveal any additional information beyond the statement being proven (i.e., the public inputs to the proving process). Any facts straightforwardly implied by the statement being proven are necessarily also implied by the proof, but additional facts not implied by the statement should not be revealed. Additional “non-public” witness data, which may be considered as secret inputs, must not be revealed.

A sound and complete circuit can only be satisfied by statements that are true in the intended semantics. If the underlying proving system is sound and complete, and if the circuit is correct, then the overall system provides a sound and complete means of proving the intended statements. We can therefore separate the leading problem into two subproblems: verifying the desired characteristics of the underlying proving system, and verifying the soundness and completeness of the circuit; in other words, verifying that the circuit denotes the intended relation and that the proving system behaves as intended when applied to the circuit.

Our main focus here is on the second subproblem, that of verifying the circuits of a system while assuming the soundness and completeness of the underlying proving system².

Zero knowledge proof systems often use a mathematical constraint system such as R1CS or AIR to encode a computation. The zero knowledge proof is a probabilistic cryptographic proof that the computation was done correctly. Formal verification of a circuit used in a zero knowledge proof requires (1) a formal specification of the relation the circuit is intended to encode, (2) a formal model of the semantics of the circuit, (3) the circuit itself, (4) the theorem prover, and (5) the mechanized proof of a theorem relating (1), (2) and (3).
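
As a concrete illustration of how an R1CS encodes a computation, here is a minimal sketch in Python (the constraint layout and field size are our own toy choices, not taken from any particular proof system):

    # Toy R1CS checker: each constraint is a triple of rows (a, b, c), and a
    # witness vector z satisfies it when (a . z) * (b . z) == (c . z) mod P.
    P = 97  # a tiny prime field, chosen only for illustration

    def dot(row, z):
        return sum(r * v for r, v in zip(row, z)) % P

    def satisfied(constraints, z):
        return all(dot(a, z) * dot(b, z) % P == dot(c, z)
                   for a, b, c in constraints)

    # Encode "y = x * x" with witness layout z = (1, x, y):
    # rows a and b both select x, row c selects y, so the constraint is x * x == y.
    square = [((0, 1, 0), (0, 1, 0), (0, 0, 1))]

    assert satisfied(square, (1, 5, 25))      # 5 * 5 == 25: satisfiable
    assert not satisfied(square, (1, 5, 26))  # 26 != 25: unsatisfiable

Formally verifying a circuit like this toy square example would then mean proving, in a theorem prover, that satisfiability of its constraints coincides with the intended relation y = x² over the field.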

Techniques

Using a proof assistant

Without using a proof assistant

Formal Verification for ZK Circuits

Note: there are different ways to formalize the semantics of a problem in order to prove things about it. Some broad categories are denotational semantics, axiomatic semantics, and operational semantics.

Operational semantics is particularly useful for proving things about programs. For example, in the ACL2 theorem prover, a common approach to formalizing programming language semantics is an interpretive operational semantics, i.e., a high-level interpreter, written in the logic of the theorem prover, that formalizes the possible forms of the states of computation and describes how the execution of the constructs of the programming language (e.g., expressions and statements) manipulates the computation states.
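
For a flavor of this interpretive style, here is a minimal sketch in Python rather than in ACL2's logic (the toy language of assignments and sums is invented here purely for illustration):

    # A tiny interpretive operational semantics: the state is a variable
    # environment, and each statement form describes how it transforms
    # the state.

    def eval_expr(expr, env):
        """Expressions: integers, variable names, or ('+', e1, e2)."""
        if isinstance(expr, int):
            return expr
        if isinstance(expr, str):
            return env[expr]
        op, e1, e2 = expr
        assert op == '+'
        return eval_expr(e1, env) + eval_expr(e2, env)

    def step(stmt, env):
        """Statements: ('assign', var, expr) or ('seq', s1, s2).
        Returns the new computation state."""
        if stmt[0] == 'assign':
            _, var, expr = stmt
            return {**env, var: eval_expr(expr, env)}
        _, s1, s2 = stmt
        return step(s2, step(s1, env))

    # x := 1; y := x + 2  ==>  final state {x: 1, y: 3}
    prog = ('seq', ('assign', 'x', 1), ('assign', 'y', ('+', 'x', 2)))
    assert step(prog, {}) == {'x': 1, 'y': 3}

An operational semantics formalized this way is just another function in the logic, so theorems about programs become theorems about the interpreter.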

Denotational design

Denotational design provides a helpful way of thinking about both problem spaces (general and application-specific).

Denotational design also provides a methodology for defining the requirements of a system, such as a zero-knowledge proving system, in such a way that the requirements can be expressed and proven in a formal system.

Figure 1: Denotational design

*A formal version of this diagram appears in the Σ¹₁ arithmetization paper (diagram (288) on page 47). It is a commutative diagram: soundness and completeness of a circuit compiler are both expressed by the statement that the diagram commutes, meaning that all paths through it between the same pair of objects are equal, or in other words, that the denotation of the compiled circuit is always equal to the denotation of the spec.
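
Schematically (the notation here is ours, introduced for illustration): with a spec language S, a circuit language C, a compiler compile : S -> C, and denotation functions from each language into a common space of relations, commutativity of the square amounts to:

    % Commutativity of the denotational-design square (notation ours):
    % compiling and then taking the circuit's denotation gives the same
    % result as taking the spec's denotation directly.
    \forall s \in S : \quad [\![\, \mathrm{compile}(s) \,]\!]_C \;=\; [\![\, s \,]\!]_S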

Synthesizing formally verified programs

It is often a challenge to bridge the gap between the theorem we can prove and the code we can execute. For example, we may be able to prove a theorem about a spec, and to code that spec, but not be able to prove that the code implements the spec. Or, we may be able to prove a theorem about some code but not be able to compile that code to efficient machine code. Or, we may be able to do that, but be unable to prove that the machine code has the same semantics as the source code.

Here is a summary of some of the ways in which the ecosystem supports correct-by-construction synthesis of efficient code:

Current limitations of formal methods on modern hardware

The State of Current Progress

How to analyze each research effort / development project

If you like the structure we frame in “Leading Problem”, you could analyze each research effort / development project in terms of:

  1. the language used for specification;
  2. the language used to model the circuit;
  3. which theorem prover they are using; and
  4. what sorts of theorems they are proving, such as soundness, completeness, functional completeness, and zero knowledge.

Other interesting attributes of projects concerning formal verification for circuits are:

Future Research and Development Directions

A lot of work needs to be done. There is not enough emphasis placed on formal verification in the security industry.

Based on the observations and arguments presented in this blog post, we think the following will be some interesting directions for future research and development:


If you’re interested in further discussions on this topic or in working together on this subject, please consider joining our group chat or reaching out to me at ventali@delendum.xyz.

Terminology:

  1. If you write a specification of a computation in a high-level formal language and compile it to a constraint system using a verified or verifying compiler, that is called correct-by-construction synthesis. If you take an existing constraint system and try to prove properties about it (up to and including soundness and completeness) with respect to a specification in a high-level formal language, that is called post-hoc verification (a toy illustration follows below).
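
As a toy illustration of post-hoc verification (our own sketch; real verification uses a theorem prover, and exhaustive checking is feasible only for tiny fields): take an existing constraint x * x = y, read y as the public statement and x as a private witness, and check that satisfiability coincides with the intended relation "y is a square mod P".

    # The "circuit" x * x == y (mod P) is meant to prove the statement
    # "y is a square mod P", with x kept as a private witness. We check
    # exhaustively, for every statement y, that a satisfying witness
    # exists iff y is in the intended relation (Euler's criterion).
    P = 97  # tiny prime, for illustration only

    def circuit_satisfiable(y):
        # does some witness x satisfy the constraint?
        return any((x * x) % P == y for x in range(P))

    def in_relation(y):
        # intended spec: y is a quadratic residue mod P
        return y == 0 or pow(y, (P - 1) // 2, P) == 1

    # soundness and completeness, checked exhaustively for this tiny field
    assert all(circuit_satisfiable(y) == in_relation(y) for y in range(P))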

Discussion: “Due to Gödel’s incompleteness theorem”

  1. Originally, we put “due to Gödel’s incompleteness theorem”. Alessandro suggested we change it to “the undecidability of first-order logic”, because the non-existence of such a proof search algorithm is due to “less than” Gödel’s incompleteness theorem: it is due to the undecidability of first-order logic (which is complete, according to Gödel’s completeness theorem). Second-order and higher-order logic are, in addition, incomplete as well as undecidable.

  2. Morgan:

    The undecidability of first order logic does not imply that there cannot be a proof search algorithm which proves any given true statement of first order arithmetic. First order logic is computably enumerable, so there can exist a proof search algorithm which would find a proof of any given provable statement. It’s due to the incompleteness of any computably enumerable set of axioms for first order arithmetic that there cannot exist a complete proof search algorithm.

    Let me elaborate on that, please. If you have a complete theory then there can exist a proof search algorithm which would eventually prove any true statement. In such a case, the theory is decidable, since the proof search algorithm will also find a disproof of any false statement. The catch is that first order arithmetic can’t have a complete and true proof theory with a set of axioms that can be written out by an algorithm. That’s Gödel’s incompleteness theorem.

    The fact that first order logic is undecidable is the fact that the logical consequence relation of first order logic is undecidable; there is no algorithm that decides if a given set of axioms entails a given theorem or not. That’s not the relevant point, though. If first order arithmetic was complete, then its logical consequence relation would be decidable.

  3. Alessandro:

    Yes, technically first-order logic is semidecidable, meaning that there can be an enumeration procedure that will eventually find that a formula is a theorem (as Morgan says), but if it is not, the procedure will run forever (so the logic is undecidable). I agree that Morgan’s statement is technically correct, but normally in theorem proving we think of undecidability as the main obstacle, rather than incompleteness. (Well, in fact even decidable theories, like propositional logic, may take impractically long times.) In practical terms, proving things in first-order logic vs. higher-order logic is equally hard, due to undecidability, regardless of first-order logic being complete and higher-order logic being incomplete. I think it’s fine if you change the text back to mention Gödel’s incompleteness, but I think it might be slightly better to change the whole sentence to say something like “Also, provability is undecidable in first-order and higher-order logic.” Then it gets at the heart of the problem in theorem proving, which arises not only for incomplete logics, but also for complete ones. But I think that either way is fine.

    An alternative phrasing could be “Also, due to undecidability of first-order and higher-order logic, there cannot exist an algorithm that can establish whether a formula is a theorem or not.”
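
The classical argument behind Morgan’s point, that a complete and computably enumerable theory is decidable, can be sketched as a search procedure (the function names below are hypothetical stand-ins, for illustration only):

    # If a theory proves either phi or not-phi for every sentence phi
    # (completeness) and its proofs can be enumerated by an algorithm,
    # then truth is decidable: enumerate proofs until one settles phi.
    # 'enumerate_proofs', 'proves', and 'negate' are hypothetical stand-ins.
    def decide(phi, enumerate_proofs, proves, negate):
        for proof in enumerate_proofs():
            if proves(proof, phi):
                return True
            if proves(proof, negate(phi)):
                return False
        # For a complete theory this loop always terminates; for an
        # incomplete one (e.g. first-order arithmetic) it may run forever.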

Footnotes

  1. We do not elaborate on the zero knowledge property in this article; we will write another article on that. To describe it, you need to get into the concept of private inputs. (The example of correctly computing a hash of a hidden plaintext is a good one for understanding this.) The zero knowledge property concerns how hard it is to determine the private inputs from the public information. You must encode some hard-to-invert function of a private input in the circuit if you want it to have the zero-knowledge property. The zero-knowledge property is generally a probabilistic measure. However, if the computation is too simple and the output is known, it might become possible to figure out the input with certainty.

  2. It is worth noting that this problem can be solved in a manner that is independent of the ambient ZK system: it is really just a problem about systems of polynomial equations. 

  3. The transpiler idea makes the most sense when there is a formally defined DSL that is close to the new language you want to define, and then you can transpile from your new language to that DSL. However, that transpiler should really be formally verified as well. 

  4. In our experience of formal verification for systems that have some cryptographic component, we have noticed a few different things:

    1. It’s extremely difficult to formally reason about the security properties of the “whole” system. The system design is usually very complex, with a lot of dependencies of different types. Even the cryptographic component, which could be pretty small and simple at a high level (say, encryption, MACs, etc.), may be implemented in a very complex way, with support for failures, recovery, backup, etc. As a result, even modeling the entirety of just the crypto component in a formal way is quite challenging, so any formal conclusions we draw end up being about a very small part of the whole system. Our methods have not yet developed to a point where we know how to reason about the whole system in formal proofs.
    2. Proof assistant languages, at least for crypto protocols, so far have limited expressivity. There are few tools that can directly prove theorems about a crypto protocol written in C or even Rust. In general, developers of protocols need to handle multiple threads and timing issues, and those are hard to formalize; Gallina (the specification language of Coq), for example, doesn’t have such constructs. From that point of view, Gallina is not very developer-friendly, whereas Rust with threads is. Future efforts to reason formally about concurrent software systems may benefit from formalizing existing abstractions for reasoning about such systems, such as process calculi, for example the π-calculus.

  5. Originally, we put “due to Gödel’s incompleteness theorem”; see the discussion section above (“Due to Gödel’s incompleteness theorem”) for why we changed this, and for further comments from Morgan and Alessandro.

  6. Common Lisp can be compiled into quite efficient code, very close to C or Rust in speed. (However, it is not as efficient as code running on GPUs. Also, concurrent programming, while available in Common Lisp, is not part of the subset of the language for which ACL2 defines a formal logic.) If you write a formal specification that is executable, though, it can usually be compiled into fairly efficient code, all within the language of the theorem prover.