Efficient Signatures for ZKPs

Aug 5th, 2022
Daniel Lubarov

Verifying digital signatures is a problem that arises in a variety of ZK-related projects, from Zcash to zkEVMs. Here we will discuss some different options for signature schemes, and strategies for implementing them efficiently.

Specialized ZK signatures

Given a ZKP system, a simple signature scheme can be constructed as follows. The signer generates a random secret key sk, and hashes it to get the associated public key pk. They then prove, in zero knowledge, that they know some sk such that hash(sk) = pk. Picnic is a variant of this idea, using a block cipher rather than a hash function.
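
As a minimal sketch, here is the key setup in Python, with SHA-256 standing in for an arithmetic-friendly primitive like Poseidon; the proof-of-knowledge step is elided, since it depends entirely on the ZKP system in use.

```python
# A minimal sketch of key setup for a "hash-based" ZK signature.
# SHA-256 stands in for an arithmetic-friendly hash like Poseidon.
import hashlib
import secrets

def keygen():
    sk = secrets.token_bytes(32)      # random secret key
    pk = hashlib.sha256(sk).digest()  # public key = hash(sk)
    return sk, pk

# To sign, the owner of sk proves in zero knowledge the statement
# "I know sk such that hash(sk) = pk", binding the message into the
# proof (e.g., as a public input).
sk, pk = keygen()
assert hashlib.sha256(sk).digest() == pk
```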

Since the statement being proved involves just a single evaluation of a hash or block cipher, these schemes can be highly efficient, particularly if using arithmetic-friendly primitives such as LowMC or Poseidon.

The caveat here is that the proof must be generated by the user who owns the secret key. If the signature is part of a larger circuit, such as Zcash’s spend circuit, this may be undesirable, as a user cannot outsource proving to another entity without revealing their secret key to that entity.

Native curves

If we wish to support delegated proving, it may be better to use a conventional signature scheme such as ECDSA, and include its verification steps in our circuit. This way the signer does not need to be the prover; a separate prover can prove that they witnessed a valid signature.

To make this efficient, we can choose an elliptic curve such that the base field of the curve matches the “native” field of our argument system. An example of this is the Baby Jubjub curve, whose base field matches the scalar field of alt_bn128. If we use an argument system such as Groth16 with alt_bn128 as our curve, Baby Jubjub arithmetic can be done “natively”.

However, using a curve like Baby Jubjub isn’t always an option. Sometimes we need compatibility with existing tools which support conventional curves. In Polygon Zero, for example, we must support Ethereum’s existing signature scheme, which is ECDSA over secp256k1. In these cases, we must simulate arithmetic in another field using bignum techniques.

Non-native arithmetic: fundamentals

Let F_p be our native field, and suppose we need to do arithmetic mod m. To compute x * y mod m, we can have the prover supply the purported product z, along with k such that x * y = z + k * m holds over the integers. Checking this identity is much more efficient than any deterministic algorithm for modular reduction.
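
A quick sketch of what the prover computes, with ordinary Python integers standing in for witness values:

```python
# Witness generation for non-native multiplication: the prover supplies
# z and k such that x * y = z + k * m over the integers, and the circuit
# checks that single identity.

def mul_witness(x: int, y: int, m: int) -> tuple[int, int]:
    z = (x * y) % m   # purported product mod m
    k = (x * y) // m  # quotient witness
    return z, k

x, y, m = 0xDEAD, 0xBEEF, 2**16 + 1
z, k = mul_witness(x, y, m)
assert x * y == z + k * m  # the identity the circuit enforces
```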

If we wish to ensure that z is in canonical form, we must also check that z < m. However, we can always allow non-canonical encodings when doing intermediate calculations, and canonicalize as a final step.

Now, since our constraints are over F_p, we cannot directly check a constraint like x * y - z - k * m = 0 over the integers; we can only check that it holds mod p. However, by range checking z and k, we can come up with bounds on the constraint. If we can show that a constraint c is bounded such that abs(c) < p, then wrap-around is impossible, so checking that it holds over F_p is equivalent to checking that it holds over the integers.

For example, suppose m = 2^16. Suppose we check that z < 2^16 and k < 2^16, and suppose x and y are previous outputs, so we already know that they are in this range. Then one can easily show abs(x * y - z - k * m) < 2^32. If p is at least 33 bits, then our constraint cannot wrap around, so it suffices to check the constraint over F_p.
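
We can sanity-check this bound numerically; the values below are the worst cases permitted by 16-bit range checks:

```python
# With x, y, z, k all range checked to 16 bits and m = 2**16, the
# constraint x*y - z - k*m stays strictly below 2**32 in absolute value.
m = 2**16
hi = 2**16 - 1           # largest value passing a 16-bit range check
worst_pos = hi * hi      # x*y maximal, z = k = 0
worst_neg = hi + hi * m  # x*y = 0, z and k maximal
assert max(worst_pos, worst_neg) < 2**32
```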

Bignum techniques

In practice, m is often around the same size as p, if not larger. Thus we cannot check x * y - z - k * m = 0 directly, as this constraint would fail the abs(c) < p property. To maintain small bounds on our constraints, we must turn to bignum techniques. In particular, we encode x as (x_1, …, x_n) and y as (y_1, …, y_n), where each x_i and y_i has a small size, say 16 bits.

If we were to multiply x * y using grade school multiplication, we would compute each intermediate product x_i * y_j, then split each product into two digits. We would then organize these intermediate product halves by their weight, and add them up in a column-wise manner, splitting off carry digits as we go so they can be propagated to the next column.
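
As a reference sketch (not how a circuit should do it, for the reasons below), here is grade school multiplication with 16-bit limbs and column-wise carry propagation:

```python
# Grade-school bignum multiplication with 16-bit limbs: accumulate
# intermediate products column by column, splitting off carries that
# propagate to the next column.
BASE = 2**16

def to_limbs(x: int, n: int) -> list[int]:
    return [(x >> (16 * i)) & 0xFFFF for i in range(n)]

def school_mul(xs: list[int], ys: list[int]) -> list[int]:
    cols = [0] * (len(xs) + len(ys))
    for i, xi in enumerate(xs):
        for j, yj in enumerate(ys):
            cols[i + j] += xi * yj  # weight BASE**(i+j)
    out, carry = [], 0
    for c in cols:
        total = c + carry
        out.append(total % BASE)    # low digit stays in this column
        carry = total // BASE       # carry propagates to the next
    return out

xs, ys = to_limbs(0x1234_5678, 2), to_limbs(0x9ABC_DEF0, 2)
limbs = school_mul(xs, ys)
assert sum(l * BASE**i for i, l in enumerate(limbs)) == 0x1234_5678 * 0x9ABC_DEF0
```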

In a constraint-based programming model, there’s no direct way to split a larger value into small digits. We can have the prover supply the purported digits, and check them using a weighted sum, but we must also range check each digit to ensure that the weighted sum does not wrap around.
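
A sketch of that pattern, with the range checks and the weighted-sum check written out explicitly:

```python
# The prover supplies the digits; the verifier checks a weighted sum
# plus a range check on each digit. Without the range checks, a
# malicious prover could satisfy the weighted sum mod p with
# out-of-range "digits".
BASE = 2**16

def check_decomposition(value: int, digits: list[int]) -> bool:
    range_ok = all(0 <= d < BASE for d in digits)  # range check each digit
    sum_ok = value == sum(d * BASE**i for i, d in enumerate(digits))
    return range_ok and sum_ok

assert check_decomposition(0xAAAA_BBBB, [0xBBBB, 0xAAAA])
```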

Clearly grade school multiplication would be expensive in our cost model, but there are plenty of tricks we can use to minimize range checks. We will discuss state-of-the-art methods in a future post, but here we will summarize a few basic tricks.

Ignoring high limbs

If m is a power of two, or more generally has the form a^b, we might pick a as our bignum base. The weights of our intermediate products will then be powers of a, and almost half of them will be multiples of a^b, making them congruent to zero mod a^b. Since these terms contribute nothing mod m, we can simply ignore them.
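
For instance, with a = 2^16 and b = 4 (so m = 2^64), a sketch of a multiplication that simply skips the high-weight products might look like this:

```python
# Dropping high limbs when m = a**b: any intermediate product with
# weight a**(i+j) where i + j >= b is a multiple of m, so it
# contributes nothing to the product mod m.
a, b = 2**16, 4
m = a**b

def mul_mod_ab(xs: list[int], ys: list[int]) -> int:
    acc = 0
    for i, xi in enumerate(xs):
        for j, yj in enumerate(ys):
            if i + j < b:  # weights a**b and above vanish mod m
                acc += xi * yj * a**(i + j)
    return acc % m  # final reduction stands in for the carry logic

xs = [1, 2, 3, 4]  # limbs of x = 1 + 2a + 3a^2 + 4a^3, little-endian
ys = [5, 6, 7, 8]
x = sum(l * a**i for i, l in enumerate(xs))
y = sum(l * a**i for i, l in enumerate(ys))
assert mul_mod_ab(xs, ys) == (x * y) % m
```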

Leveraging congruence to small integers

While having weights congruent to zero is ideal, having weights congruent to small integers can also help. Consider the base field of Ed25519, q = 2^255 - 19. If we choose a base like 2^51, we will have pairs of columns whose weights are related by a factor of 2^255, which is 19 mod q. We can combine such columns, multiplying the higher-weight column by 19, to reduce carries.
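
A sketch of that folding step, checking the congruence numerically on arbitrary column values:

```python
# Column folding for the Ed25519 base field q = 2**255 - 19 with base
# 2**51: a column of weight 2**(51*i) for i >= 5 is congruent, mod q,
# to 19 times the column of weight 2**(51*(i-5)).
q = 2**255 - 19
assert pow(2, 255, q) == 19  # 2**255 ≡ 19 (mod q)

def fold(cols: list[int]) -> list[int]:
    low, high = cols[:5], cols[5:]
    high += [0] * (5 - len(high))  # pad to five columns
    return [lo + 19 * hi for lo, hi in zip(low, high)]

# Nine columns, as produced by a 5x5-limb product, with test values:
cols = [3, 1, 4, 1, 5, 9, 2, 6, 5]
value = sum(c * 2**(51 * i) for i, c in enumerate(cols))
folded = sum(c * 2**(51 * i) for i, c in enumerate(fold(cols)))
assert value % q == folded % q
```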

CRT inspired methods

Instead of verifying the identity x * y = z + k * m modulo p alone, we can verify it mod each m_i in some moduli set M. If the identity holds mod each m_i, then it holds mod lcm(M). And if abs(x * y - z - k * m) < lcm(M), then wrap-around is impossible, so the identity holds over the integers.

We typically include p in M, since verifying an identity mod p is “native” and thus almost free. We can also include other special moduli, such as our base, or a power thereof (see “Ignoring high limbs” above). Using several small moduli can also be convenient, since they allow us to shrink the weights in our bignum calculation.
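
A sketch of the overall check, here using the Goldilocks prime 2^64 - 2^32 + 1 as an example native modulus (an assumption for illustration, not a requirement):

```python
# CRT-style check: verify the integer identity x*y = z + k*m modulo
# several moduli. If it holds mod each m_i and the constraint's
# absolute value is below lcm(M), it holds over the integers.
from math import lcm

def crt_check(x: int, y: int, z: int, k: int, m: int, moduli: list[int]) -> bool:
    c = x * y - z - k * m
    # In a real circuit, this bound comes from range checks on the
    # inputs, not from evaluating c directly.
    assert abs(c) < lcm(*moduli)
    return all(c % mi == 0 for mi in moduli)

m = 2**61 - 1          # an example non-native modulus
x, y = 123456789, 987654321
z, k = (x * y) % m, (x * y) // m
p = 2**64 - 2**32 + 1  # example native modulus (the Goldilocks prime)
assert crt_check(x, y, z, k, m, [p, 2**16])  # 2**16: a power of the base
```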

Batch accumulation

It is not necessary to multiply x * y and k * m separately; we can combine all of their intermediate products (along with z) into a single accumulation table to minimize carries.
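
A sketch of such an accumulation table, with 16-bit limbs: all signed intermediate products land in one set of columns, and a single carry pass reduces the whole constraint to zero for an honest witness.

```python
# Batch accumulation: the intermediate products of x*y and -k*m, along
# with -z, share one signed accumulation table, so carries are
# propagated once rather than per multiplication.
BASE = 2**16

def limbs(v: int, n: int) -> list[int]:
    return [(v >> (16 * i)) & 0xFFFF for i in range(n)]

def accumulate(x: int, y: int, z: int, k: int, m: int, n: int) -> list[int]:
    cols = [0] * (2 * n + 1)
    for i, xi in enumerate(limbs(x, n)):
        for j, yj in enumerate(limbs(y, n)):
            cols[i + j] += xi * yj  # + x*y
    for i, ki in enumerate(limbs(k, n)):
        for j, mj in enumerate(limbs(m, n)):
            cols[i + j] -= ki * mj  # - k*m
    for i, zi in enumerate(limbs(z, n)):
        cols[i] -= zi               # - z
    out, carry = [], 0
    for c in cols:
        total = c + carry
        out.append(total % BASE)    # digit in [0, BASE)
        carry = total // BASE       # signed carry into the next column
    return out + [carry]            # all zero iff the total is zero

x, y, m = 0x1234_5678_9ABC, 0x5EF0_1357_9BDF, 2**47 - 1
z, k = (x * y) % m, (x * y) // m
assert all(d == 0 for d in accumulate(x, y, z, k, m, n=3))
```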

Furthermore, these methods are not limited to simple field multiplications; they generalize to arbitrary low-degree formulae. For example, we can verify an Edwards curve operation, (x_3, y_3) = (x_1, y_1) + (x_2, y_2), by witnessing k_1, k_2 such that

x_3 (1 + d x_1 x_2 y_1 y_2) - x_1 y_2 - x_2 y_1 = k_1 q,

y_3 (1 - d x_1 x_2 y_1 y_2) - y_1 y_2 - x_1 x_2 = k_2 q,

where q is the order of the base field. This is equivalent to the usual Edwards group law, just rewritten as a constraint over the integers.

With this approach, we need only two accumulation tables (one per constraint), which is much better than a separate accumulation table for each F_q operation.
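
Here is a sketch of the witness computation; the inputs are arbitrary field elements, since the identities above simply encode the addition formula (on-curve checks are a separate matter):

```python
# Witnessing the Edwards group law as two integer identities. k_1 and
# k_2 are recovered by exact division; note k_2 can be negative, which
# a circuit can handle by range checking a shifted value.
q = 2**255 - 19                          # Ed25519 base field
d = -121665 * pow(121666, q - 2, q) % q  # Edwards d parameter

def add_with_witness(x1, y1, x2, y2):
    t = d * x1 * x2 * y1 * y2 % q
    x3 = (x1 * y2 + x2 * y1) * pow(1 + t, q - 2, q) % q
    y3 = (y1 * y2 + x1 * x2) * pow(1 - t, q - 2, q) % q
    c1 = x3 * (1 + d * x1 * x2 * y1 * y2) - x1 * y2 - x2 * y1
    c2 = y3 * (1 - d * x1 * x2 * y1 * y2) - y1 * y2 - x1 * x2
    assert c1 % q == 0 and c2 % q == 0   # both identities hold mod q
    return x3, y3, c1 // q, c2 // q      # quotient witnesses k_1, k_2

x3, y3, k1, k2 = add_with_witness(3, 5, 7, 11)
```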

Conventional optimizations

While some conventional curve optimizations, such as projective coordinates, no longer make sense in a ZK cost model, others can be applied to great effect. For example, we can use windowed multiplication to save on curve additions. If we’re checking an ECDSA or EdDSA signature, we can leverage the fact that one of the multiplications has a fixed base, enabling various preprocessing tricks.
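
To illustrate the structure of windowed multiplication, here is a sketch with 4-bit windows, using integer addition mod a prime as a stand-in for the curve group; the same pattern applies verbatim to curve points. When the base is fixed, the table can be computed once outside the circuit, and larger windows become attractive.

```python
# Fixed-window (4-bit) scalar multiplication. The stand-in group is
# the integers mod P under addition, so k * g plays the role of a
# scalar-point multiplication.
P = 2**31 - 1
def g_add(a, b): return (a + b) % P
g_zero = 0

def windowed_mul(k: int, g: int, w: int = 4):
    table = [g_zero]
    for _ in range(1, 2**w):
        table.append(g_add(table[-1], g))  # table[i] = i * g
    acc = g_zero
    for shift in reversed(range(0, k.bit_length(), w)):
        for _ in range(w):
            acc = g_add(acc, acc)          # w doublings per window
        acc = g_add(acc, table[(k >> shift) & (2**w - 1)])
    return acc

k, g = 0xDEADBEEF, 7
assert windowed_mul(k, g) == (k * g) % P
```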

Other optimizations are more curve-specific. secp256k1 supports the GLV endomorphism, for example, which allows us to split a curve multiplication into two smaller ones, opening the door to multi-scalar multiplication methods such as Yao’s.

Conclusion

Signature verification can be expensive in a ZK setting, particularly if we need to support conventional curves which don’t match our native field. To make it efficient, we must forget what we learned in grade school. Constraint-based programming has a unique cost model which calls for entirely new algorithms.

Here we have merely scratched the surface of this new frontier of research. At Polygon, we are working on several papers which will explore some of these techniques in greater depth.


If you’re interested in further discussions on this topic, please consider joining our group chat. The author can also be reached at daniel@delendum.xyz.