
$\mathbb{Z}$NARKs - SNARKs for The Integers

Are SNARKs always for computation over finite fields? Turns out no.

Today, we will explore the techniques presented in our recent preprint Fully-Succinct Arguments over the Integers from First Principles, which investigates the construction of SNARKs for circuits over the integers. This work provides a simple but novel approach to building efficient proof systems for computations over whole numbers, sidestepping most of the usual complications of dealing with integers. The techniques enable us to "compile" existing (multilinear) SNARKs into arguments over the integers using a new tool: polynomial commitments with modular remainder.

Applications

Existing SNARKs are (usually) designed for computations over finite fields: satisfiability of circuits or execution of programs over finite fields. A few works have also explored SNARKs over other finite rings, usually $\mathbb{Z}_{2^k}$, i.e. $k$-bit integers. In both cases, every value in the computation has a finite, a priori bounded size. In this recent work, we introduce the first (fully succinct) SNARK for computations over the integers: a SNARK over the integers allows the circuit assignment (or memory in the case of a program) to take any value in $\mathbb{Z}$. The proof remains sound and succinct no matter how large the values in the computation are.

Computation over the integers has a number of potentially interesting applications, including emulating (even very small) prime fields. The usual reader of our blog is well aware of all the circuit "tricks" that apply to finite fields, but might be less acquainted with computation/circuits over the integers, so let us start by exploring a few examples of applications where SNARKs over the integers might be particularly useful.

Efficient Range Checks using Sum-of-Squares

Range checks over the integers do not require decomposing the value into bits, meaning the cost of the range check is independent of the size of the range. The trick is the observation that every non-negative integer can be represented as a sum of four squares; in other words, for any $n \geq 0$ the prover can (efficiently) find $a, b, c, d$ such that:

$$ n = a^2 + b^2 + c^2 + d^2 $$

Using this technique you can prove that $n \in [0, B]$ by proving you know $a_1, b_1, c_1, d_1$ and $a_2, b_2, c_2, d_2$ such that:

$$ n = a_1^2 + b_1^2 + c_1^2 + d_1^2 $$ $$ B - n = a_2^2 + b_2^2 + c_2^2 + d_2^2 $$

In other words, $n \geq 0$ and $B - n \geq 0$. This can be optimized: Couteau, Peters and Pointcheval showed that $n \in [\mathsf{low}, \mathsf{high}]$ is equivalent to the existence of $a, b, c$ such that:

$$ 4 \cdot (n - \mathsf{low}) \cdot (\mathsf{high} - n) + 1 = a^2 + b^2 + c^2 $$

So you get range checks for a constant cost of 4 R1CS constraints, or a single PlonKish gate.
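To make this concrete, here is a minimal Python sketch of the prover and verifier sides of this range check. All names are ours, and the decomposition is found by brute-force search purely for exposition; a real prover would use a fast three-square decomposition algorithm instead.

```python
from math import isqrt

def three_squares(m: int) -> tuple[int, int, int]:
    """Brute-force search for a, b, c with m = a^2 + b^2 + c^2.
    For exposition only; a real prover uses a fast decomposition algorithm."""
    for a in range(isqrt(m) + 1):
        for b in range(a, isqrt(m - a * a) + 1):
            rest = m - a * a - b * b
            c = isqrt(rest)
            if c * c == rest:
                return a, b, c
    raise ValueError("m has no three-square decomposition")

def prove_range(n: int, low: int, high: int) -> tuple[int, int, int]:
    """Prover: witness for n in [low, high]. The value 4*(n-low)*(high-n) + 1
    is 1 mod 4, so (by Legendre's three-square theorem) a decomposition exists."""
    assert low <= n <= high
    return three_squares(4 * (n - low) * (high - n) + 1)

def check_range(n: int, low: int, high: int, a: int, b: int, c: int) -> bool:
    """Verifier/circuit side: a single degree-2 relation over the integers."""
    return 4 * (n - low) * (high - n) + 1 == a * a + b * b + c * c

a, b, c = prove_range(17, 0, 100)
assert check_range(17, 0, 100, a, b, c)
```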

Efficient Mixed Field Emulation

We can emulate prime fields inside integer circuits very easily: the values $\bar{x} \in \mathbb{F}_p$ are represented as integers $x \in \mathbb{Z}$ such that $x \equiv \bar{x} \mod p$. Given two values $x, y \in \mathbb{Z}$, we can constrain a value $z$ to satisfy $z \equiv x \cdot y \mod p$ by requiring that:

$$ z = x \cdot y - q \cdot p $$

For some $q \in \mathbb{Z}$ chosen by the prover; addition works the same way. Note that we do not require the prover to fully reduce the result: since there is no "modulus to overflow", the circuit behaves correctly even if the prover uses a large representative of the equivalence class, which just makes everything slower for the prover. If a unique representative is required, you can simply use the (very efficient) range check from above to check that $z \in [0, p - 1]$.

This even allows emulating "dynamically" chosen fields $\mathbb{F}_p$, e.g. where $p$ is provided as a public input. It also allows emulating fields of any size with the same number of constraints, and emulating (and switching between) any number of different fields in the same circuit.
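As a quick illustration, here is a minimal Python sketch of this multiplication constraint (function names are ours). The prover here happens to pick the fully reduced representative, though any representative would satisfy the relation:

```python
def emulated_mul_witness(x: int, y: int, p: int) -> tuple[int, int]:
    """Prover: pick a representative z of x*y mod p and the quotient q
    that makes the integer relation hold. Here z is fully reduced, but any
    representative of the class would work (it just grows the witness)."""
    z = (x * y) % p
    q = (x * y - z) // p
    return z, q

def emulated_mul_check(x: int, y: int, z: int, q: int, p: int) -> bool:
    """Circuit relation: holds over the integers iff z is congruent to x*y mod p."""
    return z == x * y - q * p

# p can be any modulus, even one supplied as a public input
p = 2**255 - 19
x, y = 2**130 + 7, 2**130 + 11
z, q = emulated_mul_witness(x, y, p)
assert emulated_mul_check(x, y, z, q, p)
```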

Verifying Computation in RSA Groups

In the example above, we chose the modulus $p$ to be a prime, but it works equally well for composite moduli $N$, for instance RSA moduli $N = p \cdot q$. This allows, for instance, verifying RSA signatures and RSA accumulators inside integer circuits efficiently.
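For instance, an RSA signature check $s^e \equiv H(m) \mod N$ can be laid out as a chain of such emulated multiplications via square-and-multiply. The Python sketch below is purely illustrative (toy parameters, hypothetical function names) and assumes the message hash is already given as an integer:

```python
def rsa_verify_constraints(sig: int, e: int, msg_hash: int, N: int) -> bool:
    """Express sig^e = msg_hash (mod N) as a chain of emulated modular
    multiplications (square-and-multiply): one integer relation
    z = x*y - q*N per step, with q chosen by the prover."""
    acc = 1
    constraints = []  # tuples (x, y, z, q)
    for bit in bin(e)[2:]:
        z = (acc * acc) % N                       # square step
        constraints.append((acc, acc, z, (acc * acc - z) // N))
        acc = z
        if bit == "1":                            # multiply step
            z = (acc * sig) % N
            constraints.append((acc, sig, z, (acc * sig - z) // N))
            acc = z
    # in-circuit checks: every step satisfies the integer relation,
    # and the final accumulator equals the (reduced) message hash
    ok = all(z == x * y - q * N for (x, y, z, q) in constraints)
    return ok and acc == msg_hash % N

# toy parameters, never use a modulus this small in practice
N, e, d = 3233, 17, 2753
msg_hash = 65
sig = pow(msg_hash, d, N)  # signing happens outside the circuit
assert rsa_verify_constraints(sig, e, msg_hash, N)
```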

An Intellectual Curiosity: $\mathbb{Q}$-Circuits

It is fairly easy to use a proof system for $\mathbb{Z}$ to prove statements about the rational numbers $\mathbb{Q}$: we represent rational numbers as pairs of integers $(n, d)$ representing the fraction $n / d$. Whenever needed, we can force $d \neq 0$ by enforcing $d \cdot \mathsf{sign} \in [1, \infty)$ for some $\mathsf{sign} \in \mathbb{Z}$ chosen by the prover, using the "sum of squares" technique to enforce the range check. Note that many non-zero checks can be batched by computing $d_1 \cdot d_2 \cdot \ldots \cdot d_n$ and enforcing that the product is non-zero.
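A rough Python sketch of this batched non-zero check (names and the brute-force decomposition are purely illustrative): the prover multiplies the denominators together, flips the product to be positive with a chosen sign, and exhibits $\mathsf{prod} \cdot \mathsf{sign} - 1$ as a sum of four squares, which is only possible when no denominator is zero.

```python
from math import isqrt

def four_squares(m: int) -> tuple[int, int, int, int]:
    """Brute-force Lagrange decomposition m = a^2 + b^2 + c^2 + d^2, m >= 0.
    For exposition only; a real prover uses a fast algorithm."""
    if m < 0:
        raise ValueError("negative integers have no such decomposition")
    for a in range(isqrt(m) + 1):
        for b in range(a, isqrt(m - a * a) + 1):
            rest3 = m - a * a - b * b
            for c in range(b, isqrt(rest3) + 1):
                rest = rest3 - c * c
                d = isqrt(rest)
                if d * d == rest:
                    return a, b, c, d
    raise AssertionError("unreachable: every m >= 0 is a sum of four squares")

def batched_nonzero_check(denominators: list[int]) -> bool:
    """All denominators are non-zero iff their product is non-zero, i.e.
    prod * sign >= 1 for a prover-chosen sign. Raises if some denominator
    is zero, since no witness exists in that case."""
    prod = 1
    for d in denominators:
        prod *= d                     # one multiplication constraint each
    sign = 1 if prod > 0 else -1      # chosen by the prover
    a, b, c, e = four_squares(prod * sign - 1)
    return prod * sign - 1 == a * a + b * b + c * c + e * e

assert batched_nonzero_check([3, -7, 5])
```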

To enforce a product of fractions $n_1 / d_1 \cdot n_2 / d_2 = n_3 / d_3$, we can require:

$$ \begin{aligned} n_3 &= n_1 \cdot n_2 \\ d_3 &= d_1 \cdot d_2 \end{aligned} $$

Division can be similarly enforced as:

$$ \begin{aligned} n_3 &= n_1 \cdot d_2 \\ d_3 &= d_1 \cdot n_2 \end{aligned} $$

Equality can be checked by enforcing:

$$d_1 \cdot n_2 = d_2 \cdot n_1$$

The hardest part is sums of fractions, which require a common denominator:

$$ \begin{aligned} n_3 &= n_1 \cdot f_1 + n_2 \cdot f_2 \\ d_3 &= d_1 \cdot f_1 \\ d_1 \cdot f_1 &= d_2 \cdot f_2 \\ f_1 &\neq 0 \\ f_2 &\neq 0 \end{aligned} $$

For $f_1, f_2$ chosen by the prover.
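To round things off, here is a minimal Python sketch of the fraction constraints above (names are ours; the prover's choice $f_1 = d_2$, $f_2 = d_1$ is simply the most obvious common denominator):

```python
from fractions import Fraction

def check_product(n1, d1, n2, d2, n3, d3) -> bool:
    """Product constraint: n1/d1 * n2/d2 = n3/d3 (no reduction required)."""
    return n3 == n1 * n2 and d3 == d1 * d2

def sum_witness(n1, d1, n2, d2):
    """Prover: bring both fractions to a common denominator.
    The simplest choice is f1 = d2, f2 = d1."""
    f1, f2 = d2, d1
    return n1 * f1 + n2 * f2, d1 * f1, f1, f2

def check_sum(n1, d1, n2, d2, n3, d3, f1, f2) -> bool:
    """Sum constraints; the f1 != 0, f2 != 0 checks would use the
    sum-of-squares non-zero check from above in a real circuit."""
    return (n3 == n1 * f1 + n2 * f2
            and d3 == d1 * f1
            and d1 * f1 == d2 * f2
            and f1 != 0 and f2 != 0)

assert check_product(1, 3, 2, 5, 2, 15)          # 1/3 * 2/5 = 2/15
n3, d3, f1, f2 = sum_witness(1, 3, 1, 6)
assert check_sum(1, 3, 1, 6, n3, d3, f1, f2)     # 1/3 + 1/6
assert Fraction(n3, d3) == Fraction(1, 3) + Fraction(1, 6)
```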

I am not aware of any practical use of $\mathbb{Q}$-circuits, but it is kind of interesting that you can have a SNARK for a field of characteristic zero.
