This is joint work with Archetype, where a whiteboard version of this post was recorded.
With the quantum threat looming, we need to start thinking about how to build proof systems that resist quantum computers. This post introduces the field of lattice-based proof systems, and gives a high-level explanation of how the most promising solution, Greyhound, works.
Historically, it has taken almost 20 years to deploy our modern public key cryptography infrastructure. It will take significant effort to ensure a smooth and secure migration from the current widely used cryptosystems to their quantum computing resistant counterparts. Therefore, regardless of whether we can estimate the exact time of the arrival of the quantum computing era, we must begin now to prepare our information security systems to be able to resist quantum computing. – NIST Interagency Report 8105 (April 2016)
The State Of Post-Quantum Zero-Knowledge Proofs
So far, we know that symmetric cryptography is mostly safe as long as we increase the size of its parameters. So instead of SHA-256, use SHA-512; instead of AES-128, use AES-256; and so on. This is why the first post-quantum signature standards published were hash-based (PQCRYPTO: Initial recommendations of long-term secure post-quantum systems (2015)), and this is also why the proof systems that are based on hash functions (like STARKs, Ligero, Aurora, Fractal, Brakedown, etc.) are safe as long as they use large enough parameters (which is not the case today).
See Grover’s algorithm for why we have to increase parameters: https://www.youtube.com/watch?v=RQWpF2Gb-gU
On the other hand, asymmetric cryptography is in trouble. The most common asymmetric cryptography is based on the hardness of factoring large numbers (RSA) or the hardness of computing discrete logarithms (Diffie-Hellman). These problems are not safe against quantum computers, and we need to find alternatives. This includes all of our favorite SNARKs which are based on elliptic curves.
See Shor’s algorithm for why this is the case: https://www.youtube.com/watch?v=lvTqbM5Dq4Q
Imagine that it’s fifteen years from now. Somebody announces that he’s built a large quantum computer. RSA is dead. DSA is dead. Elliptic curves, hyperelliptic curves, class groups, whatever, dead, dead, dead. – Daniel J. Bernstein (2016)
Besides hash-based proof systems, the most promising alternative seems to be lattice-based proof systems. These proof systems are based on the hardness of lattice problems, which are believed to be hard even for quantum computers.
Interestingly, there has already been a large amount of research on the subject, and the results are promising. Not only are the proof sizes smaller than hash-based ones, but the support for folding also seems better than in all previously known schemes (both SNARKs and STARKs). This year we were lucky: all the folding teams have been proposing new lattice-based schemes, with Binyi from Protostar working on LatticeFold+ and Srinath from Nova on Neo.
[LatticeFold] is one of the first times where a post-quantum SNARK system might actually be better than a pre-quantum SNARK system. – Dan Boneh at ZKProofs (2025)
On the proof system front, many schemes were initially proposed on shaky assumptions that were later broken, but the field ended up converging on schemes based on strong assumptions and with nice properties. Greyhound is perhaps the culmination of many years of research in the field, and it is the first scheme that seems both efficient and plausibly secure against quantum computers: it is transparent, has a linear prover, a sublinear verifier ($O(\sqrt{N})$), and polylogarithmic proof sizes. Concretely, its proofs are around 50KB.
By the way, unlike a lot of previous research in lattice-based SNARKs, Greyhound is a Polynomial Commitment Scheme (PCS) and not an entire zero-knowledge proof system. This means that it is essentially plug-and-play with today’s proof systems that are based on a PCS at their core (basically all of them).
In addition, as I’m writing these lines, a Rust implementation (LattiRust) just dropped, while a few weeks ago Ingonyama announced support for Greyhound in their flagship library ICICLE.
A Very Short Introduction To Lattices
A good way to think about lattices is to first think about vector spaces, which are the set of all possible vectors that can be obtained by taking linear combinations of a set of vectors (usually called a basis). If the vectors are linearly independent (none of them can be created from combining the others) and you have enough of them, then you can even span the whole space:
In a vector space we can scale vectors with real numbers (in $\mathbb{R}$), but in a lattice we can only scale them with integers (in $\mathbb{Z}$). This means that the vectors in a lattice are not dense, and form more of a grid:
A more rigorous definition is to consider all linear combinations of $k$ linearly independent vectors $\textbf{b}_i$ (which represent the basis of our lattice):
$$ \mathcal{L} := \left\{ \sum_{i=0}^{k-1} z_i \cdot \textbf{b}_i \;\middle|\; \textbf{z} \in \mathbb{Z}^k \right\} $$
If you define the matrix $\textbf{B} := [\textbf{b}_0 | \cdots | \textbf{b}_{k-1}] \in \mathbb{R}^{n \times k}$ with the vectors $\textbf{b}_i$ as columns, then you can rewrite the above as:
$$ \mathcal{L} := \{ \textbf{B} \cdot \textbf{z} \;|\; \textbf{z} \in \mathbb{Z}^k \} $$
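To make this concrete, here is a tiny sketch (with an arbitrarily chosen 2-dimensional basis, purely for illustration) that generates lattice points as integer combinations of the basis columns:

```python
import numpy as np
from itertools import product

# An arbitrary 2-dimensional basis; the columns are the basis vectors b_0 and b_1.
B = np.array([[2, 1],
              [0, 3]])

# The lattice is the set { B·z : z an integer vector }; enumerate a small window of it.
points = [B @ np.array(z) for z in product(range(-2, 3), repeat=2)]
print(points[:5])  # a few of the 25 grid points generated this way
```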
There are several problems that we believe are hard to solve in lattices (so-called hard problems), on top of which we build our cryptosystems. The most notable ones are the Shortest Vector Problem (SVP) and the Closest Vector Problem (CVP), and these problems are thought to be hard to solve even for quantum computers. The SVP is about finding the shortest vector (besides the zero vector) in a lattice:
The CVP is about finding the closest lattice vector to a given non-lattice point:
Both of these problems are worst-case hard, meaning that not every instance is hard, but some specially crafted instances are.
But for our PCS Greyhound, we won’t use any of these problems. Instead, Greyhound relies on another problem called the Module Short Integer Solution (M-SIS) problem.
The SIS problem is about finding a linear combination of vectors in a matrix that results in the zero vector. That linear combination cannot be trivial (i.e. it cannot be the zero vector), and the scalars must be small (i.e. they must be smaller than some defined bound).
More rigorously, given a matrix $B$ with entries in $\mathbb{Z}_q$, the problem is about finding a vector $\vec{x}$ such that $B \vec{x} = \vec{0} \mod{q}$ and $\|\vec{x}\| \leq \beta$, for some known bound $\beta$ and some norm $\|\cdot\|$ whose exact definition we’ll ignore in this article.
The M-SIS problem is the same, except that the entries are not in $\mathbb{Z}_q$ but instead in a polynomial ring $\mathcal{R}_q$. For most of the explanations in this post, the exact definition of the ring is not important. You can assume that elements of that ring look like polynomials of degree less than $64$ with coefficients in $\mathbb{Z}_q$ for a prime $q$ of around $2^{32}$, but even that is not important to understand the protocol at a high level.
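As a sanity check of the definition (certainly not of its hardness), here is a toy helper that tests whether a candidate vector is a valid SIS solution; every parameter below is an arbitrary small value chosen for illustration only:

```python
import numpy as np

q, n, m, beta = 97, 4, 16, 5            # toy parameters, far too small to be secure
rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(n, m))     # a random SIS instance A in Z_q^{n x m}

def is_sis_solution(A, x, q, beta):
    """x solves SIS if it is nonzero, A·x = 0 mod q, and its norm is at most beta."""
    nonzero = np.any(x != 0)
    in_kernel = np.all((A @ x) % q == 0)
    short = np.linalg.norm(x) <= beta
    return bool(nonzero and in_kernel and short)

print(is_sis_solution(A, np.zeros(m, dtype=int), q, beta))  # False: the zero vector is excluded
```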
Ajtai showed in 1996 that if you can break the SIS problem for average cases (random instances sampled from some given distribution), then you can also break worst-case instances of lattice problems like SVP (the instances that are the hardest to solve), and similar reductions exist for M-SIS! This means that either SIS is secure, or SIS is broken but then SVP is broken as well. Such a security reduction offers a win-win: either the assumption holds and we gain assurance in the cryptographic scheme, or it fails and we gain insight, and possibly a breakthrough in mathematics.
This is a very strong property: unlike worst-case hard problems such as SVP and CVP, average-case hard problems like SIS can be used to build cryptosystems much more easily, since their security can rely on randomly generated instances (sampled from some distribution).
At the core of Greyhound: Ajtai Commitments
Let’s now dive into how Greyhound works. Remember, Greyhound is a PCS, and so we start with a polynomial $f$ that we will have to commit to.
The first step is to see our polynomial $f$ as a chunked polynomial:
$$ \begin{align} f(x) &= f_0 + f_1 \cdot x + f_2 \cdot x^2 + f_3 \cdot x^3 + f_4 \cdot x^4 + \cdots \\ &= f'_0(x) + x^4 f'_1(x) + x^8 f'_2(x) + \cdots \end{align} $$
where:
- $f'_0(x) = f_0 + f_1 \cdot x + f_2 \cdot x^2 + f_3 \cdot x^3$
- $f'_1(x) = f_4 + f_5 \cdot x + f_6 \cdot x^2 + f_7 \cdot x^3$
- …
We can then encode our polynomial as a matrix where the coefficients are inserted column by column:
$$ \begin{bmatrix} f_0 & f_4 & f_8 & f_{12} \\ f_1 & f_5 & f_9 & f_{13} \\ f_2 & f_6 & f_{10} & f_{14} \\ f_3 & f_7 & f_{11} & f_{15} \end{bmatrix} $$
Our polynomial is now almost ready to be committed.
To commit polynomials, Greyhound relies on Ajtai commitments. An Ajtai commitment works by taking a short vector $\vec{s}$ and computing the commitment as $\vec{t} = \mathbf{A} \cdot \vec{s}$.
The $A$ matrix is a public parameter that can be generated from a seed, so there is no need to keep the whole thing in memory, and no trusted setup is needed. Furthermore, such matrices usually look wide but short, meaning that they have a lot of columns but not many rows. This is because the number of rows determines the size of the commitment, and the number of columns has to compensate for that (it must be large enough to fit the committed vector).
You can quickly see that such schemes are binding because if you can find two different openings to the same commitment then you can break SIS: if you have $\vec{s} \neq \vec{s’}$ such that $A \vec{s} = A \vec{s’}$ then $A(\vec{s} - \vec{s’}) = \vec{0}$ and you have found a non-trivial solution to the SIS problem. (Also, they are trivially homomorphic for the addition which plays nice with folding schemes.)
Since our polynomial coefficients $f_i$ are not necessarily “small”, we can’t actually commit to them as is with the Ajtai commitment scheme. Because of that, we have to base-decompose them before committing. Think binary decomposition, for example. The Greyhound paper uses a function $G^{-1}$ to perform the decomposition, so we will use that notation: $\vec{t} = A\cdot G^{-1}(\vec{s})$, and you can imagine that there is a vector $G = \begin{bmatrix}1 & 2 & 4 & 8 & \cdots\end{bmatrix}$ that can recompose the decomposed vector $G^{-1}(\vec{s})$.
This is actually kind of annoying, and a lot of the complexity in Greyhound comes from having to deal with vectors that are not short. In this article we might sometimes ignore (intentionally or mistakenly) the decomposition and recomposition steps, but a rigorous analysis or a functioning implementation would have to make sure to take care of that.
Now imagine that each column of our polynomial matrix is decomposed into a short vector $\vec{s}_i$ in some base, as explained above. This means we now have the matrix $\begin{bmatrix}\vec{s}_0 & \vec{s}_1 & \vec{s}_2 & \vec{s}_3\end{bmatrix}$, whose columns represent our decomposed polynomial coefficients.
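Here is a minimal sketch of this commit step over plain integers mod $q$ instead of the ring $\mathcal{R}_q$, with a binary gadget for the decomposition; the modulus, base, and dimensions are toy values chosen only for illustration:

```python
import numpy as np

q, base, digits = 97, 2, 7               # toy modulus and binary decomposition (2^7 > 97)
n_coeffs = 4                             # coefficients in one column of the matrix above
rng = np.random.default_rng(42)

# Public matrix A; in practice it would be expanded from a short seed (no trusted setup).
A = rng.integers(0, q, size=(4, n_coeffs * digits))

def decompose(v, base, digits):
    """G^{-1}: base-decompose every entry of v into `digits` small digits."""
    out = []
    for x in v:
        for _ in range(digits):
            out.append(x % base)
            x //= base
    return np.array(out)

def recompose(s, base, digits):
    """G: undo the decomposition (multiply the digits by powers of the base and sum)."""
    powers = base ** np.arange(digits)
    return s.reshape(-1, digits) @ powers

def commit(A, s_short, q):
    """Ajtai commitment: t = A · s mod q, binding as long as (M-)SIS is hard."""
    return (A @ s_short) % q

column = rng.integers(0, q, size=n_coeffs)   # one column of polynomial coefficients
s = decompose(column, base, digits)          # short vector (entries in {0, 1})
t = commit(A, s, q)

assert np.array_equal(recompose(s, base, digits), column)
print(t)
```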
Evaluation Protocol
What’s a good PCS without evaluating a polynomial at different points? Let’s write an evaluation of $f$ at $x$ using our matrix of coefficients:
$$ \begin{align} \begin{bmatrix} 1 & x & x^2 & x^3 \end{bmatrix} \cdot \begin{bmatrix} f_0 & f_4 & f_8 & f_{12} \\ f_1 & f_5 & f_9 & f_{13} \\ f_2 & f_6 & f_{10} & f_{14} \\ f_3 & f_7 & f_{11} & f_{15} \end{bmatrix} \cdot \begin{bmatrix} 1 \\ x^4 \\ x^8 \\ x^{12} \end{bmatrix} \\ = \begin{bmatrix} f'_0(x) & f'_1(x) & f'_2(x) & f'_3(x) \end{bmatrix} \cdot \begin{bmatrix} 1 \\ x^4 \\ x^8 \\ x^{12} \end{bmatrix} \\ = f(x) \end{align} $$
If you are not convinced this works, we invite you to write down the result yourself. Note that this technique is actually used in some other proof systems (Aurora, Fractal, Hyrax, Ligero, Vortex, Brakedown, etc.).
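If you would rather check this numerically than on paper, here is a quick sketch for a random 16-coefficient polynomial (plain integer arithmetic, no ring structure; the evaluation point and coefficient range are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.integers(0, 100, size=16)               # coefficients f_0 ... f_15
M = f.reshape(4, 4, order="F")                  # column i holds f_{4i} ... f_{4i+3}

x = 3
a = np.array([x**j for j in range(4)])          # (1, x, x^2, x^3)
b = np.array([x**(4*j) for j in range(4)])      # (1, x^4, x^8, x^12)

lhs = a @ M @ b                                 # a^T · M · b, as in the identity above
rhs = sum(int(f[i]) * x**i for i in range(16))  # direct evaluation of f(x)
assert lhs == rhs
```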
The Greyhound protocol then has the prover compute the left part of the matrix multiplication first:
$\vec{w} = \vec{a}^\top \cdot G \cdot \begin{bmatrix}\vec{s}_0 & \vec{s}_1 & \vec{s}_2 & \vec{s}_3\end{bmatrix}$
where $\vec{a} = \begin{bmatrix}1 & x & x^2 & x^3\end{bmatrix}^\top$ is the vector of powers of $x$ on the left, and then lets the verifier compute the other part by themselves:
$y = \langle \vec{w}, \vec{b}\rangle$
where $\vec{b} = \begin{bmatrix}1 & x^4 & x^8 & x^{12}\end{bmatrix}^\top$ is the vector of powers of $x$ on the right, and $y = f(x)$ for the evaluation point $x$.
But of course, this is not secure, because the verifier doesn’t know whether the result of the left multiplication $\vec{w}$ was computed correctly.
In the above protocol, the verifier did less work by only computing the right multiplication. For example, one could choose a matrix of size around $\sqrt{N} \times \sqrt{N}$, where $N$ is the number of coefficients of the polynomial $f$. With such parameters, the verifier only has around $\sqrt{N}$ work to do, which is sublinear, and we like that! The prover also sent $\vec{w}$, which is of size around $\sqrt{N}$, giving a sublinear proof size.
Here $N = 16$, and we indeed chose $r = \sqrt{N} = 4$ (where $r$, sometimes called the multiplicity, is the number of columns of the matrix) and $m = 4$ (where $m$ is the number of rows) as well.
Fixing The Security Of The Protocol Using A Schwartz-Zippel-Like Check For Matrices
To fix the problem, the verifier can send a vector of challenges $\vec{c}$. These challenges need to be sent after the prover has committed to the evaluation $y$ and the “left computation” $\vec{w}$ by sending them to the verifier.
Using the Schwartz-Zippel lemma, proof systems check polynomial identities (i.e. that a polynomial $f$ is equal to another polynomial $g$) at a single random point. This happens either in the clear, or hidden in a commitment (e.g. KZG). Using this challenge vector $\vec{c}$, the prover can show the verifier that the computation was done correctly on a random linear combination of the columns instead of on each individual vector. Hence, one can see this as a Schwartz-Zippel-like check for matrices.
To do that, the prover provides the following “opening” $\vec{z}$:
$$ \vec{z} = \begin{bmatrix}\vec{s}_0 & \vec{s}_1 & \vec{s}_2 & \vec{s}_3\end{bmatrix} \begin{bmatrix}c_0 \\ c_1 \\ c_2 \\ c_3\end{bmatrix} $$
Then the verifier can check that $\vec{z}$ is computed correctly by computing:
$$ A \vec{z} \stackrel{?}{=} \sum_i c_i \cdot \vec{t}_i $$
and then that $\vec{w}$ is consistent with the opening $\vec{z}$ (and thus with the committed $\vec{s}_i$):
$$ \langle \vec{w}, \vec{c} \rangle \stackrel{?}{=} \langle \vec{a}, \vec{z} \rangle $$
The prover has sent $\vec{w}$ of size $O(\sqrt{N})$, and $\vec{z}$ of size $O(\sqrt{N})$.
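Here is a toy sketch of the whole exchange over plain integers mod $q$, ignoring the decomposition matrix $G$ (as we allowed ourselves above) and using random integer vectors in place of the structured $\vec{a}$, $\vec{b}$ and of the small ring-element challenges; all sizes are illustrative:

```python
import numpy as np

q = 97
rng = np.random.default_rng(7)
n, r, m = 8, 4, 4                        # toy sizes: |s_i|, number of columns, rows of A

S = rng.integers(0, 2, size=(n, r))      # columns s_0..s_3: short (here binary) vectors
A = rng.integers(0, q, size=(m, n))      # public Ajtai matrix
T = (A @ S) % q                          # column i is the inner commitment t_i = A·s_i

a = rng.integers(0, q, size=n)           # stand-in for the left evaluation vector
b = rng.integers(0, q, size=r)           # stand-in for (1, x^4, x^8, ...)

# Prover: left multiplication and claimed evaluation.
w = (a @ S) % q
y = int(w @ b) % q

# Verifier sends challenges; the prover answers with the opening z = S·c.
# (In the real protocol c has small coefficients so that z stays short.)
c = rng.integers(0, q, size=r)
z = (S @ c) % q

# Verifier's two checks.
assert np.array_equal((A @ z) % q, (T @ c) % q)   # A·z == sum_i c_i·t_i
assert (w @ c) % q == (a @ z) % q                 # <w, c> == <a, z>
```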
Smaller Commitment Intermission
In the protocol above, the commitment to a polynomial is a list of Ajtai commitments $\vec{t}_i$. This is a commitment of size $O(\sqrt{N})$, which is not great. To reduce the size of that commitment, Greyhound uses an additional “outer” commitment that commits to all the previous commitments (which they call “inner” commitments) into a single commitment $\vec{u}$.
In our protocol, this means that the prover produces the commitment as:
$$ \vec{u} = B \hat{t} $$
where:
- $B$ is a separate public matrix, similar to $A$, for outer commitments
- $\hat{t}$ is the concatenation of all the decomposed $\vec{t}_i$ (i.e. $G^{-1}(\vec{t}_i)$)
This means that in our evaluation protocol, the prover needs to additionally send the openings $\vec{t}_i$ to the verifier (there are around $\sqrt{N}$ of them), and the verifier needs to verify that the $\vec{t}_i$ are correct by checking that
$$ B \cdot \begin{bmatrix}G^{-1}(\vec{t}_0) \\ G^{-1}(\vec{t}_1) \\ \vdots \end{bmatrix} = \vec{u} $$
Note that the proof is still $O(\sqrt{N})$ in size, and the verifier work is still $O(\sqrt{N})$ as well.
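Sticking with the toy integers-mod-$q$ setting (the matrix used for the outer commitment, the base, and all sizes below are arbitrary illustrative choices, not the paper's parameters), the two-tier commitment and its opening check look roughly like this:

```python
import numpy as np

q, base, digits = 97, 2, 7               # toy modulus; 7 binary digits cover Z_97
m, r = 4, 4                              # size of each t_i, number of inner commitments
rng = np.random.default_rng(3)

def decompose(v, base, digits):
    """G^{-1}: base-decompose every entry of v into small digits."""
    out = []
    for x in v:
        for _ in range(digits):
            out.append(x % base)
            x //= base
    return np.array(out)

# Inner commitments t_i (here just random stand-ins for A·s_i mod q).
T = [rng.integers(0, q, size=m) for _ in range(r)]

# Prover: concatenate the decomposed t_i's and commit to them with an outer matrix.
t_hat = np.concatenate([decompose(t, base, digits) for t in T])
B = rng.integers(0, q, size=(m, len(t_hat)))     # outer public matrix
u = (B @ t_hat) % q                              # the single outer commitment

# Verifier: given the opened t_i's, re-derive t_hat and check it against u.
t_hat_check = np.concatenate([decompose(t, base, digits) for t in T])
assert np.array_equal((B @ t_hat_check) % q, u)
```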
Greyhound’s Earlier Breed: LaBRADOR
One could stop here and get $O(\sqrt{N})$ proof size and $O(\sqrt{N})$ verifier work, but we can improve the proof size to polylogarithmic without changing the sublinear verifier time ($O(\sqrt{N})$) by using LaBRADOR.
LaBRADOR was an earlier paper from the same team at IBM Research that introduced two things:
- a proof system for a “principal relation”
- a reduction of an R1CS constraint system to that proof system
The principal relation was defined as proving that, for many functions $f$ of the following form, the prover knows a list of short vectors $\vec{s}'_i$ (unrelated to Greyhound’s $\vec{s}_i$) such that:
$$ f(\vec{s}'_1, \cdots, \vec{s}'_r) = \sum_{1\leq i,j \leq r} a'_{ij} \langle \vec{s}'_i, \vec{s}'_j \rangle + \sum_{i=1}^r \langle \vec{\varphi}_i, \vec{s}'_i \rangle + b' = 0 $$
A LaBRADOR prover convinces a verifier that it knows such vectors without having to send the $\vec{s}'_i$ themselves, instead sending something much smaller (and having the verifier validate that the principal relation holds).
It would be great to reduce what we have in Greyhound to LaBRADOR’s principal relation, so that we do not have to send our big proof elements ($\vec{w}, \vec{t}, \vec{z}$) but instead something smaller (and attain that polylogarithmic proof size).
Specifically (and this simplifies things for us), Greyhound only makes use of a subset of the principal relation of LaBRADOR:
$$ \sum_{i=1}^r \langle \vec{\varphi}_i, \vec{s}'_i \rangle + b' = 0 $$
Another way to see this is that LaBRADOR is a proof system for this relation, and while its API is quite general, we only need to use a subset of it.
This is akin to stacking several such relations into the following matrix multiplication, where the concatenated vector of the $\vec{s}'_i$ is short:
$$ \begin{bmatrix} \vec{\varphi}_1^\top & \vec{\varphi}_2^\top & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix} \cdot \begin{bmatrix} \vec{s}'_1\\ \vec{s}'_2\\ \vdots \end{bmatrix} = -\vec{b}' $$
So if we can massage what we have in Greyhound into that, we are golden.
At this point, we will not talk about LaBRADOR anymore. If you enjoy this post you can ask us for one on LaBRADOR :)
Fitting Greyhound Into LaBRADOR
Ready for the last mile? Let’s first recap what we had to do as the verifier in Greyhound:
- finish the computation of the evaluation and check that it’s correct: $\langle \vec{b}, \vec{w} \rangle \stackrel{?}{=} y$
- verify that $\vec{w}$ (the left matrix multiplication) is computed correctly: $\langle \vec{c}, \vec{w} \rangle \stackrel{?}{=} \langle \vec{a}, \vec{z} \rangle$
- verify that $\vec{z}$ is computed correctly: $\sum_i c_i \cdot \vec{t}_i \stackrel{?}{=} A \vec{z}$
Next, we will need to use the Kronecker product $\bigotimes$. If you don’t know how this operator works, you can guess through the following example:
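For instance, for a length-2 challenge vector and the $2 \times 2$ identity matrix (the exact shape we need below), it gives:

$$ \begin{bmatrix} c_0 & c_1 \end{bmatrix} \bigotimes \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} c_0 & 0 & c_1 & 0 \\ 0 & c_0 & 0 & c_1 \end{bmatrix} $$

so that multiplying $\vec{c}^\top \bigotimes \mathbf{I}$ with the concatenation of the $\vec{t}_i$ yields exactly $\sum_i c_i \cdot \vec{t}_i$.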
Now if we define $\vec{t}$ as the concatenation of all $\vec{t}_i$, we can write all the checks that the verifier has to do as a matrix multiplication:
$$
\begin{bmatrix}
\vec{b}^\top & \vec{0} & \vec{0} \\
\vec{c}^\top & \vec{0} & -\vec{a}^\top \\
\vec{0} & \vec{c}^\top \bigotimes \mathbf{I} & -\mathbf{A}
\end{bmatrix} \cdot
\begin{bmatrix}
\vec{w} \\
\vec{t} \\
\vec{z}
\end{bmatrix} = \begin{bmatrix}
y \\
\vec{0} \\
\vec{0}
\end{bmatrix}
$$
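Continuing the same kind of toy sketch over integers mod $q$ (still ignoring the ring structure and the decompositions, with arbitrary toy dimensions), one can check that this block matrix encodes exactly the three verifier equations above:

```python
import numpy as np

q = 97
rng = np.random.default_rng(7)
n, r, m = 8, 4, 4                        # toy dimensions, purely illustrative

S = rng.integers(0, 2, size=(n, r))      # short witness columns s_0..s_3
A = rng.integers(0, q, size=(m, n))      # Ajtai matrix
T = (A @ S) % q                          # inner commitments t_i as columns
a = rng.integers(0, q, size=n)           # stand-in for the left evaluation vector
b = rng.integers(0, q, size=r)           # stand-in for the right evaluation vector
c = rng.integers(0, q, size=r)           # verifier challenges

w = (a @ S) % q                          # prover's left multiplication
y = int(w @ b) % q                       # claimed evaluation
z = (S @ c) % q                          # prover's opening
t = T.flatten(order="F")                 # concatenation t_0 || t_1 || ...

# Assemble the block matrix [[b^T, 0, 0], [c^T, 0, -a^T], [0, c^T ⊗ I, -A]].
row_y  = np.concatenate([b, np.zeros(r * m, dtype=int), np.zeros(n, dtype=int)])
row_w  = np.concatenate([c, np.zeros(r * m, dtype=int), (-a) % q])
rows_z = np.hstack([np.zeros((m, r), dtype=int),
                    np.kron(c, np.eye(m, dtype=int)),
                    (-A) % q])

block   = np.vstack([row_y, row_w, rows_z])
witness = np.concatenate([w, t, z])
result  = (block @ witness) % q

assert int(result[0]) == y               # first row recovers y = <b, w>
assert np.all(result[1:] == 0)           # remaining rows are the two zero checks
```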
You can probably start to see how this looks like the principal relation written as a matrix multiplication. We just need to:
- see the vector $\begin{bmatrix}\vec{w} \\ \vec{t} \\ \vec{z}\end{bmatrix}$ as the concatenation of vectors $\vec{s}'_i$ of the same size (let’s say $n$)
- see each row of the matrix as a concatenation of $\vec{\varphi}_i^\top$ of the same size ($n$)
This almost looks like LaBRADOR’s interface! And if we can pass that to LaBRADOR it will give us a better proof size by allowing us not to have to send $\vec{w}, \vec{t}, \vec{z}$ which are obviously large (especially the $\vec{t}$!)
I said almost, because our vector $\begin{bmatrix}\vec{w} \\ \vec{t} \\ \vec{z}\end{bmatrix}$ is not actually short, and LaBRADOR expects a short vector here. So next, let’s ensure that each sub-vector is small!
Making $\vec{t}$ Small
We can use the decomposed $\vec{t}$ instead: $\hat{t} = G^{-1}\left(\begin{bmatrix}\vec{t}_0 \\ \vec{t}_1 \\ \vdots\end{bmatrix}\right)$
Oh and by the way, remember that all the verifier had was a commitment $\vec{u}$ to these. So we have to add the verification of this opening as well:
$$ \begin{bmatrix} \colorbox{yellow}{$\vec{0}$} & \colorbox{yellow}{$\mathbf{B}$} & \colorbox{yellow}{$\vec{0}$} \\ \vec{b}^\top & \vec{0} & \vec{0} \\ \vec{c}^\top & \vec{0} & -\vec{a}^\top \\ \vec{0} & \vec{c}^\top \bigotimes \colorbox{yellow}{$\mathbf{G}$} & -\mathbf{A} \end{bmatrix} \cdot \begin{bmatrix} \vec{w} \\ \colorbox{yellow}{$\hat{t}$} \\ \vec{z} \end{bmatrix} = \begin{bmatrix} \colorbox{yellow}{$\vec{u}$} \\ y \\ \vec{0} \\ \vec{0} \end{bmatrix} $$
Making $\vec{w}$ Small
To make the $\vec{w}$ part smaller, we can use the decomposed $\hat{w} = G^{-1}(\vec{w})$. It is not enough to do just that: we also need to ensure that the prover commits to it before receiving the challenges from the verifier (due to the typical soundness issues that arise if you don’t). So the prover will instead send a small commitment $\vec{v} = \mathbf{D} \hat{w}$ (for some public commitment matrix $\mathbf{D}$) instead of $\vec{w}$, and the verifier will need to check that the opening $\hat{w}$ matches:
$$
\begin{bmatrix}
\colorbox{yellow}{$\mathbf{D}$} & \colorbox{yellow}{$\vec{0}$} & \colorbox{yellow}{$\vec{0}$} \\
\vec{0} & \mathbf{B} & \vec{0}\\
\vec{b}^\top \colorbox{yellow}{$\mathbf{G}$} & \vec{0} & \vec{0} \\
\vec{c}^\top \colorbox{yellow}{$\mathbf{G}$} & \vec{0} & -\vec{a}^\top \\
\vec{0} & \vec{c}^\top \bigotimes \mathbf{G} & -\mathbf{A}
\end{bmatrix} \cdot
\begin{bmatrix}
\colorbox{yellow}{$\hat{w}$} \\
\hat{t} \\
\vec{z}
\end{bmatrix} = \begin{bmatrix}
\colorbox{yellow}{$\vec{v}$} \\
\vec{u} \\
y \\
\vec{0} \\
\vec{0}
\end{bmatrix}
$$
Making $\vec{z}$ Small
Finally, the last piece missing is to make the $\vec{z}$ part short. Remember that this vector is computed as:
$$ \vec{z} = \begin{bmatrix}\vec{s}_0 & \vec{s}_1 & \vec{s}_2 & \vec{s}_3\end{bmatrix} \begin{bmatrix}c_0 \\ c_1 \\ c_2 \\ c_3\end{bmatrix} $$
where the $\vec{s}_i$ are already short vectors. The solution is to use challenges that are small enough not to increase the size of the resulting vector too much. In the Greyhound paper, the challenges are sampled as polynomial ring elements that have mostly zero coefficients, with the remaining coefficients close to zero.
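As a rough illustration of what such a challenge could look like (the actual challenge set in the Greyhound/LaBRADOR papers is defined more carefully, with fixed coefficient patterns and norm bounds; the degree and sparsity below are arbitrary):

```python
import numpy as np

def sample_small_challenge(degree=64, num_nonzero=20, rng=None):
    """Toy sampler: a ring element with few nonzero coefficients, all in {-1, +1}.
    The real challenge distribution is tuned much more carefully for soundness."""
    rng = rng or np.random.default_rng()
    coeffs = np.zeros(degree, dtype=int)
    positions = rng.choice(degree, size=num_nonzero, replace=False)
    coeffs[positions] = rng.choice([-1, 1], size=num_nonzero)
    return coeffs

c = sample_small_challenge(rng=np.random.default_rng(0))
print(np.count_nonzero(c), int(np.abs(c).max()))   # sparse, with tiny coefficients
```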
Putting It All Together
Finally, we have the following matrix multiplication that includes all the checks that the verifier has to perform on public values ($\mathbf{A}, \mathbf{B}, \mathbf{D}, \mathbf{G}$), statement values ($\vec{u}, \vec{a}, \vec{b}, y$), prover values ($\vec{v}$), verifier values ($\vec{c}$), and finally the witness vector $\begin{bmatrix}\hat{w} & \hat{t} & \vec{z}\end{bmatrix}^\top$:
$$
\begin{bmatrix}
\mathbf{D} & \vec{0} & \vec{0} \\
\vec{0} & \mathbf{B} & \vec{0}\\
\vec{b}^\top \mathbf{G} & \vec{0} & \vec{0} \\
\vec{c}^\top \mathbf{G} & \vec{0} & -\vec{a}^\top \\
\vec{0} & \vec{c}^\top \bigotimes \mathbf{G} & -\mathbf{A}
\end{bmatrix} \cdot
\begin{bmatrix}
\hat{w} \\
\hat{t} \\
\vec{z}
\end{bmatrix} = \begin{bmatrix}
\vec{v} \\
\vec{u} \\
y \\
\vec{0} \\
\vec{0}
\end{bmatrix}
$$
As explained above, all that’s left to do for the prover is to split the witness vector into subvectors $\vec{s}'_i$ of the same size, and for the verifier to split each row of the matrix into chunks $\vec{\varphi}_i^\top$ of the same size.
Hopefully you got most of what is happening here. The Greyhound paper introduces more concepts which are equally as important so we invite you to spend more time with the paper if you want to get a deeper understanding of the protocol.
We would like to thank Hoeteck Wee, Miha Stopar, Ngoc Khanh Nguyen, who provided help/feedback.