
In a previous blog post, we saw Greyhound, a Polynomial Commitment Scheme from standard lattice assumptions. As a final step, we showed that Greyhound verification can be expressed as a series of dot product constraints. This allowed us to use LaBRADOR, a proof system introduced in an earlier paper that achieves sublinear proof sizes for such constraint systems. In this blog post, we will discuss LaBRADOR and its core ideas.
LaBRADOR was a significant improvement in the area of lattice-based proofs, and it showed that there can be a practical alternative to hash-based proof systems when it comes to post-quantum systems. It is transparent, with linear prover and verifier time, and achieves sublinear proof size ($O(\log n)$ for $n$ R1CS constraints) by using recursion. There are several libraries implementing LaBRADOR, like Lattirust/Labrador, LaZeR, icicle-labrador, and Lazarus.
High level overview
Before diving into the details of LaBRADOR, let's start with a very high level overview of the protocol to get the main idea.
The prover wants to prove knowledge of some (short) vectors $\vec{s}_{i}$ that satisfy certain dot product constraints. First, the prover commits to the vectors and to these dot products.
Then, the verifier needs to check that these commitments have been computed correctly, and that the dot product constraints are indeed satisfied. Instead of asking for all the openings of the $\vec{s}_{i}$ and checking them one by one, the verifier asks the prover to provide a random linear combination of them (called amortized opening). This is a single vector $\vec{z}$, and now the verifier will only have to check this against the commitments and the constraints.
And here is the key observation: the verifier's check at the end is just another case of checking dot products, but with $\vec{z}$ (and some other terms) as the witness. Which means that we can run the protocol again to do this check, with a now shorter witness than before. In practice, this recursion is run six or seven times for best results.
Of course, this is only the outline of the protocol. In reality, we have omitted, among other things, a crucial part of such lattice-based schemes: checking that the vectors are indeed short. We'll deal with this later on and, as we'll see, this can be very tricky.
But first, some background.
Lattice assumptions
For a brief introduction to lattices, their hard problems, and some of the assumptions used in cryptography, check out our blog post on Greyhound. In this post, we'll make use of the M-SIS problem and Ajtai commitments.
LaBRADOR is based on the Module Short Integer Solution (M-SIS) problem.
The SIS problem is about finding a linear combination of the columns of a matrix that results in the zero vector. That linear combination cannot be trivial (i.e. it cannot be the zero vector), and its coefficients must be small (i.e. smaller than some defined bound). In plain SIS, the entries are in $\mathbb{Z}_{q}$, while in Module-SIS, the entries are in a polynomial ring $\mathcal{R}_q$, which we'll define later.
More formally, M-SIS states that, given a uniformly random matrix $\mathbf{B}$ with entries in $\mathcal{R}_{q}$, it is hard to find a vector $\vec{\mathbf{x}} \in \mathcal{R}_{q}^{n}$ such that $\mathbf{B}\cdot \vec{\mathbf{x}} = \vec{\mathbf{0}} \mod q$, with $0<\lVert \vec{\mathbf{x}} \rVert \leq \beta$. We'll define the norm $\lVert \cdot \rVert$ used in LaBRADOR in a later section.
Ajtai commitments
Ajtai commitments are at the heart of LaBRADOR. We introduce them in more detail in our previous blog post.
Recall that to commit to a short vector $\vec{\mathbf{s}}$ using a (public) matrix $\mathbf{A}$, the prover must compute the commitment as $\vec{\mathbf{t}} = \mathbf{A} \vec{\mathbf{s}} \in \mathcal{R}_{q}^{\kappa}$.
The dimension $\kappa$ of the commitment can be much smaller than the dimension of $\vec{\mathbf{s}}$. Note also that $\kappa$ is chosen according to the desired security level and does not depend on the dimension of $\vec{\mathbf{s}}$, which is very beneficial for the asymptotic complexity of constructions built on Ajtai commitments.
The binding property is based on the M-SIS problem. Finding a second opening $\vec{\mathbf{s}}'$ for the same commitment $\vec{\mathbf{t}}$ would imply $\mathbf{A}(\vec{\mathbf{s}} - \vec{\mathbf{s}}') = \mathbf{0}$. Hence, $(\vec{\mathbf{s}} - \vec{\mathbf{s}}')$ would be a non-trivial solution to the M-SIS problem.
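To make this concrete, here is a toy sketch over the integers (plain SIS rather than $\mathcal{R}_q$; the modulus and dimensions are illustrative, not secure parameters):

```python
import random

q = 97  # toy modulus (illustrative only)

def ajtai_commit(A, s):
    # t = A·s mod q: compressing, and binding as long as s is short
    return [sum(a * x for a, x in zip(row, s)) % q for row in A]

random.seed(0)
kappa, n = 3, 8  # commitment dimension << witness dimension
A = [[random.randrange(q) for _ in range(n)] for _ in range(kappa)]
s = [random.randrange(-2, 3) for _ in range(n)]  # short witness
t = ajtai_commit(A, s)  # a kappa-dimensional commitment
```

A second short opening $\vec{\mathbf{s}}'$ with `ajtai_commit(A, s2) == t` would immediately yield the M-SIS solution $\vec{\mathbf{s}} - \vec{\mathbf{s}}'$.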
Polynomial Rings
The polynomial ring LaBRADOR uses is defined as:
$$ \mathcal{R}_{q} = \mathbb{Z}_{q}[X] / (X^d + 1) $$Essentially, it contains all the polynomials of degree up to $d-1$, with coefficients in $\mathbb{Z}_{q}$, and operations between its elements defined modulo $X^d+1$. In practice, LaBRADOR uses $d=64$.
A quick way to reduce the result of polynomial operations modulo $X^d+1$ is by replacing $X^d$ with $-1$ (observe $X^{d} +1\equiv 0 \iff X^{d} \equiv -1$ in the ring).
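As a sketch, multiplication in $\mathcal{R}_q$ can be implemented as schoolbook multiplication plus this wrap-around rule (in practice, NTT-based multiplication is used instead):

```python
def ring_mul(a, b, q, d):
    # multiply a, b in Z_q[X]/(X^d + 1); a, b are coefficient lists of length d
    res = [0] * d
    for i in range(d):
        for j in range(d):
            coeff = a[i] * b[j]
            if i + j < d:
                res[i + j] = (res[i + j] + coeff) % q
            else:
                # X^(i+j) = -X^(i+j-d), since X^d ≡ -1 in the ring
                res[i + j - d] = (res[i + j - d] - coeff) % q
    return res
```

For example, in $\mathbb{Z}_{17}[X]/(X^2+1)$, $(1+X)^2 = 1 + 2X + X^2 \equiv 2X$.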
Using polynomial rings is a pretty standard practice in lattice-based cryptography. It allows us to take advantage of the "structure" they have, which makes operations like multiplications more efficient than their integer counterparts for the same security level. Recall that polynomial multiplication can be very efficient under certain conditions, for example by using the Number Theoretic Transform (NTT), with complexity $O(n \log n)$ compared to the naive $O(n^{2})$.
We will use bold letters for ring elements $\mathbf{a} \in \mathcal{R}_{q}$, bold vectors for vectors of ring elements $\vec{\mathbf{a}} = (\mathbf{a_{1}}, \mathbf{a_{2}}, \dots, \mathbf{a}_{n}) \in \mathcal{R}_{q}^{n}$, and regular font for $\mathbb{Z}_{q}$ elements. We can also represent ring elements by their coefficient vector $\vec{a} = (a_{0}, a_{1}, \dots, a_{d-1}) \in \mathbb{Z}_{q}^{d}$.
In LaBRADOR, "short" refers to the $\ell_{2}$-norm, which is defined for ring elements as:
$$ \lVert \mathbf{a} \rVert =\lVert \mathbf{a} \rVert_{2} = \sqrt{ \lvert a_{0}\rvert^2 + \dots + \lvert a_{d-1} \rvert ^{2} } $$and for vectors of ring elements as:
$$ \lVert \vec{\mathbf{a}} \rVert = \lVert \vec{\mathbf{a}} \rVert _{2} = \sqrt{ \lVert \mathbf{a}_{1} \rVert^2 + \dots + \lVert \mathbf{a}_{n} \rVert ^{2}} $$We also define the dot product $\langle \cdot, \cdot \rangle : \mathcal{R}_{q}^{n} \times \mathcal{R}_{q}^{n} \to \mathcal{R}_{q}$ as:
$$ \langle \vec{\mathbf{a}},\vec{\mathbf{b}} \rangle = \mathbf{a}_{1}\mathbf{b}_{1} + \dots + \mathbf{a}_{n}\mathbf{b}_{n} $$

The principal relation
As we've already mentioned, LaBRADOR allows a prover to prove knowledge of $r$ (short) vectors $\vec{\mathbf{s}}_{i}$ that satisfy some dot product constraints. A statement is a set of constraints defined formally as a set of functions $f: \mathcal{R}_{q}^{n} \times \mathcal{R}_{q}^{n} \times \dots \times \mathcal{R}_{q}^{n} \to \mathcal{R}_{q}$ of the form:
$$ f(\vec{\mathbf{s}}_{1}, \vec{\mathbf{s}}_{2}, \dots, \vec{\mathbf{s}}_{r}) = \sum_{i,j=1}^{r}\mathbf{a}_{i,j}\langle \vec{\mathbf{s}}_{i}, \vec{\mathbf{s}}_{j} \rangle + \sum_{i=1}^{r}\langle \vec{\boldsymbol{\phi}}_{i}, \vec{\mathbf{s}}_{i} \rangle - \mathbf{b} $$We can see that these constraints can be very flexible. The first sum can capture cross-interactions between the $\vec{\mathbf{s}}_{i}$, while the second sum captures individual $\vec{\mathbf{s}}_{i}$ terms. This is reminiscent of other constraint systems allowing for such quadratic constraints, like R1CS. Indeed, any R1CS instance can be transformed into this system.
Formally, our constraint system will be a family $\mathcal{F}$ of such functions, and our principal relation will be:
$$ \mathcal{R}= \left\{ (\mathcal{F}, (\vec{\mathbf{s}}_{1}, \vec{\mathbf{s}}_{2}, \dots, \vec{\mathbf{s}}_{r} )) \;\;\middle|\;\; \begin{array}{l} f(\vec{\mathbf{s}}_{1}, \vec{\mathbf{s}}_{2}, \dots, \vec{\mathbf{s}}_{r}) = \mathbf{0} \;\;\;\;\forall\,\, f \in \mathcal{F} \\ \vec{\mathbf{s}}_{i} \;\text{"short"} \end{array} \right\} $$Once again, we'll defer examining what "short" means until later. For simplicity, we also omit a special type of constraint where we only care about the constant term.
We can now present a basic version of the main protocol.
A (simplified) main protocol
First, we define "short" $\vec{\mathbf{s}}_{i}$ to mean $\sum_{i} \lVert \vec{\mathbf{s}}_{i} \rVert^2 \leq \beta^{2}$, where $\beta$ is a fixed bound. For this version of the protocol, we will assume that we have an interactive subprotocol $\mathsf{NormCheck}\left( \left\{ \vec{\mathbf{s}}_{i} \right\}, \beta^2\right)$ which convinces the verifier that this is true.
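The condition itself (as opposed to the subprotocol that proves it) is simple to state in code; a minimal sketch, with each $\vec{\mathbf{s}}_{i}$ given by its centered integer coefficient vector:

```python
def norms_within_bound(vectors, beta_sq):
    # checks Σ_i ||s_i||² <= β², where each s_i is a list of
    # (centered) integer coefficients
    return sum(c * c for v in vectors for c in v) <= beta_sq

# toy example: ||(1,2)||² + ||(2,0)||² = 5 + 4 = 9
ok = norms_within_bound([[1, 2], [2, 0]], 9)
```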
The interaction starts with the prover computing the commitments to the $\vec{\mathbf{s}}_{i}$, that is $\vec{\mathbf{t}}_{i} = \mathbf{A}\vec{\mathbf{s}}_{i}$, and sending the $\vec{\mathbf{t}}_{i}$ to the verifier.
Then, the subprotocol $\mathsf{NormCheck}\left( \left\{ \vec{\mathbf{s}}_{i} \right\}, \beta^2 \right)$ is executed between the prover and the verifier.
Now, the protocol must address the constraints. To do this, the prover will compute and send the following for $i,j=1,\dots,r$:
- $\mathbf{g}_{ij} = \langle \vec{\mathbf{s}}_{i}, \vec{\mathbf{s}}_{j} \rangle$
- $\mathbf{h}_{ij} = \frac{1}{2}\left( \langle \vec{\boldsymbol{\phi}}_{i}, \vec{\mathbf{s}}_{j} \rangle + \langle \vec{\boldsymbol{\phi}}_{j}, \vec{\mathbf{s}}_{i} \rangle \right)$
The verifier samples $r$ challenges $\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{r}$ from a challenge space $\mathcal{C} \subset \mathcal{R}_{q}$ and sends them to the prover.
The challenge space $\mathcal{C}$ must be selected so that all pairwise differences $\mathbf{c}_{i} - \mathbf{c}_{j}$ are invertible in $\mathcal{R}_{q}$. This is important for the security of the proof system. More formally, it's needed for an extractor to exist and thus prove knowledge soundness. In practice, such challenges can be sampled by fixing some coefficients and sampling repeatedly until the challenge has a norm smaller than some threshold.
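The sample-and-reject loop can be sketched as follows. Note that both the coefficient distribution and the norm used for rejection here are illustrative placeholders; the paper fixes an exact coefficient pattern and rejects on the operator norm:

```python
import math
import random

def sample_challenge(d=64, bound=15.0, rng=random):
    # rejection-sampling sketch: draw small coefficients and retry
    # until the challenge's l2-norm is below the threshold
    while True:
        c = [rng.choice((-2, -1, 0, 1, 2)) for _ in range(d)]
        if math.sqrt(sum(x * x for x in c)) < bound:
            return c
```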
The prover will then perform what is called amortized opening. That is, instead of sending all the openings $\vec{\mathbf{s}}_{i}$ and having the verifier check them, they will compute and send the linear combination:
$$ \vec{\mathbf{z}} = \mathbf{c}_{1}\vec{\mathbf{s}}_{1} + \dots + \mathbf{c}_{r}\vec{\bf{s}}_{r} $$The verifier checks:
$$ \mathbf{A}\vec{\mathbf{z}} \stackrel{?}{=} \sum_{i=1}^{r}\mathbf{c}_{i}\vec{\mathbf{t}}_{i} $$This is essentially a batch verification that the commitments have been computed correctly by the prover. For security, the verifier also needs to check that $\vec{\mathbf{z}}$ is short.
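Why this batch check works follows from linearity. A toy end-to-end run over the integers (plain SIS instead of $\mathcal{R}_q$; all parameters illustrative):

```python
import random

q = 97  # toy modulus

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) % q for row in A]

random.seed(0)
n, kappa, r = 6, 3, 4
A = [[random.randrange(q) for _ in range(n)] for _ in range(kappa)]
s = [[random.randrange(-2, 3) for _ in range(n)] for _ in range(r)]  # short witnesses
t = [matvec(A, si) for si in s]                                      # commitments
c = [random.choice((-1, 0, 1)) for _ in range(r)]                    # toy challenges
z = [sum(c[i] * s[i][k] for i in range(r)) % q for k in range(n)]    # amortized opening
lhs = matvec(A, z)
rhs = [sum(c[i] * t[i][k] for i in range(r)) % q for k in range(kappa)]
assert lhs == rhs  # A·z equals the challenge-weighted sum of commitments
```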
Finally, the verifier will check if the constraints are satisfied by checking:
$$ \begin{align} &\langle \vec{\mathbf{z}},\vec{\mathbf{z}} \rangle \stackrel{?}{=} \sum_{i,j=1}^{r}\mathbf{g}_{ij}\mathbf{c}_{i}\mathbf{c}_{j}, \;\;\;\; \sum_{i=1}^{r}\langle \vec{\boldsymbol{\phi}}_{i}, \vec{\mathbf{z}} \rangle \mathbf{c}_{i} \stackrel{?}{=} \sum_{i,j=1}^{r}\mathbf{h}_{ij}\mathbf{c}_{i}\mathbf{c}_{j}, \\ \\ & \sum_{i,j=1}^{r}\mathbf{a}_{ij}\mathbf{g}_{ij} + \sum_{i=1}^{r}\mathbf{h}_{ii} - \mathbf{b} \stackrel{?}{=} \mathbf{0} \end{align} $$These three checks verify that the $\mathbf{g}_{ij}$ and $\mathbf{h}_{ij}$ have been computed correctly by the prover, and that the principal relation holds.
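To see why, e.g., the first of these checks is complete, here is the underlying identity on toy integer vectors (the real protocol works over $\mathcal{R}_q$):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

r = 3
s = [[1, -1, 2, 0], [0, 2, 1, -1], [1, 1, 0, 2]]  # toy witnesses
c = [2, -1, 3]                                    # toy challenges
g = [[dot(s[i], s[j]) for j in range(r)] for i in range(r)]
z = [sum(c[i] * s[i][k] for i in range(r)) for k in range(4)]
# <z, z> = Σ_{i,j} g_ij·c_i·c_j, by bilinearity of the dot product
assert dot(z, z) == sum(g[i][j] * c[i] * c[j] for i in range(r) for j in range(r))
```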
This concludes a basic version of the protocol. It is, however, far from efficient: the prover has to send all the commitments at the beginning, followed by the polynomials $\mathbf{g}_{ij}, \mathbf{h}_{ij}$, which amount to $O(r^2)$ ring elements, and the verifier then has to explicitly check the dot product constraints. On the bright side, instead of checking all the constraints and commitments one by one, we were able to reduce the number of checks by using the amortized opening technique.
We described the protocol assuming a single constraint $f$. This is without loss of generality: in the full protocol, all the functions in $\mathcal{F}$ are aggregated into one by taking a random linear combination.
Recursion: a first approach
Looking closely, we can see that the checks the verifier has to do are a set of dot products involving $\vec{\mathbf{z}}, \vec{\mathbf{t}}_{i}$ and the polynomials $\mathbf{g}_{ij}, \mathbf{h}_{ij}$. We also notice that the amortized opening reduced the size of our "main" witness $\left\{ \vec{\mathbf{s}}_{i}\right\}$ by a factor of $r$, since we folded $r$ vectors into one. Instead of having the verifier explicitly compute and check the dot products, why not run the protocol again, with these dot products as constraints, and $\vec{\mathbf{z}}$ as the witness?
This is the core idea of LaBRADOR. Instead of checking directly, we recurse the protocol with the new witness being $\vec{\mathbf{z}}$.
We defined the witness of the relation to be a set of $r$ (short) vectors $\vec{\mathbf{s}}_{i}$. What would we do if instead of many such vectors we only had one (short) vector $\vec{\mathbf{s}}$ we cared about as a witness, like we do in R1CS? We can simply split the vector $\vec{\mathbf{s}}$ into $r$ chunks. "Short" is actually defined with that in mind, since $\sum_{i=1}^{r} \lVert \vec{\mathbf{s}}_{i} \rVert^2_{2} \leq \beta^{2} \iff \lVert \vec{\mathbf{s}} \rVert^{2}_{2} \leq \beta^{2}$. For the recursion, we can do exactly that: concatenate all the different witness parts and then split into $r$ chunks.
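A sketch of this chunking, checking that the squared chunk norms sum to the squared norm of the full vector:

```python
def split_into_chunks(s, r):
    # split the flat witness into r equal-length chunks
    # (assumes len(s) is divisible by r; real implementations pad)
    n = len(s) // r
    return [s[i * n:(i + 1) * n] for i in range(r)]

s = [1, -2, 0, 3, 1, 1]
chunks = split_into_chunks(s, 3)
# Σ_i ||s_i||² equals ||s||²
assert sum(c * c for chunk in chunks for c in chunk) == sum(c * c for c in s)
```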
Note, however, that the choice of $r$ directly affects the efficiency of the proof system. A big $r$ would have the amortized opening reduce the main witness by a large factor, but it would also increase the number of polynomials $\mathbf{g}_{ij}, \mathbf{h}_{ij}$, which is $O(r^2)$. In practice, for round $i+1$, $r^{(i+1)}$ is selected to be $O(\lvert \vec{\mathbf{s}}^{(i)} \rvert^{1/3})$, resulting in $O(\log \log n)$ rounds of recursion.
Taking full advantage of recursion
While we now benefit from using recursion at the last step of verification, there is still room for improvement. As it is, the prover still needs to send all the commitments $\vec{\mathbf{t}}_{i}$ and all the polynomials $\mathbf{g}_{ij}, \mathbf{h}_{ij}$, which is still a considerable amount of communication.
LaBRADOR takes full advantage of the recursion by using outer commitments. Outer commitments are, essentially, commitments to the commitments. As we've seen, Ajtai commitments are "compressing", which means that by sending these outer commitments, the total size of communication (proof size in the non-interactive case) decreases.
This would make little sense if the verifier had to explicitly check that these outer commitments were computed correctly. In our case, however, we can include these checks as constraints in the recursion, and move the $\vec{\mathbf{t}}_{i}, \mathbf{g}_{ij}, \mathbf{h}_{ij}$ to the witness.
More rigorously, there will be outer commitments for the commitments $\vec{\mathbf{t}}_{i}$, but also for the polynomials $\mathbf{g}_{ij}, \mathbf{h}_{ij}$. The prover will send:
$$ \vec{\mathbf{u}}_{1} = \mathbf{B}\vec{\mathbf{t}} \in \mathcal{R}_{q}^{\kappa_{1}}, \;\;\;\;\vec{\mathbf{u}}_{2} = \mathbf{C}\vec{\mathbf{g}} + \mathbf{D}\vec{\mathbf{h}} \in \mathcal{R}_{q}^{\kappa_{2}} $$where $\vec{\mathbf{t}}$ is a concatenation of all the $\vec{\mathbf{t}}_{i}$, and $\vec{\mathbf{g}}, \vec{\mathbf{h}}$ the vectors containing $\mathbf{g}_{ij}, \mathbf{h}_{ij}$ respectively.
Now the simplified protocol described above will look largely the same, with the difference being that these outer commitments are sent instead, and that there are two extra equations to be checked by the verifier, exactly as they appear above. The new witness will be $\left( \vec{\mathbf{z}}, \vec{\mathbf{t}}, \vec{\mathbf{g}}, \vec{\mathbf{h}} \right)$.
Handling the nuts and bolts of Ajtai commitments
So far in this post we have largely ignored a crucial part of Ajtai commitments, and, by extension, LaBRADOR. We have not defined what it means for the committed vectors to be short or how the verifier checks that this is true. Recall that if this were not true, then the commitment would not be binding.
Predictably, for some bound $\gamma$, we want the committed vector $\vec{\mathbf{x}}$ to have a norm $\lVert \vec{\mathbf{x}} \rVert \leq \gamma$. This bound depends on the desired security level, and is related to the dimension $\kappa$ of the commitment.
In LaBRADOR, the shortness of the $\vec{\mathbf{s}}_i$ is checked using the $\mathsf{NormCheck}$ subprotocol. What we haven't checked so far is that:
- the vectors committed in the outer commitments are short, that is: $\lVert \vec{\mathbf{t}} \rVert \leq \gamma_{1}$ and $\sqrt{ \lVert \vec{\mathbf{g}} \rVert^2 + \lVert \vec{\mathbf{h}} \rVert^2 } \leq\gamma_{2}$
- the vector $\vec{\mathbf{z}}$ is small, that is: $\lVert \vec{\mathbf{z}} \rVert \leq \gamma$.
With the protocol we have so far presented, this is a problem. For instance, while an honest prover actually knows short vectors $\vec{\mathbf{s}}_{i}$, there is no guarantee that the commitment $\vec{\mathbf{t}}_{i} = \mathbf{A}\vec{\mathbf{s}}_{i}$ is short, since $\mathbf{A}$ is a uniformly random matrix. The prover therefore cannot directly commit to it with a binding outer commitment.
To handle this, LaBRADOR performs decomposition. The coefficients in $\mathbb{Z}_{q}$ of each ring element in the vector are decomposed according to some base $b$. For example, for a base $b_{1}$:
$$ \vec{\mathbf{t}}_{i} = \vec{\mathbf{t}}_{i}^{(0)} + \vec{\mathbf{t}}_{i}^{(1)}b_{1} +\dots + \vec{\mathbf{t}}_{i}^{(t_{1}-1)}b_{1}^{t_{1}-1} $$As a result, we now have $t_{1}$ vectors with coefficients in $\mathbb{Z}_{b_{1}}$, and we can concatenate them to get $\vec{\mathbf{t}}_{i}' \in \mathcal{R}_{q}^{\kappa \cdot t_{1}}$. For a suitable choice of $b_{1}$, this is now a short vector and the prover can commit to this to get a binding Ajtai commitment. Similarly for $\vec{\mathbf{g}}, \vec{\mathbf{h}}$.
LaBRADOR uses decomposition with centered representatives, which means that the coefficients will be in $[-\frac{b_{1}}{2}, \dots, 0, \dots, \frac{b_{1}}{2}]$, instead of the commonly used $[0, \dots, b_{1}-1]$. Here is the decomposition algorithm in Python (little-endian):
def decompose(num, b):
    # decompose num in base b with centered digits (little-endian:
    # least significant digit first), so each digit lies in [-b//2, b//2]
    if num == 0:
        return [0]
    mid = b // 2
    res = []
    while num != 0:
        m = num % b
        if m > mid:
            m = m - b  # shift to the centered representative
        res.append(m)
        num = (num - m) // b
    return res

# e.g. decompose(7, 4) == [-1, 2], since -1 + 2*4 == 7
For the actual norm checks, we simply have to set a suitable bound $\beta'$ when recursing the protocol with these vectors as witness.
As a side note, the vector $\vec{\mathbf{z}}$ is also decomposed into two parts before recursing, not because it will be committed, but because repeatedly folding it would quickly blow up its coefficients. This changes the $\langle \vec{\mathbf{z}}, \vec{\mathbf{z}} \rangle$ check slightly, as we'll see shortly.
The new main protocol
Integrating all the optimizations detailed in the previous sections, we can now present a version of the main protocol that is very close to the final one.
The prover starts by computing the commitments $\vec{\mathbf{t}}_{i}$ as before, and also computes the polynomials $\mathbf{g}_{ij}, \mathbf{h}_{ij}$. Then, they decompose them and produce the outer commitments $\vec{\mathbf{u}}_{1}, \vec{\mathbf{u}}_{2}$, which are sent to the verifier.
The prover and the verifier engage in the $\mathsf{NormCheck}(\left\{ \vec{\mathbf{s}}_{i} \right\}, \beta^{2})$ subprotocol, as before.
The verifier samples and sends challenges $\mathbf{c}_{1}, \mathbf{c}_{2}, \dots, \mathbf{c}_{r} \in \mathcal{C}$.
The prover responds with $\vec{\mathbf{z}}$ as before.
Finally, the verifier must do the same checks as in the simplified version with some slight differences:
- the check on $\langle \vec{\mathbf{z}}, \vec{\mathbf{z}} \rangle$ becomes $\langle \vec{\mathbf{z}}^{(0)}, \vec{\mathbf{z}}^{(0)} \rangle + 2b\langle \vec{\mathbf{z}}^{(1)}, \vec{\mathbf{z}}^{(0)} \rangle + b^{2}\langle \vec{\mathbf{z}}^{(1)}, \vec{\mathbf{z}}^{(1)} \rangle$, since $\vec{\mathbf{z}}$ is decomposed as $\vec{\mathbf{z}} = \vec{\mathbf{z}}^{(0)} + b \vec{\mathbf{z}}^{(1)}$ before recursing
- there are two added checks that the outer commitments were computed correctly: $\vec{\mathbf{u}}_{1} \stackrel{?}{=} \mathbf{B}\vec{\mathbf{t}}'$ and $\vec{\mathbf{u}}_{2} \stackrel{?}{=} \mathbf{C}\vec{\mathbf{g}}' + \mathbf{D}\vec{\mathbf{h}}'$
The recursion will have the checks above as constraints. The witness will be $\left( \vec{\mathbf{z}}^{(0)}, \vec{\mathbf{z}}^{(1)}, \vec{\mathbf{t}}', \vec{\mathbf{g}}', \vec{\mathbf{h}}' \right)$, and the norm bound $(\beta')^{2}$ will be an aggregate of the individual bounds we saw in the previous section.
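The modified $\langle \vec{\mathbf{z}}, \vec{\mathbf{z}} \rangle$ check relies on a simple algebraic identity; a toy check over integer vectors:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

b = 8                       # decomposition base (toy value)
z0 = [3, -1, 2]
z1 = [1, 0, -2]
z = [x0 + b * x1 for x0, x1 in zip(z0, z1)]  # z = z0 + b·z1
lhs = dot(z, z)
rhs = dot(z0, z0) + 2 * b * dot(z1, z0) + b * b * dot(z1, z1)
assert lhs == rhs  # <z,z> expands exactly into the three decomposed terms
```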
The $\mathsf{NormCheck}$ subprotocol
The subprotocol to check that the vectors $\vec{\mathbf{s}}_{i}$ are indeed short is one of the central parts of LaBRADOR. Such a subprotocol is actually found in most lattice-based proof systems, since commitment schemes based on the M-SIS assumption require the committed vectors to be short in order to be binding.
LaBRADOR uses a protocol based on the modular Johnson-Lindenstrauss lemma. This lemma states that if we project a vector $\vec{s}$ to a lower dimension using a random projection matrix $\Pi$, drawn from a specific distribution, the projected vector $\vec{p}$ will have an $\ell_{2}$-norm that is within some well-defined distance of the $\ell_{2}$-norm of the original vector $\vec{s}$.
This projection reduces the dimension while (almost) preserving the norm of the original vector. This is very useful for our use case: the verifier can send the random projection matrices $\Pi_{i}$, and the prover can reply with the projected vectors $\vec{p}_{i}$. The verifier then checks whether $\lVert \vec{p}_{i} \rVert$ is within an allowed distance of the bound $\beta$. By the JL lemma, this implies that the original $\vec{s}_{i}$ is also bounded by $\beta$, up to a small slack factor we'll quantify below.
Let's get into the details and see how it works in practice.
First of all, notice that the lemma applies to vectors over $\mathbb{Z}_{q}$, which means that instead of $\vec{\mathbf{s}}_{i} \in \mathcal{R}_{q}^{n}$, the projection will be applied to the "flattened" vector of coefficients $\vec{s}_{i} \in \mathbb{Z}_{q}^{nd}$.
The projection matrix will be $\Pi: \mathbb{Z}_{q}^{nd} \to \mathbb{Z}_{q}^{256}$. It is a random matrix, and its entries can be $0, 1, -1$ with probabilities $\frac{1}{2}, \frac{1}{4}, \frac{1}{4}$ respectively.
The projection is computed as $\vec{p}_{i} =\Pi \cdot \vec{s}_{i}$. The important result of the modular Johnson-Lindenstrauss lemma is that with very high probability:
$$ \sqrt{ 30 }\lVert \vec{s}_{i} \rVert \leq\lVert \vec{p}_{i} \rVert \leq \sqrt{ 337 } \lVert \vec{s}_{i} \rVert $$LaBRADOR uses this to check for the norms of $\vec{\mathbf{s}}_{i}$, with the prover only having to send a vector of dimension 256 for each of them. The verifier accepts if $\lVert \vec{p}_{i} \rVert \leq \sqrt{ 128 }\beta$. For an honest execution with norm close to $\beta$, this is true with probability around $\frac{1}{2}$, so the honest prover can request new projection matrices until it passes.
There is a small amount of slack between what may be accepted by this check and what the actual bound may be. Specifically, here it would be $\frac{\sqrt{ 128 }}{\sqrt{ 30 }} \approx 2.07$. To intuitively grasp why that's the case, assume a cheating prover has a vector $\vec{s}^*$ of norm $b' > \beta$. We want to see what the largest $b'$ is that the prover can get away with. The best case scenario for the cheating prover would be for the random projection to have the smallest possible norm. From the JL lemma, this would be $\sqrt{ 30 }b'$. For the check to pass, it must hold that $\lVert \vec{p}_{i}^{*} \rVert \leq \sqrt{ 128 }\beta \implies \sqrt{ 30 }b' \leq \sqrt{ 128 }\beta \implies b' \leq \frac{\sqrt{ 128 }}{\sqrt{ 30 }}\beta$. Consequently, the maximum $b'$ for which the cheating prover succeeds will be $b' = \frac{\sqrt{ 128 }}{\sqrt{ 30 }}\beta$, which is around 2 times more than the required bound $\beta$. LaBRADOR compensates for this by selecting appropriate parameters and bounds to ensure the desired security level, even with this slack factor.
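A sketch of the projection itself (toy dimensions, and plain integers rather than $\mathbb{Z}_q$, to keep it minimal):

```python
import random

def sample_jl_matrix(rows, cols, rng):
    # entries: 0 with probability 1/2, +1 and -1 with probability 1/4 each
    return [[rng.choice((0, 0, 1, -1)) for _ in range(cols)] for _ in range(rows)]

def project(Pi, s):
    # p = Pi·s: one dot product per row of the projection matrix
    return [sum(p * x for p, x in zip(row, s)) for row in Pi]

rng = random.Random(1)
s = [rng.randrange(-3, 4) for _ in range(512)]  # flattened witness (toy size)
Pi = sample_jl_matrix(256, 512, rng)
p = project(Pi, s)                              # 256-dimensional projection
```

Whatever the dimension of $\vec{s}_{i}$, the prover only ever sends the 256-dimensional $\vec{p}_{i}$.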
Integrating the subprotocol
This approach fits LaBRADOR perfectly, since the projection is essentially a series of dot products between the rows of the projection matrices and the flattened vectors $\vec{s}_{i}$. By adding these dot products as constraints to the system, the verifier can check with very little overhead that the projections were computed correctly.
Thus, the interactive subprotocol $\mathsf{NormCheck}$ in the main protocol above is replaced by the verifier sending random projection matrices, and the prover computing the projections and replying with the projected vectors $\vec{p}_{i}$.
In the principal relation described in the paper, there is a small optimization we only mentioned in passing: there is a set of constraints $f'$ for which we only care about the constant term. These constraints are aggregated into the main constraint family $\mathcal{F}$ by taking a random linear combination. The constraints for correct computation of the projections are exactly such constraints.
Final notes
We saw LaBRADOR, a proof system based on standard lattice assumptions that leverages recursion to achieve sublinear proof size.
The LaBRADOR paper contains various optimizations and discussions on the bounds, as well as a reduction from R1CS to the principal relation, which are just as important as the concepts we selected and presented here. We highly recommend reading the paper for a deeper understanding of the protocol.