<p>Dankrad Feist, <a href="https://dankradfeist.de/ethereum/2021/07/27/inner-product-arguments">Inner Product Arguments</a>, 2021-07-27</p>
<h1 id="introduction">Introduction</h1>
<p>You might have heard of Bulletproofs: it’s a type of zero-knowledge proof that is used, for example, by Monero, and that does not require a trusted setup. The core of this proof system is the Inner Product Argument <sup id="fnref:2"><a href="#fn:2" class="footnote">1</a></sup>, a trick that allows a prover to convince a verifier of the correctness of an “inner product”. An inner product is the sum of the component-by-component products of two vectors:</p>
<script type="math/tex; mode=display">\vec a \cdot \vec b = a_0 b_0 + a_1 b_1 + a_2 b_2 + \cdots + a_{n-1} b_{n-1}</script>
<p>where <script type="math/tex">\vec a = (a_0, a_1, \ldots, a_{n-1})</script> and <script type="math/tex">\vec b = (b_0, b_1, \ldots, b_{n-1})</script>.</p>
<p>One interesting case is where we set the vector <script type="math/tex">\vec b</script> to be the powers of some number <script type="math/tex">z</script>, i.e. <script type="math/tex">\vec b = (1, z, z^2, \ldots, z^{n-1})</script>. Then the inner product becomes the evaluation of the polynomial</p>
<script type="math/tex; mode=display">f(X) = \sum_{i=0}^{n-1} a_i X^i</script>
<p>at <script type="math/tex">z</script>.</p>
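<p>This evaluation view is easy to check with a few lines of Python (all values purely illustrative):</p>

```python
# Inner product of two vectors: sum of componentwise products.
def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b))

# With b the powers of z, the inner product evaluates the polynomial
# whose coefficients are a.
a = [5, 1, 2]                      # coefficients of f(X) = 5 + X + 2X^2
z = 3
b = [z**i for i in range(len(a))]  # (1, z, z^2)
print(inner_product(a, b))         # 26, which equals f(3) = 5 + 3 + 2*9
```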
<p>Inner Product Arguments work on <em>Pedersen Commitments</em>. I have previously written about <a href="/ethereum/2020/06/16/kate-polynomial-commitments.html">KZG commitments</a>, and Pedersen commitments are similar in that the commitment is an element of an elliptic curve group. However, a difference is that they do not require a trusted setup. Here is a comparison of the KZG commitment scheme and a scheme using Pedersen commitments combined with an Inner Product Argument as a Polynomial Commitment Scheme (PCS):</p>
<table>
<thead>
<tr>
<th> </th>
<th>Pedersen+IPA</th>
<th>KZG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Assumption</td>
<td>Discrete log</td>
<td>Bilinear group</td>
</tr>
<tr>
<td>Trusted setup</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Commitment size</td>
<td>1 Group element</td>
<td>1 Group element</td>
</tr>
<tr>
<td>Proof size</td>
<td>2 log n Group elements</td>
<td>1 Group element</td>
</tr>
<tr>
<td>Verification</td>
<td>O(n) group operations</td>
<td>1 Pairing</td>
</tr>
</tbody>
</table>
<p>Basically, compared to KZG commitments, our commitment scheme is less efficient. Proofs are larger (<script type="math/tex">O(\log n)</script> group elements), which wouldn’t be the end of the world, as logarithmic is still very small. But unfortunately, the verifier has to do a linear amount of work, so the proofs are not succinct. This makes them impractical for some applications. However, in some cases this can be worked around:</p>
<ul>
<li>One example is my writeup on <a href="/ethereum/2021/06/18/pcs-multiproofs.html">multiopenings</a>. In this case, the trick is that you can aggregate many openings into a single one.</li>
<li>The Halo system <sup id="fnref:1"><a href="#fn:1" class="footnote">2</a></sup>, in which the linear verification cost of many openings is amortized.</li>
</ul>
<p>In both of these examples, the trick is to amortize many openings. If you only want to open a single polynomial, however, you have to incur the full linear cost.</p>
<p>However, the big advantage is that Pedersen commitments and Inner Product Arguments rest on far fewer assumptions: no pairing is needed, and they don’t require a trusted setup.</p>
<h1 id="pedersen-commitments">Pedersen commitments</h1>
<p>Before we can discuss Inner Product Arguments, we need to discuss the data structure that they operate on: Pedersen commitments. In order to use Pedersen commitments, we need an elliptic curve <script type="math/tex">G</script>. Let’s quickly remind ourselves what you can do in an elliptic curve (I will use additive notation because I think it is the more natural one):</p>
<ol>
<li>You can add two elliptic curve elements <script type="math/tex">g_0 \in G</script> and <script type="math/tex">g_1 \in G</script>:
<script type="math/tex">h = g_0 + g_1</script></li>
<li>You can multiply an element <script type="math/tex">g \in G</script> with a scalar <script type="math/tex">a \in \mathbb F_p</script>, where <script type="math/tex">p</script> is the curve order of <script type="math/tex">G</script> (i.e. the number of elements):
<script type="math/tex">h = a g</script></li>
</ol>
<p>There is no way to compute the “product” of two curve elements: the operation “<script type="math/tex">h * h</script>” is not defined, so you cannot compute “<script type="math/tex">h * h = a g * a g = a^2 g</script>”. Multiplying by a scalar, on the other hand, is easy: for example, <script type="math/tex">2 h = 2 a g</script>.</p>
<p>Another important property is that there is no efficient algorithm to compute “discrete logarithms”. The meaning of this is that given <script type="math/tex">h</script> and <script type="math/tex">g</script> with the property that <script type="math/tex">h=ag</script>, if you don’t know <script type="math/tex">a</script> it is computationally infeasible to find <script type="math/tex">a</script>. We call <script type="math/tex">a</script> the discrete logarithm of <script type="math/tex">h</script> with respect to <script type="math/tex">g</script>.</p>
<p>Pedersen commitments make use of this infeasibility to construct a commitment scheme. Let’s say you have two points <script type="math/tex">g_0</script> and <script type="math/tex">g_1</script> and their discrete logarithm with respect to each other (i.e. the <script type="math/tex">x \in \mathbb F_p</script> such that <script type="math/tex">g_1 = x g_0</script>) is unknown, then we can commit to two numbers <script type="math/tex">a_0, a_1 \in \mathbb F_p</script>:</p>
<script type="math/tex; mode=display">C = a_0 g_0 + a_1 g_1</script>
<p><script type="math/tex">C</script> is an element of the elliptic curve <script type="math/tex">G</script>.</p>
<p>To reveal the commitment, the prover gives the verifier the numbers <script type="math/tex">a_0</script> and <script type="math/tex">a_1</script>. The verifier computes <script type="math/tex">C</script> and if it matches will accept.</p>
<p>The central property of a commitment scheme is that it is binding. So given <script type="math/tex">C=a_0 g_0 + a_1 g_1</script>, could a cheating prover come up with <script type="math/tex">b_0, b_1 \in \mathbb F_p</script> such that the verifier will accept them, i.e. such that <script type="math/tex">C = b_0 g_0 + b_1 g_1</script> but with <script type="math/tex">b_0, b_1 \not= a_0, a_1</script>?</p>
<p>If someone can do this, then they could also find the discrete logarithm. Here is why: We know that <script type="math/tex">a_0 g_0 + a_1 g_1 = b_0 g_0 + b_1 g_1</script>, and by regrouping the terms on both sides of the equation we get</p>
<script type="math/tex; mode=display">(a_0 - b_0) g_0 = (b_1 - a_1) g_1</script>
<p>At least one of <script type="math/tex">a_0 - b_0</script> and <script type="math/tex">b_1 - a_1</script> has to be nonzero. Let’s say it’s <script type="math/tex">a_0 - b_0</script>; then we get:</p>
<script type="math/tex; mode=display">g_0 = \frac{b_1 - a_1}{a_0 - b_0} g_1 = x g_1</script>
<p>for <script type="math/tex">x = \frac{b_1 - a_1}{a_0 - b_0}</script>. Thus we’ve found <script type="math/tex">x</script>. Since we know this is a hard problem, in practice no attacker can perform this.</p>
<p>This means it’s computationally infeasible for an attacker to find alternative <script type="math/tex">b_0, b_1</script> to reveal for the commitment <script type="math/tex">C</script>. (They definitely do exist, they are just computationally infeasible to find – similar to finding a collision for a hash function).</p>
<p>We can generalize this and commit to a vector, i.e. a list of scalars <script type="math/tex">a_0, a_1, \ldots, a_{n-1} \in \mathbb F_p</script>. We just need a “basis”, i.e. an equal number of group elements that don’t have known discrete logarithms between them. Then we can compute the commitment</p>
<script type="math/tex; mode=display">C = a_0 g_0 + a_1 g_1 + a_2 g_2 + \ldots + a_{n-1} g_{n-1}</script>
<p>This gives us a vector commitment, although with quite a bad complexity: In order to reveal any element, all elements of the vector have to be revealed. But there is one redeeming property: The commitment scheme is additively homomorphic. This means that if we have another commitment <script type="math/tex">D = b_0 g_0 + b_1 g_1 + b_2 g_2 + \ldots + b_{n-1} g_{n-1}</script>, then it’s possible to just add the two commitments to get a new commitment to the sum of the two vectors <script type="math/tex">\vec a</script> and <script type="math/tex">\vec b</script>:</p>
<script type="math/tex; mode=display">C + D = (a_0 + b_0) g_0 + (a_1 + b_1) g_1 + (a_2 + b_2) g_2 + \ldots + (a_{n-1} + b_{n-1}) g_{n-1}</script>
<p>Thanks to this additive homomorphic property, this vector commitment actually turns out to be useful.</p>
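<p>Here is a minimal sketch of such a vector commitment and its homomorphic property. It uses a toy multiplicative group (quadratic residues modulo a small safe prime, so the curve’s <script type="math/tex">a g</script> becomes <script type="math/tex">g^a \bmod p</script>) in place of an elliptic curve; all parameters are illustrative and far too small to be secure:</p>

```python
# Toy Pedersen vector commitment in the quadratic-residue subgroup mod a
# safe prime (multiplicative notation). Illustrative only, not secure.
P, Q = 167, 83          # safe prime P = 2*Q + 1; scalars live mod Q

def commit(scalars, basis):
    """C = a_0 g_0 + a_1 g_1 + ... (here: prod of g_i ** a_i mod P)."""
    c = 1
    for a, g in zip(scalars, basis):
        c = c * pow(g, a % Q, P) % P
    return c

g = [16, 36, 100, 121]  # arbitrary quadratic residues: 4^2, 6^2, 10^2, 11^2
a = [3, 1, 4, 1]
b = [2, 7, 1, 8]
C, D = commit(a, g), commit(b, g)
# Additive homomorphism: combining C and D commits to the vector a + b.
assert C * D % P == commit([x + y for x, y in zip(a, b)], g)
```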
<h1 id="inner-product-argument">Inner Product Argument</h1>
<p>The basic strategy of the Inner Product Argument is “divide and conquer”: Take the problem and instead of completely solving it, turn it into a smaller one of the same type. At some point, it becomes so small that you can simply reveal everything and prove that the instance is correct.</p>
<p>At each step, the problem size halves. This ensures that after <script type="math/tex">\log n</script> steps, the problem is reduced to size one, so it can be proved trivially.</p>
<p>The idea is that we want to prove that a commitment <script type="math/tex">C</script> is of the form</p>
<script type="math/tex; mode=display">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script>
<p>where <script type="math/tex">\vec g = (g_0, g_1, \ldots, g_{n-1})</script> and <script type="math/tex">\vec h = (h_0, h_1, \ldots, h_{n-1})</script> as well as <script type="math/tex">q</script> are our “basis”, i.e. they are group elements in <script type="math/tex">G</script> and none of their discrete logarithms with respect to each other are known. We also introduced the new notation <script type="math/tex">\vec a \cdot \vec g</script> for a product between a vector of scalars (<script type="math/tex">\vec a</script>) and another vector of group elements (<script type="math/tex">\vec g</script>), and it is defined as</p>
<script type="math/tex; mode=display">\vec a \cdot \vec g = a_0 g_0 + a_1 g_1 + \cdots + a_{n-1} g_{n-1}</script>
<p>So essentially, we are proving that <script type="math/tex">C</script> is a commitment to</p>
<ul>
<li>a vector <script type="math/tex">\vec a</script> with basis <script type="math/tex">\vec g</script></li>
<li>a vector <script type="math/tex">\vec b</script> with basis <script type="math/tex">\vec h</script> and</li>
<li>their inner product <script type="math/tex">\vec a \cdot \vec b</script> with respect to the basis <script type="math/tex">q</script>.</li>
</ul>
<p>This in itself does not seem very useful – in most applications we want the verifier to know <script type="math/tex">\vec a \cdot \vec b</script>, and not just have it hidden in some commitment. But this can be remedied with a small trick which I will come to below.</p>
<h2 id="the-argument">The argument</h2>
<p>We want the prover to convince the verifier that <script type="math/tex">C</script> is of the form <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script>. As I mentioned before, instead of doing this outright, we will only reduce the problem by computing another commitment <script type="math/tex">C'</script> in such a way that if the property holds for <script type="math/tex">C'</script>, then it also holds for <script type="math/tex">C</script>.</p>
<p>In order to do this, the prover and the verifier play a little game. The prover commits to certain properties, after which the verifier sends a challenge, which leads to the next commitment <script type="math/tex">C'</script>. Describing it as a game does not mean the proof has to be interactive, though: the Fiat–Shamir construction allows us to turn interactive proofs into non-interactive ones by replacing the challenge with a collision-resistant hash of the commitments.</p>
<h3 id="statement-to-prove">Statement to prove</h3>
<p>The commitment <script type="math/tex">C</script> has the form <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script> with respect to the basis given by <script type="math/tex">\vec g, \vec h, q</script>. We call the fact that <script type="math/tex">C</script> has this form the “Inner Product Property”.</p>
<h3 id="reduction-step">Reduction step</h3>
<p>Let <script type="math/tex">m = \frac{n}{2}</script> (we assume <script type="math/tex">n</script> is a power of two, so the halving in each round is well-defined).</p>
<p>The prover computes</p>
<script type="math/tex; mode=display">z_L = a_m b_0 + a_{m+1} b_1 + \cdots + a_{n-1} b_{m-1} = \vec a_R \cdot \vec b_L \\
z_R = a_0 b_m + a_{1} b_{m+1} + \cdots + a_{m-1} b_{n-1} = \vec a_L \cdot \vec b_R</script>
<p>where we’ve defined <script type="math/tex">\vec a_L</script> as the “left half” of the vector <script type="math/tex">\vec a</script> and <script type="math/tex">\vec a_R</script> the “right half” and analogously for <script type="math/tex">\vec b</script>.</p>
<p>Then the prover computes the following commitments:</p>
<script type="math/tex; mode=display">C_L = \vec a_R \cdot \vec g_L + \vec b_L \cdot \vec h_R + z_L q \\
C_R = \vec a_L \cdot \vec g_R + \vec b_R \cdot \vec h_L + z_R q</script>
<p>and send them to the verifier. Then the verifier sends the challenge <script type="math/tex">x \in \mathbb F_p</script> (when using the Fiat-Shamir construction to make this non-interactive, <script type="math/tex">x</script> would instead be the hash of <script type="math/tex">C_L</script> and <script type="math/tex">C_R</script>). The prover uses this to compute the updated vectors</p>
<script type="math/tex; mode=display">\vec a' = \vec a_L + x \vec a_R \\
\vec b' = \vec b_L + x^{-1} \vec b_R</script>
<p>which have half the length.</p>
<p>Now the verifier computes the new commitment:</p>
<script type="math/tex; mode=display">C' = x C_L + C + x^{-1} C_R</script>
<p>as well as the updated basis</p>
<script type="math/tex; mode=display">\vec g' = \vec g_L + x^{-1} \vec g_R \\
\vec h' = \vec h_L + x \vec h_R</script>
<p>Now, <em>if</em> the new commitment <script type="math/tex">C'</script> has the property that it is of the form <script type="math/tex">C' = \vec a' \cdot \vec g' + \vec b' \cdot \vec h' + (\vec a' \cdot \vec b') q</script> – then the commitment <script type="math/tex">C</script> fulfills the original claim.</p>
<p>All the vectors have halved in size – so we have achieved something. From here we replace <script type="math/tex">C:=C'</script>, <script type="math/tex">\vec g := \vec g'</script> and <script type="math/tex">\vec h := \vec h'</script> and repeat this step.</p>
<p>Below I will go through the maths on why this works, but Vitalik also made a nice <a href="https://twitter.com/VitalikButerin/status/1371844878968176647">visual representation</a> that I recommend for getting an intuition.</p>
<h3 id="final-step">Final step</h3>
<p>When we repeat the step above, we will reduce <script type="math/tex">n</script> by a factor of two each time. At some point, we will encounter <script type="math/tex">n=1</script>. At this point we don’t repeat the step anymore. Instead the prover will send <script type="math/tex">\vec a</script> and <script type="math/tex">\vec b</script>, which in fact are now only a single scalar each. Then the verifier can simply compute</p>
<script type="math/tex; mode=display">D = a g + b h + a b q</script>
<p>and accept the statement if this is indeed equal to <script type="math/tex">C</script>, or reject if it is not.</p>
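<p>To make the whole flow concrete, here is a minimal end-to-end sketch of the protocol. It stands in a toy group – the quadratic residues modulo a small safe prime, written multiplicatively, so <script type="math/tex">a g</script> becomes <script type="math/tex">g^a \bmod p</script> – instead of an elliptic curve, and all parameters and helper names are illustrative and far too small to be secure:</p>

```python
import hashlib
import random

# Toy discrete-log group: quadratic residues mod a safe prime P = 2Q + 1.
# Illustrative only -- real systems use elliptic curves and ~256-bit scalars.
P = 167                 # safe prime, 2*83 + 1
Q = 83                  # order of the subgroup; scalars live in F_Q

def rand_basis(k, seed):
    """k group elements with no (intentionally) known relations."""
    rng, out = random.Random(seed), []
    while len(out) < k:
        g = pow(rng.randrange(2, P), 2, P)   # squaring lands in the subgroup
        if g != 1:
            out.append(g)
    return out

def msm(scalars, points):
    """The vector product a . g = a_0 g_0 + ... (here: prod g_i ** a_i)."""
    acc = 1
    for a, g in zip(scalars, points):
        acc = acc * pow(g, a % Q, P) % P
    return acc

def inner(u, v):
    return sum(s * t for s, t in zip(u, v)) % Q

def challenge(cl, cr):
    """Fiat-Shamir: a nonzero scalar derived from the round's commitments."""
    h = hashlib.sha256(f"{cl},{cr}".encode()).digest()
    return int.from_bytes(h, "big") % (Q - 1) + 1

def prove(a, b, g, h, q):
    proof = []
    while len(a) > 1:
        m = len(a) // 2
        aL, aR, bL, bR = a[:m], a[m:], b[:m], b[m:]
        gL, gR, hL, hR = g[:m], g[m:], h[:m], h[m:]
        CL = msm(aR, gL) * msm(bL, hR) * pow(q, inner(aR, bL), P) % P
        CR = msm(aL, gR) * msm(bR, hL) * pow(q, inner(aL, bR), P) % P
        proof.append((CL, CR))
        x = challenge(CL, CR)
        xinv = pow(x, -1, Q)
        a = [(l + x * r) % Q for l, r in zip(aL, aR)]      # a' = a_L + x a_R
        b = [(l + xinv * r) % Q for l, r in zip(bL, bR)]   # b' = b_L + x^-1 b_R
        g = [gl * pow(gr, xinv, P) % P for gl, gr in zip(gL, gR)]
        h = [hl * pow(hr, x, P) % P for hl, hr in zip(hL, hR)]
    return proof, a[0], b[0]    # final scalars revealed to the verifier

def verify(C, proof, a, b, g, h, q):
    for CL, CR in proof:
        m = len(g) // 2
        x = challenge(CL, CR)
        xinv = pow(x, -1, Q)
        C = pow(CL, x, P) * C * pow(CR, xinv, P) % P       # C' = x C_L + C + x^-1 C_R
        g = [gl * pow(gr, xinv, P) % P for gl, gr in zip(g[:m], g[m:])]
        h = [hl * pow(hr, x, P) % P for hl, hr in zip(h[:m], h[m:])]
    # final step: D = a g + b h + (a b) q must equal the folded commitment
    return C == pow(g[0], a, P) * pow(h[0], b, P) * pow(q, a * b % Q, P) % P

g, h, (q_el,) = rand_basis(4, 1), rand_basis(4, 2), rand_basis(1, 3)
a, b = [3, 1, 4, 1], [2, 7, 1, 8]
C = msm(a, g) * msm(b, h) * pow(q_el, inner(a, b), P) % P
proof, fa, fb = prove(a, b, g, h, q_el)
print(verify(C, proof, fa, fb, g, h, q_el))    # True
```

<p>The same fold-and-verify structure carries over unchanged to an elliptic curve group; only the group operations differ.</p>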
<h3 id="correctness-and-soundness">Correctness and soundness</h3>
<p>Above I claimed that if <script type="math/tex">C'</script> has the desired form, then it follows that <script type="math/tex">C</script> also has it. I now want to show why this is the case. In order to do this, we need to look at two things:</p>
<ul>
<li><em>Correctness</em> – i.e. a prover who follows the protocol can always convince the verifier of a correct statement; and</li>
<li><em>Soundness</em> – i.e. a dishonest prover cannot convince the verifier of an incorrect statement, except with a very small probability.</li>
</ul>
<p>Let’s start with correctness, which assumes that the prover does everything according to the protocol. We therefore know that <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script> with respect to the basis given by <script type="math/tex">\vec g, \vec h, q</script>. We need to show that then <script type="math/tex">C' = \vec a' \cdot \vec g' + \vec b' \cdot \vec h' + (\vec a' \cdot \vec b') q</script>.</p>
<p>The verifier computes <script type="math/tex">C' = x C_L + C + x^{-1} C_R</script>.</p>
<script type="math/tex; mode=display">C' = x C_L + C + x^{-1} C_R \\
= x ( \vec a_R \cdot \vec g_L + \vec b_L \cdot \vec h_R + z_L q) \\
+ \vec a_L \cdot \vec g_L + \vec a_R \cdot \vec g_R + \vec b_L \cdot \vec h_L + \vec b_R \cdot \vec h_R + (\vec a \cdot \vec b) q \\
+ x^{-1} (\vec a_L \cdot \vec g_R + \vec b_R \cdot \vec h_L + z_R q)\\
= (x \vec a_R + \vec a_L)\cdot(\vec g_L + x^{-1} \vec g_R) \\
+ (\vec b_L + x^{-1} \vec b_R)\cdot(\vec h_L + x \vec h_R) \\
+ (x z_L + \vec a \cdot \vec b + x^{-1} z_R) q \\
= (x \vec a_R + \vec a_L)\cdot \vec g' + (\vec b_L + x^{-1} \vec b_R)\cdot \vec h' + (x z_L + \vec a \cdot \vec b + x^{-1} z_R) q</script>
<p>So in order for the commitment to have the Inner Product Property, we need to verify that <script type="math/tex">(x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) = x z_L + \vec a \cdot \vec b + x^{-1} z_R</script>. This is true because</p>
<script type="math/tex; mode=display">(x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) \\
= x \vec a_R \cdot \vec b_L + \vec a_L \cdot \vec b_L + \vec a_R \cdot \vec b_R + x^{-1} \vec a_L \cdot \vec b_R \\
= x z_L + \vec a \cdot \vec b + x^{-1} z_R</script>
<p>This concludes the proof of correctness. Now in order to prove soundness, we need the property that a prover can’t start with a commitment <script type="math/tex">C</script> that does not fulfill the Inner Product Property and end up with a <script type="math/tex">C'</script> that does by going through the reduction step.</p>
<p>So let’s assume that the prover committed to <script type="math/tex">C=\vec a \cdot \vec g + \vec b \cdot \vec h + r q</script> for some <script type="math/tex">r \not= \vec a \cdot \vec b</script>. If we go through the same process as before, we find</p>
<script type="math/tex; mode=display">C' = (x \vec a_R + \vec a_L)\cdot \vec g' + (\vec b_L + x^{-1} \vec b_R)\cdot \vec h' + (x z_L + r + x^{-1} z_R) q</script>
<p>So now let’s assume that the prover managed to cheat, and thus <script type="math/tex">C'</script> fulfills the Inner Product Property. That means that</p>
<script type="math/tex; mode=display">(x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) = x z_L + r + x^{-1} z_R</script>
<p>Expanding the left hand side, we get</p>
<script type="math/tex; mode=display">x \vec a_R \cdot \vec b_L + \vec a \cdot \vec b + x^{-1} \vec a_L \cdot \vec b_R = x z_L + r + x^{-1} z_R</script>
<p>Note that the prover can choose <script type="math/tex">z_L</script> and <script type="math/tex">z_R</script> freely, so we cannot assume that they will be according to the above definitions.</p>
<p>Multiplying by <script type="math/tex">x</script> and moving everything to one side we get a quadratic equation in <script type="math/tex">x</script>:</p>
<script type="math/tex; mode=display">x^2 ( \vec a_R \cdot \vec b_L - z_L) + x (\vec a \cdot \vec b - r) + (\vec a_L \cdot \vec b_R - z_R ) = 0</script>
<p>Unless all the terms are zero, this equation has at most two solutions <script type="math/tex">x \in \mathbb F_p</script>. But the verifier chooses <script type="math/tex">x</script> after the prover has already committed to their values <script type="math/tex">r</script>, <script type="math/tex">z_L</script> and <script type="math/tex">z_R</script>. The probability that the prover can successfully cheat is thus extremely small; we typically choose the field <script type="math/tex">\mathbb F_p</script> to be of size ca. <script type="math/tex">2^{256}</script>, so the probability that the verifier chooses a value for <script type="math/tex">x</script> such that this equation holds, when the values were not chosen according to the protocol, is vanishingly small.</p>
<p>This concludes the soundness proof.</p>
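<p>Both halves of the argument hinge on the folding identity <script type="math/tex">\vec a' \cdot \vec b' = x z_L + \vec a \cdot \vec b + x^{-1} z_R</script>. Here is a quick numeric sanity check over a toy field (all values purely illustrative):</p>

```python
# Numeric sanity check of the folding identity from the correctness proof:
# <a', b'> = x*z_L + <a, b> + x^{-1}*z_R, over a small illustrative field.
Q = 83                          # toy field size (real systems use ~2^256)

def inner(u, v):
    return sum(s * t for s, t in zip(u, v)) % Q

a, b, x = [3, 1, 4, 1], [2, 7, 1, 8], 29   # arbitrary example values
m = len(a) // 2
aL, aR, bL, bR = a[:m], a[m:], b[:m], b[m:]
xinv = pow(x, -1, Q)            # modular inverse of the challenge
zL, zR = inner(aR, bL), inner(aL, bR)
a2 = [(l + x * r) % Q for l, r in zip(aL, aR)]      # a' = a_L + x a_R
b2 = [(l + xinv * r) % Q for l, r in zip(bL, bR)]   # b' = b_L + x^-1 b_R
assert inner(a2, b2) == (x * zL + inner(a, b) + xinv * zR) % Q
```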
<h2 id="only-compute-basis-changes-at-the-end">Only compute basis changes at the end</h2>
<p>The verifier has to do two things each round: compute the challenge <script type="math/tex">x</script> and compute the updated bases <script type="math/tex">\vec g'</script> and <script type="math/tex">\vec h'</script>. However, updating the basis at every round is inefficient. Instead, the verifier can simply keep track of the challenge values <script type="math/tex">x_1</script>, <script type="math/tex">x_2</script>, up to <script type="math/tex">x_{\ell}</script> that they encounter during the <script type="math/tex">\ell</script> rounds.</p>
<p>Let’s call the basis after round <script type="math/tex">k</script> <script type="math/tex">\vec g_k, \vec h_k</script>. The final elements <script type="math/tex">g_\ell</script> and <script type="math/tex">h_\ell</script> are single group elements (vectors of length one), because we end the protocol once our vectors have reached length one. Computing <script type="math/tex">g_\ell</script> from <script type="math/tex">\vec g_0</script> is a multiscalar multiplication (MSM) of length <script type="math/tex">n</script>. The scalar factors for <script type="math/tex">\vec g_0</script> are the coefficients of the polynomial</p>
<script type="math/tex; mode=display">f_g(X) = \prod_{j=0}^{\ell-1} \left(1+x^{-1}_{\ell-j} X^{2^{j}}\right)</script>
<p>and the scalar factors for <script type="math/tex">\vec h_0</script> are given by</p>
<script type="math/tex; mode=display">f_h(X) = \prod_{j=0}^{\ell-1} \left(1+x_{\ell-j} X^{2^{j}}\right)</script>
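<p>The basis-change polynomial is easy to verify numerically. The sketch below (toy field and arbitrary challenge values, chosen only for illustration) folds a vector of powers of <script type="math/tex">z</script> round by round, exactly as the protocol folds <script type="math/tex">\vec g</script>, and compares the single remaining value against the product formula:</p>

```python
# Fold b = (1, z, z^2, z^3) with two challenges and compare the result
# against f_g(z) = prod_{j=0}^{l-1} (1 + x_{l-j}^{-1} * z^{2^j}).
Q = 83                          # toy field size
z, xs = 5, [29, 41]             # evaluation point and challenges x_1, x_2
b = [pow(z, i, Q) for i in range(4)]
for x in xs:                    # one folding round per challenge
    m = len(b) // 2
    xinv = pow(x, -1, Q)
    b = [(l + xinv * r) % Q for l, r in zip(b[:m], b[m:])]
fg = 1
for j, x in enumerate(reversed(xs)):   # x_{l-j} for j = 0 .. l-1
    fg = fg * (1 + pow(x, -1, Q) * pow(z, 2**j, Q)) % Q
assert b[0] == fg
```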
<h1 id="using-inner-product-arguments-to-evaluate-polynomials">Using Inner Product Arguments to evaluate polynomials</h1>
<p>For our main application – evaluating a polynomial defined by <script type="math/tex">f(x) = \sum_{i=0}^{n-1} a_i x^i</script> at a point <script type="math/tex">z</script> – we want to make some small additions to this protocol.</p>
<ul>
<li>Most importantly, we want to know the result <script type="math/tex">f(z) = \vec a \cdot \vec b</script>, and not just that <script type="math/tex">C</script> has the “Inner Product Property”</li>
<li><script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script> is known to the verifier. We can thus make things a bit easier by removing it from the commitment</li>
</ul>
<h2 id="how-to-construct-the-commitment">How to construct the commitment</h2>
<p>If we want to verify a polynomial evaluation for the polynomial <script type="math/tex">f(x) = \sum_{i=0}^{n-1} a_i x^i</script>, then we are typically working from a commitment <script type="math/tex">F = \vec a \cdot \vec g</script>. The prover would send the verifier the evaluation <script type="math/tex">y=f(z)</script>.</p>
<p>So it seems like the verifier can just compute the initial commitment <script type="math/tex">C=\vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q = F + \vec b \cdot \vec h + f(z) q</script>, since they know <script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script>, and start the protocol.</p>
<p>But not so fast. In most cases, <script type="math/tex">F</script> will be a commitment that is generated by the prover. A malicious prover could cheat by, for example, committing to <script type="math/tex">F = \vec a \cdot \vec g + tq</script>. The extra <script type="math/tex">tq</script> term shifts the inner product hidden in the commitment, so they would be able to convince the verifier of the incorrect evaluation <script type="math/tex">y = f(z) - t</script>.</p>
<p>To prevent this, we need to make a small change to the protocol. After receiving the commitment <script type="math/tex">F</script> and the evaluation <script type="math/tex">y</script>, the verifier generates a random scalar <script type="math/tex">w</script> and rescales the basis element <script type="math/tex">q:=wq</script>. Afterwards the protocol can proceed as usual. Because the prover can’t predict what <script type="math/tex">w</script> is going to be, they can’t succeed (except with very small probability) at manipulating the result to be something other than <script type="math/tex">f(z)</script>.</p>
<p>Note that we would also need to stop the prover from manipulating the vector <script type="math/tex">\vec b</script> if what we want is a generic inner product – but for a polynomial evaluation, we can simply get rid of that part altogether, so I won’t go into the details.</p>
<h2 id="how-to-get-rid-of-the-second-vector">How to get rid of the second vector</h2>
<p>Note that the verifier knows the vector <script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script>. Given the challenges <script type="math/tex">x_1, x_2, \ldots, x_\ell</script>, they can simply compute the final folded value <script type="math/tex">b_\ell</script> using the same technique as the “basis change at the end”.</p>
<p>We can thus remove the second vector from all commitments and simply compute <script type="math/tex">b_\ell</script> directly. This means the verifier has to be able to compute the final value <script type="math/tex">b_\ell</script> from the initial vector <script type="math/tex">\vec b_0 = (1, z, z^2, ..., z^{n-1})</script>. Since the folding process for <script type="math/tex">\vec b</script> is the same as that for the basis vector <script type="math/tex">\vec g</script>, the coefficients of the previously defined polynomial <script type="math/tex">f_g</script> define the linear combination; in other words, <script type="math/tex">b_\ell = f_g(z)</script>.</p>
<h2 id="creating-an-ipa-for-a-polynomial-in-coefficient-form">Creating an IPA for a polynomial in coefficient form</h2>
<p>So far, we have used an Inner Product Argument to evaluate a polynomial that is committed to by its coefficients, which are the <script type="math/tex">f_i</script> for a polynomial defined by <script type="math/tex">f(X) = \sum_{i=0}^{n-1} f_i X^i</script>. However, often we want to work with a polynomial that is defined using its evaluations on a domain <script type="math/tex">x_0, x_1, \ldots, x_{n-1}</script>. Since any polynomial of degree less than <script type="math/tex">n</script> is uniquely defined by the evaluations <script type="math/tex">f(x_0), f(x_1), \ldots, f(x_{n-1})</script>, these two representations are completely equivalent. However, transforming between the two can be computationally expensive: it costs <script type="math/tex">O(n \log n)</script> operations if the domain admits an efficient Fast Fourier Transform, and otherwise it’s <script type="math/tex">O(n^2)</script>.</p>
<p>To avoid this cost, we try to simply never change to coefficient form. This can be done by changing the commitment to <script type="math/tex">f</script> by committing to the evaluations instead of the coefficients:</p>
<script type="math/tex; mode=display">C = f(x_0) g_0 + f(x_1) g_1 + \cdots + f(x_{n-1}) g_{n-1}</script>
<p>This means that our <script type="math/tex">\vec a</script> vector in the IPA is now given by
<script type="math/tex">\vec a = (f(x_0), f(x_1), \ldots, f(x_{n-1}))</script></p>
<p>The <a href="/ethereum/2021/06/18/pcs-multiproofs.html#evaluating-a-polynomial-in-evaluation-form-on-a-point-outside-the-domain">barycentric formula</a> allows us now to compute an IPA to evaluate a polynomial using this new commitment. It says that</p>
<script type="math/tex; mode=display">f(z) = A(z)\sum_{i=0}^{n-1} \frac{f(x_i)}{A'(x_i)} \frac{1}{z-x_i}</script>
<p>If we choose the vector <script type="math/tex">\vec b</script> to be</p>
<script type="math/tex; mode=display">b_i = \frac{A(z)}{A'(x_i)} \frac{1}{z-x_i}</script>
<p>we get that <script type="math/tex">\vec a \cdot \vec b = f(z)</script>, and thus an IPA with this vector can be used to prove the evaluation of a polynomial which is itself in evaluation form. Other than this, the strategy is exactly the same.</p>
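<p>The barycentric weights can be sanity-checked numerically. The sketch below uses a toy field and an arbitrary small domain, all chosen only for illustration:</p>

```python
# Check, over a small illustrative prime field, that the barycentric weights
# b_i = A(z)/A'(x_i) * 1/(z - x_i) turn the evaluation vector into f(z).
Q = 83                           # toy field size (real systems use ~2^256)
xs = [1, 2, 3, 4]                # evaluation domain
coeffs = [5, 1, 2, 7]            # f(X) = 5 + X + 2X^2 + 7X^3, for reference

def f(t):                        # evaluate f from its coefficients
    return sum(c * pow(t, i, Q) for i, c in enumerate(coeffs)) % Q

def A(t):                        # A(X) = prod (X - x_i)
    r = 1
    for xi in xs:
        r = r * (t - xi) % Q
    return r

def A_prime(xi):                 # A'(x_i) = prod_{j != i} (x_i - x_j)
    r = 1
    for xj in xs:
        if xj != xi:
            r = r * (xi - xj) % Q
    return r

z = 10                           # evaluation point outside the domain
a = [f(xi) for xi in xs]         # the polynomial in evaluation form
b = [A(z) * pow(A_prime(xi) * (z - xi) % Q, -1, Q) % Q for xi in xs]
assert sum(ai * bi for ai, bi in zip(a, b)) % Q == f(z)
```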
<div class="footnotes">
<ol>
<li id="fn:2">
<p>Bootle, Cerulli, Chaidos, Groth, Petit: <a href="https://eprint.iacr.org/2016/263.pdf">Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting</a> <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:1">
<p>Bowe, Grigg, Hopwood: <a href="https://eprint.iacr.org/2019/1021.pdf">Recursive Proof Composition without a Trusted Setup</a> <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>

<p>Dankrad Feist, <a href="https://dankradfeist.de/ethereum/2021/06/18/verkle-trie-for-eth1">Verkle trie for Eth1 state</a>, 2021-06-18</p>

<h1 id="verkle-trie-for-eth1-state">Verkle trie for Eth1 state</h1>
<p>This post is a quick summary on how verkle tries work and how they can be used in order to make Eth1 stateless. Note that this post is written with the KZG commitment scheme in mind as it is easy to understand and quite popular, but this can easily be replaced by any other “additively homomorphic” commitment scheme, meaning that it should be possible to compute the commitment to the sum of two polynomials by adding the two commitments.</p>
<h2 id="using-kzg-as-a-vector-commitment">Using KZG as a Vector commitment</h2>
<p>The <a href="/ethereum/2020/06/16/kate-polynomial-commitments.html">KZG (Kate) polynomial commitment scheme</a> lets a prover commit to a polynomial <script type="math/tex">f(x)</script> via an elliptic curve group element <script type="math/tex">C = [f(s)]_1</script> (for notation see the linked post).
We can then open this commitment at any point <script type="math/tex">z</script> by giving the verifier the value <script type="math/tex">y=f(z)</script> as well as a group element <script type="math/tex">\pi = [(f(s) - y)/(s-z)]_1</script>; this proof of the correctness of the value <script type="math/tex">y</script> can be checked using a pairing equation.</p>
<p>A vector commitment is a commitment scheme that takes as input <script type="math/tex">d</script> different values <script type="math/tex">v_0, v_1, \ldots, v_{d-1}</script> and produces a commitment <script type="math/tex">C</script> that can be opened at any of these values. As an example, a Merkle tree is a vector commitment, with the property that opening the <script type="math/tex">i</script>-th value requires <script type="math/tex">\log d</script> hashes as a proof.</p>
<p>Let <script type="math/tex">\omega</script> be a <script type="math/tex">d</script>-th root of unity, i.e. <script type="math/tex">\omega^d=1</script>, and <script type="math/tex">\omega^i \not=1</script> for <script type="math/tex">% <![CDATA[
0 < i<d %]]></script>.</p>
<p>We can turn the Kate commitment into a vector commitment that allows committing to a vector of length <script type="math/tex">d</script> by committing to the degree <script type="math/tex">d-1</script> polynomial <script type="math/tex">f(x)</script> that is defined by <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup></p>
<script type="math/tex; mode=display">% <![CDATA[
f(\omega^i) = v_i \text{ for } 0\leq i<d %]]></script>
<p>To open the commitment <script type="math/tex">C</script> at any point <script type="math/tex">i</script>, we simply have to compute a Kate proof for <script type="math/tex">f(\omega^i)</script>. Fortunately, this proof is constant sized: It does not depend on the width <script type="math/tex">d</script>. Even better, many of these proofs can be combined into a <a href="/ethereum/2021/06/18/pcs-multiproofs.html">single proof</a>, which is much cheaper to verify.</p>
<h2 id="introduction-to-verkle-tries">Introduction to Verkle tries</h2>
<p>Verkle is an amalgamation of “vector” and “Merkle”, due to the fact that they are built in a tree-like structure just like Merkle trees, but at each node, instead of a hash of the <script type="math/tex">d</script> nodes below (<script type="math/tex">d=2</script> for binary Merkle trees), they commit to the <script type="math/tex">d</script> nodes below using a vector commitment. <script type="math/tex">d</script>-ary Merkle trees are inefficient, because each proof has to include all the unaccessed siblings for each node on the path to a leaf. A <script type="math/tex">d</script>-ary Merkle tree thus needs <script type="math/tex">(d - 1) \log_d n = (d - 1) \frac{\log n}{\log d}</script> hashes for a single proof, which is worse than the binary Merkle tree, which only needs <script type="math/tex">\log n</script> hashes. This is because a hash function is a poor vector commitment: a proof requires all siblings to be given.</p>
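<p>To make the cost comparison concrete, here is a small Python sketch (the helper name is illustrative, not from the post) computing the number of hashes in a single <script type="math/tex">d</script>-ary Merkle proof:</p>

```python
from math import ceil, log2

def merkle_proof_hashes(n: int, d: int) -> int:
    """Hashes in one d-ary Merkle proof: (d - 1) unaccessed siblings
    per level, over ceil(log_d n) levels."""
    levels = ceil(log2(n) / log2(d))
    return (d - 1) * levels

# Over 2**30 leaves: a binary tree needs 30 hashes per proof, while a
# 16-ary tree needs 15 * 8 = 120, so wider is worse with plain hashing.
print(merkle_proof_hashes(2**30, 2), merkle_proof_hashes(2**30, 16))
```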
<p>Better vector commitments change this equation; by using the <a href="/ethereum/2020/06/16/kate-polynomial-commitments.html">KZG polynomial commitment scheme</a> as a vector commitment, each level only requires a constant size proof, so the annoying factor of <script type="math/tex">d-1</script> that kills <script type="math/tex">d</script>-ary Merkle trees disappears.</p>
<p>A verkle trie is a trie where the inner nodes are <script type="math/tex">d</script>-ary vector commitments to their children, where the <script type="math/tex">i</script>-th child contains all nodes with the prefix <script type="math/tex">i</script> as a <script type="math/tex">\log_2 d</script>-digit binary number. As an example, here is a <script type="math/tex">d=16</script> verkle trie with nine nodes inserted:
<img src="/assets/verkle_trie.svg" alt="verkle trie" /></p>
<p>The root of a leaf node is simply a hash of the (key, value) pair of 32 byte strings, whereas the root of an inner node is the hash of the vector commitment (in KZG, this is a <script type="math/tex">G_1</script> element).</p>
<h2 id="verkle-proof-for-a-single-leaf">Verkle proof for a single leaf</h2>
<p>We assume that key and value are known (they have to be provided in any witness scheme). Then for each inner node that the key path crosses, we have to add the commitment to that node to the proof. For example, let’s say we want to prove the leaf <code class="highlighter-rouge">0101 0111 1010 1111 -> 1213</code> in the above example (marked in green), then we have to give the commitment to <code class="highlighter-rouge">Node A</code> and <code class="highlighter-rouge">Node B</code> (both marked in cyan), as the path goes through these nodes. We don’t have to give the <code class="highlighter-rouge">Root</code> itself because it is known to the verifier. The <code class="highlighter-rouge">Root</code> as well as the node itself are marked in green, as they are data that is required for the proof, but is assumed as given and thus is not part of the proof.</p>
<p>Then we need to add a KZG proof for each inner node, that proves that the hash of the child is a correct reveal of the KZG commitment. So the proofs in this example would consist of three KZG evaluation proofs:</p>
<ul>
<li>Proof that the root (hash of key and value) of the node <code class="highlighter-rouge">0101 0111 1010 1111 -> 1213</code> is the evaluation of the commitment of <code class="highlighter-rouge">Inner node B</code> at the index <code class="highlighter-rouge">1010</code></li>
<li>Proof that the root of <code class="highlighter-rouge">Inner node B</code> (hash of the KZG commitment) is the evaluation of the commitment of <code class="highlighter-rouge">Inner node A</code> at the index <code class="highlighter-rouge">0111</code></li>
<li>Proof that the root of <code class="highlighter-rouge">Inner node A</code> (hash of the KZG commitment) is the evaluation of the <code class="highlighter-rouge">Root</code> commitment at the index <code class="highlighter-rouge">0101</code></li>
</ul>
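<p>The way a key is cut into per-level child indices can be sketched as follows (the helper is hypothetical); with <script type="math/tex">d=16</script>, each level consumes 4 bits of the key:</p>

```python
def path_indices(key_bits: str, bits_per_level: int) -> list:
    """Split a key into the child index consumed at each trie level."""
    return [key_bits[i:i + bits_per_level]
            for i in range(0, len(key_bits), bits_per_level)]

# The leaf 0101 0111 1010 1111 descends via index 0101 in the Root,
# 0111 in Inner node A and 1010 in Inner node B.
print(path_indices("0101011110101111", 4))  # ['0101', '0111', '1010', '1111']
```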
<p>But does that mean we need to add a Kate proof at each level, so that the complete proof will consist of <script type="math/tex">\log_d n - 1</script> elliptic curve group elements for the commitments [Note the -1 because the root is always known and does not have to be included in proofs] and an additional <script type="math/tex">\log_d n</script> group elements for the reveals?</p>
<p>Fortunately, this is not the case. KZG proofs can be compressed using different schemes to a small constant size, so given <em>any</em> number of inner nodes, the proof can be done using a small number of bytes. Even better, given any number of leaves to prove, we only need this small constant-size proof to prove them altogether! So the amortized cost is only the total size of the commitments of the inner nodes. Pretty amazing.</p>
<p>In practice, we want a scheme that is very efficient to compute and verify, so we use <a href="/ethereum/2021/06/18/pcs-multiproofs.html">this scheme</a>. It is not the smallest in size (but still pretty small at 128 bytes total), however it is very efficient to compute and check.</p>
<h2 id="average-verkle-trie-depth">Average verkle trie depth</h2>
<p>My numerical experiments indicate that the average depth (number of inner nodes on the path) of a verkle tree with <script type="math/tex">n</script> random keys inserted is <script type="math/tex">\log_d n + 2/3</script>.</p>
<p>For <script type="math/tex">n=2^{30}</script> and <script type="math/tex">d=2^{10}</script>, this results in an average trie depth of ca. <script type="math/tex">3.67</script>.</p>
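<p>As a quick sanity check of this estimate (the <script type="math/tex">2/3</script> term being the empirical correction stated above):</p>

```python
from math import log

def average_depth(n: int, d: int) -> float:
    """Empirically observed average path length: log_d(n) + 2/3."""
    return log(n, d) + 2 / 3

print(round(average_depth(2**30, 2**10), 2))  # 3.67
```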
<h2 id="attack-verkle-trie-depth">Attack verkle trie depth</h2>
<p>An attacker can attempt to fill up the siblings of an attacked key in order to lengthen the proof path. They only need to insert one key per level in order to maximise the proof size; for this, however, they have to be able to find hash prefix collisions. Currently it is possible to find prefix collisions of up to 80 bits, indicating that with <script type="math/tex">d=2^{10}</script>, up to 8 levels of collisions can be provoked. Note that this is only about twice the average expected depth for <script type="math/tex">n=2^{30}</script> keys, so the attack doesn’t achieve very much overall.</p>
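<p>The corresponding bound, sketched in Python (illustrative helper name):</p>

```python
from math import log2

def attacker_depth(collision_bits: int, d: int) -> int:
    """Trie levels an attacker can force: each extra level costs one
    hash-prefix collision of another log2(d) bits."""
    return collision_bits // int(log2(d))

print(attacker_depth(80, 2**10))  # 8
```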
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Note that we could use <script type="math/tex">f(i) = v_i</script> instead, which would seem more intuitive, but this convention allows the use of Fast Fourier Transforms in computing all the polynomials, which is much more efficient. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
<!-- End of post: “Verkle trie for Eth1 state” -->

<!-- Post: “PCS multiproofs using random evaluation” (2021-06-18), https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs -->
<h1 id="multiproof-scheme-for-polynomial-commitments">Multiproof scheme for polynomial commitments</h1>
<p>This post describes a multiproof scheme for (additively homomorphic) polynomial commitment schemes. It is very efficient to verify (the dominant operation for the verifier being one multiexponentiation), as well as efficient to compute as long as all the polynomials are fully available (it is not suitable where the aggregation has to be done from proofs alone, without access to the underlying data).</p>
<p>It is thus very powerful in the setting of verkle tries for the purpose of implementing <a href="/ethereum/2021/02/14/why-stateless.html">weak statelessness</a>.</p>
<p>Note that this post was written with the <a href="/ethereum/2020/06/16/kate-polynomial-commitments.html">KZG commitment scheme</a> in mind. It does however work for any “additively homomorphic” scheme, where it is possible to add two commitments together to get the commitment to the sum of the two polynomials. This means it can also be applied to <a href="/ethereum/2021/07/27/inner-product-arguments.html">Inner Product Arguments</a> (the core argument behind bulletproofs) and is actually a very powerful aggregation scheme for this use case.</p>
<h1 id="verkle-multiproofs-using-random-evaluation">Verkle multiproofs using random evaluation</h1>
<p>Problem: In a verkle tree of width <script type="math/tex">d</script>, we want to provide all the intermediate KZG (“Kate”) proofs as efficiently as possible.</p>
<p>We need to provide all intermediate commitments; there is no way around that in Verkle trees. But we only need a single KZG proof in the optimal case. There are efficient multi-verification techniques for KZG if all proofs are given to the verifier, but we want to make do with just a small constant number of proofs.</p>
<p>For the notation used here, please check my post on <a href="/ethereum/2020/06/16/kate-polynomial-commitments.html">KZG commitments</a>.</p>
<p>Please see <a href="/ethereum/2021/06/18/verkle-trie-for-eth1.html">here</a> for an introduction to verkle tries.</p>
<h2 id="connection-to-verkle-tries">Connection to verkle tries</h2>
<p>Quick recap: looking at this verkle trie:
<img src="/assets/verkle_trie.svg" alt="verkle trie" /></p>
<p>In order to prove the leaf value <code class="highlighter-rouge">0101 0111 1010 1111 -> 1213</code> we have to give the commitment to <code class="highlighter-rouge">Node A</code> and <code class="highlighter-rouge">Node B</code> (both marked in cyan), as well as the following KZG proofs:</p>
<ul>
<li>Proof that the root (hash of key and value) of the node <code class="highlighter-rouge">0101 0111 1010 1111 -> 1213</code> is the evaluation of the commitment of <code class="highlighter-rouge">Inner node B</code> at the index <code class="highlighter-rouge">1010</code></li>
<li>Proof that the root of <code class="highlighter-rouge">Inner node B</code> (hash of the KZG commitment) is the evaluation of the commitment of <code class="highlighter-rouge">Inner node A</code> at the index <code class="highlighter-rouge">0111</code></li>
<li>Proof that the root of <code class="highlighter-rouge">Inner node A</code> (hash of the KZG commitment) is the evaluation of the <code class="highlighter-rouge">Root</code> commitment at the index <code class="highlighter-rouge">0101</code></li>
</ul>
<p>Each of these commitments, let’s call them <script type="math/tex">C_0</script> (<code class="highlighter-rouge">Inner node B</code>), <script type="math/tex">C_1</script> (<code class="highlighter-rouge">Inner node A</code>) and <script type="math/tex">C_2</script> (<code class="highlighter-rouge">Root</code>), is to a polynomial function <script type="math/tex">f_i(X)</script>. What we are really saying by the claim that the commitment <script type="math/tex">C_i</script> evaluates to some <script type="math/tex">y_i</script> at index <script type="math/tex">z_i</script> is that <em>the function committed to</em> by <script type="math/tex">C_i</script>, i.e. <script type="math/tex">f_i(X)</script>, evaluates to <script type="math/tex">y_i</script> at <script type="math/tex">z_i</script>, i.e. <script type="math/tex">f_i(z_i) = y_i</script>. So what we need to prove is</p>
<ul>
<li><script type="math/tex">f_0(\omega^{0b1010}) = H(0101\ 0111\ 1010\ 1111, 1213)</script> (hash of key and value), where <script type="math/tex">C_0 = [f_0(s)]_1</script>, i.e. <script type="math/tex">C_0</script> is the commitment to <script type="math/tex">f_0(X)</script></li>
<li><script type="math/tex">f_1(\omega^{0b0111}) = H(C_0)</script>, where <script type="math/tex">C_1 = [f_1(s)]_1</script></li>
<li><script type="math/tex">f_2(\omega^{0b0101}) = H(C_1)</script>, where <script type="math/tex">C_2 = [f_2(s)]_1</script></li>
</ul>
<p>Note that we replaced the index with <script type="math/tex">z_i = \omega^{\text{the index}}</script>, where <script type="math/tex">\omega</script> is a <script type="math/tex">d</script>-th root of unity, which makes many operations more efficient in practice (we will explain why below). <script type="math/tex">H</script> stands for a collision-resistant hash function, for example <code class="highlighter-rouge">sha256</code>.</p>
<p>If we have a node deeper inside the trie (more inner nodes on the path), there will be more proofs to provide. Also, if we do a multiproof, where we provide the proof for multiple key/value pairs at the same time, the list of proofs will be even longer. Overall, we can end up with hundreds or thousands of evaluations of the form <script type="math/tex">f_i(z_i) = y_i</script> to prove, where we have the commitments <script type="math/tex">C_i = [f_i(s)]_1</script> (these are part of the verkle proof as well).</p>
<h2 id="relation-to-prove">Relation to prove</h2>
<p>The central part of a verkle multiproof (a verkle proof that proves many leaves at the same time) is to prove the following relation:</p>
<p>Given <script type="math/tex">m</script> KZG commitments <script type="math/tex">C_0 = [f_0(s)]_1, \ldots, C_{m-1}=[f_{m-1}(s)]_1</script>, prove evaluations</p>
<script type="math/tex; mode=display">f_0(z_0)=y_0 \\
\vdots\\
f_{m-1}(z_{m-1})=y_{m-1}</script>
<p>where <script type="math/tex">z_i \in \{\omega^0, \ldots, \omega^{d-1}\}</script>, and <script type="math/tex">\omega</script> is a <script type="math/tex">d</script>-th root of unity.</p>
<h2 id="proof">Proof</h2>
<ol>
<li>
<p>Let <script type="math/tex">r \leftarrow H(C_0, \ldots, C_{m-1}, y_0, \ldots, y_{m-1}, z_0, \ldots, z_{m-1})</script> (<script type="math/tex">H</script> is a hash function).
The prover computes the polynomial</p>
<script type="math/tex; mode=display">g(X) = r^0 \frac{f_0(X) - y_0}{X-z_0} + r^1 \frac{f_1(X) - y_1}{X-z_1} + \ldots +r^{m-1} \frac{f_{m-1}(X) - y_{m-1}}{X-z_{m-1}}</script>
<p>If we can prove that <script type="math/tex">g(X)</script> is actually a polynomial (and not a rational function), then it means that all the quotients are exact divisions, and thus the proof is complete. This is because it is a random linear combination of the quotients: if we just added the quotients, it could be that two of them just “cancel out” their remainders to give a polynomial. But because <script type="math/tex">r</script> is chosen after all the inputs are fixed (see <a href="https://en.wikipedia.org/wiki/Fiat%E2%80%93Shamir_heuristic">Fiat-Shamir heuristic</a>), it is computationally impossible for the prover to find inputs such that two of the remainders cancel.</p>
<p>Everything else revolves around proving that <script type="math/tex">g(X)</script> is a polynomial (and not a rational function) with minimal effort for the prover and verifier.</p>
<p>Note that any function that we can commit to via a KZG commitment is a polynomial. So the prover computes and sends the commitment <script type="math/tex">D = [g(s)]_1</script>. Now we only need to convince the verifier that <script type="math/tex">D</script> is, indeed, a commitment to the function <script type="math/tex">g(X)</script>. This is what the following steps are about.</p>
</li>
<li>
<p>We will prove the correctness of <script type="math/tex">D</script> by (1) evaluating it at a completely random point <script type="math/tex">t</script> and (2) helping the verifier check that the evaluation is indeed <script type="math/tex">g(t)</script>.
Let <script type="math/tex">t \leftarrow H(r, D)</script>.
We will evaluate <script type="math/tex">g(t)</script> and help the verifier evaluate the equation</p>
<script type="math/tex; mode=display">g(t) = \sum_{i=0}^{m-1}r^i \frac{f_i(t) - y_i}{t-z_i}</script>
<p>with the help of the prover. Note that we can split this up into two sums</p>
<script type="math/tex; mode=display">g(t) = \underbrace{\sum_{i=0}^{m-1} r^i \frac{f_i(t)}{t-z_i}}_{g_1(t)} - \underbrace{\sum_{i=0}^{m-1} r^i \frac{y_i}{t-z_i}}_{g_2(t)}</script>
<p>The second sum term <script type="math/tex">g_2(t)</script> is completely known to the verifier and can be computed using a small number of field operations. The first term can be computed by giving an opening to the commitment</p>
<script type="math/tex; mode=display">E = \sum_{i=0}^{m-1} \frac{r^i}{t-z_i} C_i</script>
<p>at <script type="math/tex">t</script>. Note that the commitment <script type="math/tex">E</script> itself can be computed by the verifier using a multiexponentiation (this will be the main part of the verifier work), because they have all the necessary inputs.</p>
<p>The prover computes</p>
<script type="math/tex; mode=display">h(X) = \sum_{i=0}^{m-1} r^i \frac{f_i(X)}{t-z_i}</script>
<p>which satisfies <script type="math/tex">E = [h(s)]_1</script>.</p>
</li>
<li>
<p>Let <script type="math/tex">f_D(X)</script> denote the polynomial committed to by <script type="math/tex">D</script> – if the prover is honest, then this will be <script type="math/tex">g(X)</script>; however, this is what the verifier wants to check. Due to the binding property of the commitment scheme, there is at most one polynomial that the prover can open <script type="math/tex">D</script> to, which makes <script type="math/tex">f_D(X)</script> well-defined.</p>
<p>What remains to be checked for the verifier to conclude the proof is that</p>
<script type="math/tex; mode=display">f_D (t) = h(t) - g_2(t)</script>
<p>or, reordering this:</p>
<script type="math/tex; mode=display">g_2(t) = h(t) - f_D(t)</script>
<p>The verifier can compute the left hand side <script type="math/tex">y=g_2(t)</script> without the help of the prover. What remains is for the prover to give an opening to the commitment <script type="math/tex">E - D</script> at <script type="math/tex">t</script> to prove that it is equal to <script type="math/tex">y</script>. The KZG proof <script type="math/tex">\pi = [(h(s) - g(s) - y)/(s-t)]_1</script> verifies that this is the case.</p>
<p>The proof consists of <script type="math/tex">D</script> and <script type="math/tex">\pi</script>.</p>
</li>
</ol>
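<p>The purely field-arithmetic parts of this protocol are easy to sketch. The following Python fragment (illustrative names, with a toy prime field standing in for the curve’s scalar field) shows how challenges like <script type="math/tex">r</script> and <script type="math/tex">t</script> can be derived via Fiat-Shamir and how the verifier computes <script type="math/tex">g_2(t)</script>:</p>

```python
import hashlib

P = 257  # toy prime field; a real implementation uses the curve's scalar field

def fiat_shamir(*values: int) -> int:
    """Derive a challenge by hashing the transcript (sketch of r and t)."""
    h = hashlib.sha256()
    for v in values:
        h.update(v.to_bytes(32, "big"))
    return int.from_bytes(h.digest(), "big") % P

def g2_at_t(r: int, t: int, ys: list, zs: list) -> int:
    """g_2(t) = sum_i r^i * y_i / (t - z_i), computable by the verifier alone."""
    acc, r_pow = 0, 1
    for y, z in zip(ys, zs):
        acc = (acc + r_pow * y * pow(t - z, P - 2, P)) % P
        r_pow = r_pow * r % P
    return acc
```

<p>The coefficients <script type="math/tex">r^i / (t - z_i)</script> of the multiexponentiation for <script type="math/tex">E</script> are computed in exactly the same way.</p>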
<h2 id="verification">Verification</h2>
<p>The verifier starts by computing <script type="math/tex">r</script> and <script type="math/tex">t</script>.</p>
<p>As we have seen above, the verifier can compute the commitment <script type="math/tex">E</script> (using one multiexponentiation) and the field element <script type="math/tex">g_2(t)</script>.</p>
<p>Then the verifier computes</p>
<script type="math/tex; mode=display">y = g_2(t)</script>
<p>The verifier checks the Kate opening proof</p>
<script type="math/tex; mode=display">e(E - D - [y]_1,[1]_2) = e(\pi, [s-t]_2)</script>
<p>This means that the verifier now knows that the commitment <script type="math/tex">D</script>, opened at a completely random point (one that the prover didn’t know when they committed to it), has exactly the value of <script type="math/tex">g(t)</script> which the verifier computed with the help of the prover. According to the <a href="https://en.wikipedia.org/wiki/Schwartz%E2%80%93Zippel_lemma">Schwartz-Zippel lemma</a>, this is extremely unlikely (read: impossible in practice, like finding a hash collision) unless <script type="math/tex">D</script> is actually a commitment to <script type="math/tex">g(X)</script>; thus, <script type="math/tex">g(X)</script> must be a polynomial and the proof is complete.</p>
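<p>To put a number on “extremely unlikely”: two distinct polynomials of degree less than <script type="math/tex">d</script> agree on at most <script type="math/tex">d-1</script> points, so a uniformly random <script type="math/tex">t</script> exposes a cheating prover except with probability about <script type="math/tex">d/|\mathbb{F}|</script>. A quick illustrative computation for <script type="math/tex">d=2^{10}</script> over the BLS12-381 scalar field (the constant below is its well-known group order):</p>

```python
# Order of the BLS12-381 scalar field (~2^255).
R = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001
d = 2**10

# Failure probability ~ d / R, i.e. roughly 2**-245.
security_bits = (R // d).bit_length()
print(security_bits)  # 245
```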
<h1 id="optimization-do-everything-in-evaluation-form">Optimization: Do everything in evaluation form</h1>
<p>This section explains various optimizations that make all of the above easy to compute; it is not essential for understanding the correctness of the proof (but it is for making an efficient implementation). The great advantage of the above version of KZG multiproofs, compared to many others, is that very large parts of the prover and verifier work only need to be done in the field. In addition, all these operations can be done on the evaluation form of the polynomial (or in maths terms: in the “Lagrange basis”). What that means and how it is used is explained below.</p>
<h2 id="evaluation-form">Evaluation form</h2>
<p>Usually, we see a polynomial as a sequence of coefficients <script type="math/tex">c_0, c_1, \ldots</script> defining a polynomial function <script type="math/tex">f(X) = \sum_i c_i X^i</script>. Here we define another way to look at polynomials: the so-called “evaluation form”.</p>
<p>Given <script type="math/tex">d</script> points <script type="math/tex">(\omega^0, y_0), \ldots, (\omega^{d-1}, y_{d-1})</script>, there is always a unique polynomial <script type="math/tex">f</script> of degree <script type="math/tex">% <![CDATA[
<d %]]></script> that passes through all these points, i.e. <script type="math/tex">f(\omega^i) = y_i</script> for all <script type="math/tex">% <![CDATA[
0 \leq i < d %]]></script>. Conversely, given a polynomial, we can easily compute the evaluations at the <script type="math/tex">d</script> roots of unity. We thus have a one-to-one correspondence of</p>
<script type="math/tex; mode=display">% <![CDATA[
\{\text{all polynomials of degree }<d\} \leftrightarrow \{\text{vectors of length } d \text{, seen as evaluations of a polynomial at } \omega^i\} %]]></script>
<p>This can be seen as a “change of basis”: On the left, the basis is “the coefficients of the polynomial”, whereas on the right, it’s “the evaluations of the polynomial on the <script type="math/tex">\omega^i</script>”.</p>
<p>Often, the evaluation form is more natural: For example, when we want to use KZG as a vector commitment, we will commit to a vector <script type="math/tex">(y_0, \ldots, y_{d-1})</script> by committing to a function that is defined by <script type="math/tex">f(\omega^i) = y_i</script>. But there are more advantages to the evaluation form: Some operations, such as multiplying two polynomials or dividing them (if the division is exact), are much more efficient in evaluation form.</p>
<p>In fact, all the operations in the KZG multiproof above can be done very efficiently in evaluation form, and in practice we never even compute the polynomials in coefficient form when we do this!</p>
<h2 id="lagrange-polynomials">Lagrange polynomials</h2>
<p>Let’s define the Lagrange polynomials on the domain <script type="math/tex">x_0, \ldots, x_{d-1}</script>.</p>
<script type="math/tex; mode=display">\ell_i(X) = \prod_{j \not= i} \frac{X - x_j}{x_i - x_j}</script>
<p>For any <script type="math/tex">x \in \{x_0, \ldots, x_{d-1}\}</script>,</p>
<script type="math/tex; mode=display">% <![CDATA[
\ell_i(x) = \begin{cases}
1 & \text{if } x=x_i \\
0 & \text{otherwise}
\end{cases} %]]></script>
<p>so the Lagrange polynomials can be seen as the “unit vectors” for polynomials in evaluation form. Using these, we can explicitly translate from the evaluation form to the coefficient form: say we’re given <script type="math/tex">(y_0, \ldots, y_{d-1})</script> as a polynomial in evaluation form, then the polynomial is</p>
<script type="math/tex; mode=display">f(X) = \sum_{i=0}^{d-1} y_i \ell_i(X)</script>
<p>Polynomials in evaluation form (given by the <script type="math/tex">y_i</script>) are sometimes called “Polynomials in Lagrange basis” because of this.</p>
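<p>These identities are easy to check numerically. A minimal sketch over a small prime field (the field, domain and helper names are illustrative):</p>

```python
P = 337  # small prime; the identities hold over any field

def lagrange_eval(i: int, x: int, domain: list) -> int:
    """ell_i(x) over the given domain, computed from the product formula."""
    num = den = 1
    for j, xj in enumerate(domain):
        if j != i:
            num = num * (x - xj) % P
            den = den * (domain[i] - xj) % P
    return num * pow(den, P - 2, P) % P

domain = [1, 2, 3, 4]
ys = [10, 20, 30, 40]

# Unit-vector property: ell_i(x_j) is 1 if i == j, else 0.
assert lagrange_eval(0, domain[0], domain) == 1
assert lagrange_eval(0, domain[1], domain) == 0

# f(X) = sum_i y_i * ell_i(X) reproduces the given evaluations.
f_at = lambda x: sum(y * lagrange_eval(i, x, domain)
                     for i, y in enumerate(ys)) % P
assert all(f_at(x) == y for x, y in zip(domain, ys))
```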
<p>For KZG commitments, we can use another trick: Recall that the <script type="math/tex">G_1</script> setup for KZG to commit to polynomials of degree <script type="math/tex">% <![CDATA[
<d %]]></script> consists of <script type="math/tex">% <![CDATA[
[s^i]_1, 0\leq i < d %]]></script>. From these, we can compute <script type="math/tex">% <![CDATA[
[\ell_i(s)]_1, 0\leq i < d %]]></script>. Then we can simply compute a polynomial commitment like this:</p>
<script type="math/tex; mode=display">[f(s)]_1 = \sum_{i=0}^{d-1} y_i [\ell_i(s)]_1</script>
<p>There is no need to compute the polynomial in coefficient form to compute its KZG commitment.</p>
<h2 id="fft-to-change-between-evaluation-and-coefficient-form">FFT to change between evaluation and coefficient form</h2>
<p>The Discrete Fourier Transform <script type="math/tex">u=\mathrm{DFT}(v)</script> of a vector <script type="math/tex">v</script> is defined by</p>
<script type="math/tex; mode=display">u_i = \sum_{j=0}^{d-1} v_j \omega^{ij}</script>
<p>Note that if we define the polynomial <script type="math/tex">f(X) = \sum_{j=0}^{d-1} v_j X^j</script>, then <script type="math/tex">u_i = f(\omega^i)</script>, i.e. the DFT computes the values of <script type="math/tex">f(X)</script> on the domain <script type="math/tex">% <![CDATA[
\omega^i, 0 \leq i < d %]]></script>. This is why in practice, we use the roots of unity as our domain whenever available: we can then use the DFT to compute the evaluation form from the coefficient form.</p>
<p>The inverse, the Inverse Discrete Fourier Transform <script type="math/tex">v_i = \mathrm{DFT}^{-1}(u_i)</script>, is given by</p>
<script type="math/tex; mode=display">v_i = \frac{1}{d}\sum_{j=0}^{d-1} u_j \omega^{-ij}</script>
<p>Similar to how the DFT computes the evaluations of a polynomial in coefficient form, the inverse DFT computes the coefficients of a polynomial from its evaluations.</p>
<p>To summarize:</p>
<script type="math/tex; mode=display">\text{coefficient form} \overset{\mathrm{DFT}}{\underset{\mathrm{DFT}^{-1}}\rightleftarrows} \text{evaluation form}</script>
<p>The “Fast Fourier Transform” is a fast algorithm that can compute the DFT or inverse DFT in only <script type="math/tex">\frac{d}{2} \log d</script> multiplications. A direct implementation of the sum above would take <script type="math/tex">d^2</script> multiplications. This speedup is huge and is what makes the FFT such a powerful tool.</p>
<p>In strict usage, DFT is the generic name for the operation, whereas FFT is an algorithm to implement it (similar to sorting being an operation, whereas quicksort is one possible algorithm to implement that operation). However, colloquially, people very often just speak of FFT even when they mean the operation as well as the algorithm.</p>
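<p>Both directions can be checked concretely with a naive implementation over a small prime field that contains the required roots of unity (the choices of <code class="highlighter-rouge">P</code> and <code class="highlighter-rouge">OMEGA</code> below are illustrative; an FFT would compute the same maps faster):</p>

```python
P, d = 257, 8                    # 257 = 2**8 + 1, so F_257* has order 256
OMEGA = pow(3, (P - 1) // d, P)  # 3 generates F_257*, so OMEGA has order 8

def dft(v):
    """u_i = sum_j v_j * omega^(i*j): coefficient form -> evaluation form."""
    return [sum(x * pow(OMEGA, i * j, P) for j, x in enumerate(v)) % P
            for i in range(d)]

def idft(u):
    """v_i = (1/d) * sum_j u_j * omega^(-i*j): evaluations -> coefficients."""
    d_inv = pow(d, P - 2, P)
    return [d_inv * sum(x * pow(OMEGA, -i * j, P) for j, x in enumerate(u)) % P
            for i in range(d)]

coeffs = [5, 1, 4, 0, 2, 0, 0, 0]
assert idft(dft(coeffs)) == coeffs        # the two maps invert each other
assert dft(coeffs)[0] == sum(coeffs) % P  # evaluation at omega^0 = 1
```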
<h2 id="multiplying-and-dividing-polynomials">Multiplying and dividing polynomials</h2>
<p>Let’s say we have two polynomials <script type="math/tex">f(X)</script> and <script type="math/tex">g(X)</script> such that the sum of their degrees is less than <script type="math/tex">d</script>. Then the product <script type="math/tex">h(X) = f(X) \cdot g(X)</script> is a polynomial of degree less than <script type="math/tex">d</script>. If we have the evaluations <script type="math/tex">f_i = f(x_i)</script> and <script type="math/tex">g_i = g(x_i)</script>, then we can easily compute the evaluations of the product:</p>
<script type="math/tex; mode=display">h_i = h(x_i) = f(x_i) g(x_i) = f_i g_i</script>
<p>This only needs <script type="math/tex">d</script> multiplications, whereas multiplying in coefficient form needs <script type="math/tex">O(d^2)</script> multiplications. So multiplying two polynomials is much easier in evaluation form.</p>
<p>Now let’s assume that <script type="math/tex">g(X)</script> divides <script type="math/tex">f(X)</script> exactly, i.e. there is a polynomial <script type="math/tex">q(X)</script> such that <script type="math/tex">f(X) = g(X) \cdot q(X)</script>. Then we can find this quotient <script type="math/tex">q(X)</script> in evaluation form</p>
<script type="math/tex; mode=display">q_i = q(x_i) = f(x_i) / g(x_i) = f_i / g_i</script>
<p>using only <script type="math/tex">d</script> divisions. Again, using long division, this would be a much more difficult task taking <script type="math/tex">O(d^2)</script> operations in coefficient form.</p>
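<p>As a minimal sketch (illustrative helpers, small prime field), we can multiply two polynomials in evaluation form and recover one factor again by pointwise division:</p>

```python
P = 337  # small prime field for illustration

def mul_eval(f, g):
    """Evaluation-form product: just d field multiplications."""
    return [a * b % P for a, b in zip(f, g)]

def div_eval(f, g):
    """Evaluation-form exact quotient: d field divisions (needs g_i != 0)."""
    return [a * pow(b, P - 2, P) % P for a, b in zip(f, g)]

# q(X) = X + 1 and g(X) = X + 2, both evaluated on the domain {1, 2, 3}:
q = [2, 3, 4]
g = [3, 4, 5]
f = mul_eval(q, g)          # evaluations of f(X) = (X + 1)(X + 2)
assert div_eval(f, g) == q  # the quotient comes back pointwise
```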
<p>We can use this trick to compute openings for Kate commitments in evaluation form, where we need to compute a polynomial quotient; this is how the proof <script type="math/tex">\pi</script> above can be computed.</p>
<h2 id="dividing-when-one-of-the-points-is-zero">Dividing when one of the points is zero</h2>
<p>There is only one problem: What if one of the <script type="math/tex">g_i</script> is zero, i.e. <script type="math/tex">g(X)</script> is zero somewhere on our evaluation domain? Note by definition this can only happen if <script type="math/tex">f_i</script> is also zero, otherwise <script type="math/tex">g(X)</script> cannot divide <script type="math/tex">f(X)</script>. But if both are zero, then we are left with <script type="math/tex">q_i = 0/0</script> and can’t directly compute it in evaluation form. But do we have to go back to coefficient form and use long division? It turns out that there’s a trick to avoid this, at least in the case that we care about: Often, <script type="math/tex">g(X) = X-x_m</script> is a linear factor. We want to compute</p>
<script type="math/tex; mode=display">q(X) = \frac{f(X)}{g(X)} = \frac{f(X)}{X-x_m} = \sum_{i=0}^{d-1} f_i \frac{\ell_i(X)}{X-x_m}</script>
<p>Now, we introduce the polynomial <script type="math/tex">A(X) = \prod_{i=0}^{d-1} (X-x_i)</script>. The roots of this polynomial are exactly the points of the domain.</p>
<p>The <a href="https://en.wikipedia.org/wiki/Formal_derivative">formal derivative</a> of <script type="math/tex">A</script> is given by</p>
<script type="math/tex; mode=display">A'(X) = \sum_{j=0}^{d-1} \prod_{i \not= j}(X-x_i)</script>
<p>This polynomial is extremely useful because we can write the Lagrange polynomials as</p>
<script type="math/tex; mode=display">\ell_i(X) = \frac{1}{A'(x_i)}\frac{A(X)}{X-x_i}</script>
<p>so</p>
<script type="math/tex; mode=display">\frac{\ell_i(X)}{X-x_m} = \frac{1}{A'(x_i)}\frac{A(X)}{(X-x_i)(X-x_m)} = \frac{A'(x_m)}{A'(x_i)}\frac{\ell_m(X)}{X - x_i}</script>
<p>Now, let’s go back to the equation for <script type="math/tex">q(X)</script>. The one problem we have if we want to get this in evaluation form is the point <script type="math/tex">q(x_m)</script>, where we encounter a division by zero; all the other points are easy to compute. But now we can replace</p>
<script type="math/tex; mode=display">q(X) = \sum_{i=0}^{d-1} f_i \frac{\ell_i(X)}{X-x_m} = \sum_{i=0}^{d-1} f_i \frac{A'(x_m)}{A'(x_i)}\frac{\ell_m(X)}{X - x_i}</script>
<p>Because <script type="math/tex">\ell_m(x_m)=1</script>, this lets us compute</p>
<script type="math/tex; mode=display">q_m = q(x_m) = \sum_{i=0}^{d-1} f_i \frac{A'(x_m)}{A'(x_i)}\frac{1}{x_m - x_i}</script>
<p>For all <script type="math/tex">j \not= m</script>, we can compute directly</p>
<script type="math/tex; mode=display">q_j = q(x_j) = \sum_{i=0}^{d-1} f_i \frac{\ell_i(x_j)}{x_j-x_m} = \frac{f_j}{x_j-x_m}</script>
<p>This allows us to efficiently compute <script type="math/tex">q_j</script> in evaluation form for all <script type="math/tex">j</script>, including <script type="math/tex">j=m</script>. This trick is necessary to compute <script type="math/tex">q(X)</script> in evaluation form.</p>
<p>In order to make this efficient, it’s best to precompute the <script type="math/tex">A'(x_i)</script> as computing them takes <script type="math/tex">O(d^2)</script> time, but only needs to be performed once.</p>
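<p>Here is a sketch of the whole trick on a roots-of-unity domain (names illustrative; note it relies on <script type="math/tex">f_m = 0</script>, which holds exactly when <script type="math/tex">X - x_m</script> divides <script type="math/tex">f(X)</script>):</p>

```python
P, d = 257, 8
OMEGA = pow(3, (P - 1) // d, P)                 # primitive d-th root of unity
DOMAIN = [pow(OMEGA, i, P) for i in range(d)]
A_PRIME = [d * pow(OMEGA, -i, P) % P for i in range(d)]  # A'(w^i) = d * w^-i

def divide_by_linear(f, m):
    """Evaluations of q(X) = f(X) / (X - omega^m); requires f[m] == 0."""
    assert f[m] == 0
    q = [0] * d
    for j in range(d):
        if j != m:
            q[j] = f[j] * pow(DOMAIN[j] - DOMAIN[m], P - 2, P) % P
            # q_m = sum_{i != m} f_i * (A'(x_m) / A'(x_i)) / (x_m - x_i)
            q[m] = (q[m] + f[j] * A_PRIME[m] * pow(A_PRIME[j], P - 2, P)
                    * pow(DOMAIN[m] - DOMAIN[j], P - 2, P)) % P
    return q

# Build f = (X - omega^2) * q for a known q(X) = 3X + 5, then recover q.
m = 2
q_evals = [(3 * x + 5) % P for x in DOMAIN]
f_evals = [(x - DOMAIN[m]) * qv % P for x, qv in zip(DOMAIN, q_evals)]
assert divide_by_linear(f_evals, m) == q_evals
```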
<h3 id="special-case-roots-of-unity">Special case roots of unity</h3>
<p>In the case where we are using the roots of unity as our domain, we can use some tricks so that we don’t need the precomputation of <script type="math/tex">A'(x_i)</script>. The key observation is that <script type="math/tex">A(X)</script> can be rewritten in a simpler form:</p>
<script type="math/tex; mode=display">A(X) = \prod_{i = 0} ^ {d-1} (X-\omega^i) = X^d - 1</script>
<p>Because of this the formal derivative becomes much simpler:</p>
<script type="math/tex; mode=display">A'(X) = d X^{d-1} = \sum_{j=0}^{d-1} \prod_{i \not= j}(X-\omega^i)</script>
<p>And we can now easily derive <script type="math/tex">A'(x_i)</script>:</p>
<script type="math/tex; mode=display">A'(\omega^i) = d (\omega^i)^{d-1}= d \omega^{-i}</script>
<h2 id="evaluating-a-polynomial-in-evaluation-form-on-a-point-outside-the-domain">Evaluating a polynomial in evaluation form on a point outside the domain</h2>
<p>Now there is one thing that we can do with a polynomial in coefficient form, that does not appear to be easily feasible in evaluation form: We can evaluate it at any point. Yes, in evaluation form, we do have the values at the <script type="math/tex">x_i</script>, so we can evaluate <script type="math/tex">f(x_i)</script> by just taking an item from the vector; but surely, to evaluate <script type="math/tex">f(X)</script> at a point <script type="math/tex">z</script> <em>outside</em> the domain, we have to first convert to coefficient form?</p>
<p>It turns out, it is not necessary. To the rescue comes the so-called barycentric formula. Here is how to derive it using Lagrange interpolation:</p>
<script type="math/tex; mode=display">f(z) = \sum_{i=0}^{d-1} f_i \ell_i(z) = \sum_{i=0}^{d-1} f_i \frac{1}{A'(x_i)}\frac{A(z)}{z-x_i} = A(z)\sum_{i=0}^{d-1} \frac{f_i}{A'(x_i)} \frac{1}{z-x_i}</script>
<p>The last part can be computed in just <script type="math/tex">O(d)</script> steps (assuming the precomputation of the <script type="math/tex">A'(x_i)</script>), which makes this formula very useful, for example for computing <script type="math/tex">g(t)</script> and <script type="math/tex">h(t)</script> without changing into coefficient form.</p>
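<p>As a sketch of the barycentric formula in action, here is an evaluation over the rationals (using <code>Fraction</code> to avoid rounding). The domain <code>{0, 1, 2, 3}</code> and the example polynomial are invented for illustration:</p>

```python
from fractions import Fraction

xs = [Fraction(i) for i in range(4)]   # evaluation domain x_0, ..., x_{d-1}
f = lambda x: 3 - 2 * x + 5 * x**3     # some polynomial of degree < d
evals = [f(x) for x in xs]             # f in evaluation form

# Precompute A'(x_i) = prod_{j != i} (x_i - x_j); O(d^2), but done only once
a_prime = []
for i, xi in enumerate(xs):
    prod = Fraction(1)
    for j, xj in enumerate(xs):
        if j != i:
            prod *= xi - xj
    a_prime.append(prod)

def barycentric_eval(z):
    # f(z) = A(z) * sum_i f_i / (A'(x_i) (z - x_i)) -- O(d) per evaluation
    a_z = Fraction(1)
    for xi in xs:
        a_z *= z - xi
    return a_z * sum(fi / (ap * (z - xi)) for fi, ap, xi in zip(evals, a_prime, xs))

assert barycentric_eval(Fraction(7)) == f(Fraction(7))  # z = 7 is outside the domain
```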
<p>This formula can be simplified in the case where the domain is the roots of unity:</p>
<script type="math/tex; mode=display">f(z) = \frac{z^d-1}{d}\sum_{i=0}^{d-1} f_i \frac{\omega^i}{z-\omega^i}</script>Multiproof scheme for polynomial commitmentsWhat everyone gets wrong about 51% attacks2021-05-20T22:38:00+00:002021-05-20T22:38:00+00:00https://dankradfeist.de/ethereum/2021/05/20/what-everyone-gets-wrong-about-51percent-attacks<h1 id="what-everyone-gets-wrong-about-51-attacks">What everyone gets wrong about 51% attacks</h1>
<p>Excuse the provocation in the title. Clearly not everyone gets it wrong. But sufficiently many people that I think it’s good to write a blog post about the topic.</p>
<p>There is a myth out there that if you control more than 50% of the hashpower in Bitcoin, Ethereum, or another blockchain, then you can do whatever you want with the network. A similar restatement for Proof of Stake is that if you control more than two thirds of the stake, you can do anything. You can take another person’s coins. You can print new coins. Anything.</p>
<p>This is <strong>not</strong> true. Let’s discuss what a 51% attack can do:</p>
<ul>
<li>They can stop you from using the chain, i.e. block any transaction they don’t like. This is called censorship.</li>
<li>They can revert the chain, i.e. undo a certain number of blocks and change the order of the transactions in them.</li>
</ul>
<p>What they <strong>cannot</strong> do is change the rules of the system. This means for example:</p>
<ul>
<li>They cannot simply print new coins, outside of the provisions of the blockchain system; e.g. Bitcoin currently gives each new block producer 6.25 BTC; they cannot simply turn this into one million BTC</li>
<li>They cannot spend coins from an address for which they don’t have the private key</li>
<li>They cannot make larger blocks than consensus rules allow them to do</li>
</ul>
<p>Now this is not to say that 51% attacks aren’t devastating. They are still very bad attacks. Reordering allows double spending of coins, which is quite a big problem. But there are limits on what they can do.</p>
<p>Now how do most Blockchains, including Bitcoin and Ethereum, ensure this? What happens if a miner mines a block that goes against the rules? Or a majority of the stake signs a block that goes against the rules?</p>
<h2 id="the-blockchain-security-model">The blockchain security model</h2>
<p>Sometimes people claim that the longest chain is the valid Bitcoin or Ethereum chain. This is somewhat incomplete. The proper definition of the current chain head is</p>
<ul>
<li>The <strong>valid</strong> chain with the highest total difficulty.</li>
</ul>
<p>So there are two properties that a client verifies before accepting that a chain should be used to represent the current history:</p>
<ol>
<li>It has to be valid. This means that all state transitions are valid; for example in Bitcoin, that means that all transactions only spend previously unspent transaction outputs, the coinbase only receives the transaction fees and block rewards, etc.</li>
<li>It has to be the chain with the highest difficulty. Colloquially, that’s the longest chain, however not measured in terms of blocks but how much total mining power was spent on this chain.</li>
</ol>
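<p>The two conditions above can be summarized as a one-line head-selection rule. This is a toy sketch; the dictionary representation of a chain is invented for illustration:</p>

```python
# Among all known chains, take the *valid* one with the highest total difficulty.
def choose_head(chains):
    valid_chains = [c for c in chains if c["valid"]]  # step 1: invalid chains are ignored entirely
    return max(valid_chains, key=lambda c: c["total_difficulty"])  # step 2: most accumulated work

chains = [
    {"name": "honest chain",   "valid": True,  "total_difficulty": 900},
    {"name": "attacker chain", "valid": False, "total_difficulty": 1200},  # more work, but invalid
]
assert choose_head(chains)["name"] == "honest chain"
```

Note that an invalid chain never wins, no matter how much mining power was spent on it.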
<p>This may all sound a bit abstract. It is legitimate to ask who verifies the first condition, that all blocks on the chain are valid. Because if it’s just the miners that verify that the chain is valid, then this is a tautology and we haven’t really gained anything.</p>
<p>But blockchains are different. Let’s see why. Start with a normal client/server database architecture:</p>
<p><img src="/assets/database_diagram.png" alt="Database user and server" /></p>
<p>Note that for a typical database, the user trusts the database server. They don’t check that the response is correct; the client makes sure that it is validly formatted according to the protocol, and that’s it. The client, here represented by an empty square, is “dumb”: It can’t verify anything.</p>
<p>A blockchain architecture however, looks like this:</p>
<p><img src="/assets/blockchain_diagram.png" alt="Blockchain architecture" /></p>
<p>So let’s summarise what happens here. There are miners (or stakers) that produce the chain. There is a peer to peer network – its role is to make sure that a valid chain is always available to everyone, even if some of the nodes aren’t honest (you need to be connected to at least one honest and well-connected P2P node, to ensure that you will always be up to date with the valid chain). And there is a client, who sends transactions to the P2P network and receives the latest chain updates (or the full chain, if they are syncing) from other nodes in the network. They are actually part of the network and will also contribute by forwarding blocks and transactions, but that’s not so important here.</p>
<p>The important part is that the user is running a full node, as represented by the cylinder in their client. Whenever the client gets a new block, just like any other node, whether it’s a miner or just a node in the P2P network, they will validate whether that block is a valid state transition.</p>
<p>And if it’s not a valid state transition, the block will just be ignored. That’s why there is very little point in a network for miners to ever try to mine an invalid state transition. Everyone would just ignore it.</p>
<p>Many users run their own node to interact with blockchains like Ethereum or Bitcoin. Many communities have made this part of their culture and place a great emphasis on everyone running their own node, so that they are part of the validation process. Indeed, you could say that it’s really important that the majority of users, especially those with a lot of value at stake, run full nodes; if the majority of users become too lazy, then suddenly miners can be tempted to produce invalid blocks, and this model would not hold anymore.</p>
<h2 id="analogy-separation-of-powers">Analogy: Separation of powers</h2>
<p>You can think of this a bit like the separation of powers in liberal democracies – there are different branches of the government, and just because you have a majority in one of them (say, the legislation) does not mean you can simply do anything you like and ignore all laws. Miners or stakers have the power to order transactions in blockchains; they don’t have the power to simply dictate new rules on the community.</p>
<h2 id="but-do-all-blockchains-work-like-this">But do all blockchains work like this?</h2>
<p>That’s a good question. And what’s important to note is that this only works if a full node is easy to run. As an average user, you will simply not do it if it means having to buy another computer for 5,000$ and needing a permanent 1 GBit/s internet connection. Even if you can get such a connection in some places, having it permanently clogged by your blockchain node is probably not very convenient. In this case, you will probably not run your own node (unless your transactions are exceptionally valuable), which means that you will trust someone else to do it for you.</p>
<p>Imagine a chain that is so expensive to run that only stakers and exchanges will run a full node. You have just changed the trust model, and a majority of stakers and the exchanges could come together and change the rules. There would be no debate with the users about this – users cannot lead a fork if they literally have no control over the chain, at all. They could insist on the old rules, but unless they start running full nodes, they would have no idea if their requests are answered using a chain that satisfies the rules that they want.</p>
<p>That’s why there are always huge debates around increasing the block size of, say, Ethereum or Bitcoin – every time you do this, you increase the burden for people running their own nodes. It’s not much of a problem for miners – the cost of running a node is tiny compared to actual mining operations – so it shifts the balance of power away from users and to the miners (or stakers).</p>
<h2 id="how-about-light-clients">How about light clients?</h2>
<p>All right, but what if you just want to pay for your coffee using cryptocurrencies? Are you going to run a full node on your phone?</p>
<p>Of course, nobody expects that. And users don’t. Here, light clients come into play. Light clients are simpler clients that do not verify the full chain – they only verify the consensus, i.e. the total difficulty or the amount of stake that has voted for it.</p>
<p>In other words, light clients <em>can</em> be tricked into following a chain that contains invalid blocks. There are remedies for this, in the form of data availability checks and fraud proofs. As far as I know, no chain has implemented these at this point, but at least Ethereum will do this in the future.</p>
<p>So using light clients with data availability checks and fraud proofs, we will be able to make the blockchain security model available without requiring all users to run a full node. This is the ultimate goal, that any phone can easily run an Ethereum light client.</p>
<h2 id="and-what-about-sidechains">And what about sidechains?</h2>
<p>Sidechains are a hot topic right now. It would seem that they are an easy way to provide scaling, without the complexity of rollups. Simply speaking:</p>
<ul>
<li>Create a new Proof of Stake chain</li>
<li>Create a two-way bridge with Ethereum</li>
<li>…</li>
<li>Profit!</li>
</ul>
<p>Note that the security of the sidechain relies pretty much entirely on the bridge – that is the construction that allows one chain to understand another chain’s state. After all, if you can trick the bridge on the main chain into believing that all the assets on the bridged chain now belong to Mr. Evil, then it doesn’t matter if full nodes on the Proof of Stake chain think differently. So it’s all in the bridge.</p>
<p>Unfortunately, the state of bridges is the same as with light clients. They don’t verify correctness, only the consensus condition (the majority vote). However, there are two ways in which the situation is worse than for light clients:</p>
<ol>
<li>Bridges are used for very high value transactions, where most users would choose a full node if they could</li>
<li>Unfortunately, there is no way to fortify bridges as we can do for light clients – the reason is that they cannot perform data availability checks</li>
</ol>
<p>The second point is quite subtle and could easily fill another blog post or two. But in short, bridges cannot do data availability checks, and without these, fraud proofs are also mostly useless. Using zero knowledge proofs, you can get an improvement by requiring bridges to include proofs of all blocks being correct – unfortunately, this still suffers from some data availability attacks, but it is an improvement.</p>
<p>In summary, sidechains have a different, much weaker security model than a blockchain like Ethereum and Bitcoin. They cannot protect against invalid state transitions.</p>
<h2 id="does-this-all-have-to-do-something-with-sharding">Does this all have to do something with sharding?</h2>
<p>In fact, all of this has a lot to do with sharding. The reason why we need sharding to scale is because it is the only way to scale without raising the bar for running a full node, while maintaining the full security guarantees of blockchains as closely as possible.</p>
<h2 id="but-what-if-you-just-undo-all-of-history-then-you-can-still-just-steal-all-the-bitcoinetheretc">But what if you just undo all of history? Then you can still just steal all the Bitcoin/Ether/etc.</h2>
<p>From a theoretical point of view, on a non-checkpointed Proof of Work chain, it is true that by reverting not just some transactions, but all transactions ever, you could still get all the Bitcoins. OK, so you cannot print a trillion Bitcoin, but you can still get all the Bitcoins in existence, so that’s pretty good, right?</p>
<p>I think this point is very theoretical. The probability that either of these communities would accept a fork that revises years (or even just hours) of its history is precisely zero. There would be massive scrambling together on all possible channels, with the pretty quick conclusion that people should reject this and just agree that the valid chain should be the one that is already in existence.</p>
<p>With Proof of Stake and finalization, this mechanism will become formalized – clients simply never revert finalized blocks, ever.</p>What everyone gets wrong about 51% attacksWhy it’s so important to go stateless2021-02-14T21:20:00+00:002021-02-14T21:20:00+00:00https://dankradfeist.de/ethereum/2021/02/14/why-stateless<p>One of Eth1’s biggest problems is the current state size. Estimated at around 10-100GB (depending on how exactly it is stored), it is impractical for many nodes to keep in working memory, and is thus moved to slow permanent storage. However, hard disks are way too slow to keep up with Ethereum blocks (or, god forbid, sync a chain from genesis), and so much more expensive SSDs have to be used. Arguably, the current state size isn’t even the biggest problem. The biggest problem is that it is relatively cheap to grow this state, and state growth is permanent, so even if we can raise the cost for growing state, there is no way to make someone pay for the actual impact on the network, which is eternal.</p>
<p>A solution space, largely crystallizing around two ideas, has emerged:</p>
<ul>
<li>State rent – the idea that in order to keep a state element in active memory, a continuous payment is required, and</li>
<li>Statelessness – blocks come with full witnesses (e.g. Merkle proofs) and thus no state is required to validate whether a block is valid</li>
</ul>
<p>On the spectrum to statelessness, there are further ideas worth exploring:</p>
<ul>
<li>partial statelessness – reducing the amount of state required to validate blocks, by requiring witnesses only for some (old) state</li>
<li>weak statelessness – validating blocks requires no state, but proposing blocks requires the full state</li>
</ul>
<p>Vitalik has written up some ideas on how to put these into a common framework <a href="https://hackmd.io/@HWeNw8hNRimMm2m2GH56Cw/state_size_management">here</a>, showing that partial statelessness and state rent are very similar in that both require some form of payment for introducing something into active state, and a witness to reactivate state that has become inactive.</p>
<p>If you come from the Eth1 world, then you may think that partial statelessness with a remaining active state of 1 GB or even 100 MB is a great achievement, so why work so much harder to go for full statelessness? I argue that full (weak) statelessness unlocks a huge potential that any amount of partial statelessness cannot, and thus that we should work very hard to enable full statelessness.</p>
<h2 id="understanding-eth2-validators">Understanding Eth2 validators</h2>
<p>Eth1 has been criticised in the past for having very high hardware requirements, and though not all of these criticisms are fair (it is still very possible to run an Eth1 node on moderate but well chosen consumer hardware), they are to be taken seriously, especially since we want to scale Ethereum without compromising decentralization. For Eth2, we have thus set ourselves a very ambitious goal – to be able to run an Eth2 node and validator on very low-cost hardware, even a Raspberry Pi or a smartphone.</p>
<p>This is not the easy route, but the hard route to scaling. Other projects, like EOS and Solana, instead require much more performant hardware and internet connections. But I think for decentralization it is essential to keep the requirements on consensus nodes, as well as P2P nodes, very low.</p>
<p>In Eth2, the consensus node is the validator. There is an important difference with the consensus nodes in Eth1 and Eth2:</p>
<ul>
<li>In Eth1, the consensus nodes are miners. To “vote” for a chain, you have to produce a block on it. In other words, the consensus nodes and block producers are inseparable.</li>
<li>In Eth2, or rather its current first phase, the beacon chain, proposing blocks and forming consensus are two different functions: Blocks are proposed every 12 seconds by a randomly selected validator, but consensus is formed via attestations, with every validator voting for a chain <em>every epoch, that is, every 6.4 minutes</em>. Yes, at the moment, that is already almost 100,000 validators casting one vote every few minutes. Block producers have (almost <sup id="fnref:3"><a href="#fn:3" class="footnote">1</a></sup>) no influence on consensus, they only get to select what is included in a block<sup id="fnref:1"><a href="#fn:1" class="footnote">2</a></sup></li>
</ul>
<p>The property that block proposers are irrelevant for consensus opens up a significant design space. While for the beacon chain, block proposers are simply selected at random from the full validator set, for the shard chains, this doesn’t have to be true:</p>
<ul>
<li>One interesting possibility would be that for a shard, especially an Eth1 execution shard, there is a way for a validator to enter a list that they are capable of producing blocks. These validators may require better hardware and may need to have “full” state</li>
<li>Another possibility, which we are currently implementing for the data shards, is that anyone can be selected for proposing blocks, but the actual content of the block isn’t produced by the proposer; instead, different entities can bid on getting their pre-packaged blocks proposed.</li>
</ul>
<p>In both cases, weakly stateless validation means that all the other validators, who are not proposing blocks or preparing block content, do not need the state. That is a huge difference to Eth1: In Eth1, the consensus forming nodes (the miners) have high requirements anyway, so requiring them to keep full state seems fine. But with Eth2, we have the possibility of significantly lowering this requirement, and we should make use of it, to benefit decentralization and security.</p>
<h2 id="so-why-is-it-ok-to-have-expensive-proposers">So why is it ok to have expensive proposers?</h2>
<p>An important objection may be that it defeats decentralization if block proposing becomes expensive, even if we get cheap validators and P2P nodes. This is not the case. There is an important difference between “proposers” and “validators”:</p>
<ul>
<li>For validators, we need an honest supermajority, i.e. more than 2/3 of the total staked ETH must be honest. A similar thing can be said about P2P nodes – while there isn’t (as far as I know) a definite fraction of P2P nodes that must be honest, there is the requirement that everyone is connected to at least one honest P2P node in order to be able to be sure to always receive the valid chain; this could be 5% but in practice it is probably higher.</li>
<li>For proposers, we actually get away with much lower honesty requirements; note that unlike in Eth1, in Eth2 proposers do not get to censor past blocks (because they do not vote), but only get to decide about the content of their own block. Assuming that your transaction is not highly time critical, if 95% of proposers try to censor it, then the 20th proposer would still be able to get it safely included. (Low-latency censorship resistance is a different matter however, and in practice more difficult to achieve)</li>
</ul>
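<p>The censorship argument above can be made concrete with back-of-the-envelope arithmetic. This sketch uses the 95% figure from the text and the simplifying assumption that proposers are selected independently at random:</p>

```python
# If a fraction p of proposers censor a transaction, the chance it is still
# unincluded after k independently selected proposer slots is p ** k.
p = 0.95  # 95% of proposers censoring, as in the example above

still_censored_after_20 = p ** 20
assert 0.35 < still_censored_after_20 < 0.37  # ~36%: an honest proposer has likely appeared
assert p ** 100 < 0.01                        # after 100 slots, censorship has almost surely failed
```

So even a very large dishonest majority of proposers only delays inclusion; it cannot prevent it, as long as some honest proposers remain.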
<p>This is why I am much less worried about increasing hardware requirements for proposers than for validators. I think it would be fine if we need proposers to run a PC with 128GB RAM that is fully capable of storing even a huge state, if we can keep normal validator requirements low. I would be worried if a PC that can handle these requirements costs 100,000$, but if we can keep it to under 5,000$, it seems inconceivable that the community would not react very quickly by introducing more proposers if censorship were detected.</p>
<p>Finally, let’s not neglect that there are <a href="https://ethresear.ch/t/flashbots-frontrunning-the-mev-crisis/8251">other reasons</a> why block proposing will likely be done by those with significant hardware investments anyway, as they are better at exploiting MEV.</p>
<p>Note that I am using the word “proposer” here for the entities that package blocks, which is not necessarily the same as the one who formally signs it and introduces them; they could be “sequencers” (for rollups) etc. For simplicity I call them proposers here, because I do not think any of the system would fundamentally break if we simply introduced a completely new role into the system that only proposes blocks and nothing else.</p>
<h2 id="the-benefits-of-going-stateless">The benefits of going stateless</h2>
<p>So far I haven’t argued why (at least weak, but not partial) statelessness is such a powerful paradigm; in the <a href="https://ethresear.ch/t/executable-beacon-chain/8271">executable beacon chain</a> proposal, reducing state from 10 GB to 1 GB or 100MB seems to unlock a lot of savings for validators, so why do we have to go all the way?</p>
<p>Because if we go all the way, the executable Eth1 blocks can become a shard. Note that in the executable beacon chain proposal, all validators have to run the full Eth1 execution all the time (or they risk signing invalid blocks). A shard should not have this property; the point of a shard is that only a committee needs to sign a block (so only 1/1024 of all validators); and the others don’t have to trust that the majority of this committee is honest <sup id="fnref:2"><a href="#fn:2" class="footnote">3</a></sup>, but only that it has at least one honest member, who would blow the whistle when it tries to do something bad. This is only possible if Eth1 becomes stateless:</p>
<ul>
<li>We want the load on all validators to be roughly equal, and free of extreme peaks. Thus, sending a validator to become an Eth1 committee member for a long time, like an hour or a day, is actually terrible: It means the validator still has to be dimensioned to be able to keep up with the full Eth1 chain in terms of bandwidth requirements. This is in addition to committees becoming much more attackable if they are chosen for a long time (for example through bribing attacks)</li>
<li>We want to be able to have easy fraud proofs for Eth1 blocks, because otherwise the other validators can’t be sure that the committee has done its work correctly. The easiest way to get fraud proofs is if a block can be its own fraud proof: If a block is invalid, you simply have to broadcast the block itself to show that fraud has happened.</li>
</ul>
<p>So Eth1 can become a shard (that requires much less resources, like 1/100, to maintain) only if it becomes fully stateless. And at the same time, only then can we introduce more execution shards, in addition to the data shards.</p>
<h2 id="arent-caches-always-good">Aren’t caches always good?</h2>
<p>So what if we go to full statelessness but introduce a 10 MB cache? Or 1 MB? That can easily be downloaded even if you only want to check one block, because you are assigned to a committee or you received it as a fraud proof?</p>
<p>You can do this, but there is a simple way to see that this is very unlikely to be optimal, if the majority of validators only validate single blocks. Let’s say we target 1 MB blocks and in addition, we have a 1 MB cache. That means, every time a validator wants to validate a block, they have to download 2 MB – both the block and the cache. They have to download the cache every time, except if they download <em>all</em> blocks to also keep the cache up to date, which is exactly what we want to avoid.</p>
<p>This means, at the same cost of having blocks of size 1 MB with a cache of 1 MB, we could set the cache to 0 and allow blocks of 2 MB.</p>
<p>Now, it’s clear that a block of 2 MB is at least as powerful as having 1 MB blocks with 1 MB cache. The reason is that the 2 MB block could simply include a 1 MB cache if that’s what we thought was optimal – you simply commit to the cache at every block, and reintroduce the full cache in the next block. This is unlikely to be the best use of that 1 MB block space, but you could, so it’s clear that a 2 MB block is at least as powerful as a 1 MB block with a 1 MB cache. It’s much more likely that the extra 1 MB would be of better use to allow more witnesses to be introduced.</p>
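<p>The bandwidth comparison above reduces to simple arithmetic. A validator that checks only a single block cannot maintain the cache incrementally (that would require following every block), so it must download the cache alongside the block every time:</p>

```python
# Per-block download cost, using the 1 MB figures from the text.
block_mb, cache_mb = 1, 1
per_block_cost_with_cache = block_mb + cache_mb  # 1 MB block + 1 MB cache, fetched each time
per_block_cost_big_block = 2                     # a 2 MB block with no cache at all

# Same cost -- but the 2 MB block is at least as expressive, since it could
# simply carry the 1 MB cache itself if that were the best use of the space.
assert per_block_cost_with_cache == per_block_cost_big_block
```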
<h2 id="binary-tries-or-verkle-tries">Binary tries or verkle tries?</h2>
<p>I think overall, the arguments for shooting for full weak statelessness, and not partial statelessness or state rent, are overwhelming. It will impact users much less: They simply don’t have to think about it. The only thing they have to do, when constructing transactions, is to add witnesses (so that the P2P network is able to validate it’s a valid transaction). Creating these witnesses is so cheap that it’s unimaginable that there won’t be a plethora of services offering it. Most wallets, in practice, already rely on external services and don’t require users to run their own nodes. Getting the witnesses is a trivial thing to add<sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup>.</p>
<p>Partial statelessness, or state rent, adds a major UX hurdle on the way to full weak statelessness, where it would disappear again. It has some merit when you consider how difficult statelessness is to achieve using just binary Merkle tries, and that the gas changes required to allow Merkle trie witnesses will themselves be detrimental to UX.</p>
<p>So in my opinion, we should go all the way to <a href="https://notes.ethereum.org/_N1mutVERDKtqGIEYc-Flw">verkle tries</a> now. They allow us to have manageable witnesses of less than 1 MB, with only moderate gas repricings as proposed by <a href="https://eips.ethereum.org/EIPS/eip-2929">EIP-2929</a> and charging for code chunks. Their downsides are well contained and of little practical consequence for users:</p>
<ul>
<li>A new cryptographic primitive to learn for developers</li>
<li>Adding more cryptography that is not post-quantum secure</li>
</ul>
<p>The second sounds scary, but we will already introduce KZG commitments in Eth2 for data availability sampling, and we are using elliptic-curve-based signatures anyway. Several post-quantum upgrades of the combined Eth1 and Eth2 chain are required, because there simply aren’t practical enough post-quantum alternatives around now. We can’t stop progress because of this. The next 5 years are extremely important in terms of adoption. The way forward is to implement the best we can now, and in 5-10 years, when STARKs are powerful enough, we will introduce a full post-quantum upgrade of all primitives.</p>
<p>In summary, verkle tries will solve our state problems for the next 5 years to come. We will be able to implement full (weak) statelessness now, with almost no impact on users and smart contract developers; we will be able to implement gas limit increases (because validation becomes faster) and more execution shards – and all this comes with little downside in terms of security and decentralization.</p>
<p>The big bullet to bite is for everyone to learn to understand how KZG commitments and verkle tries work. Since Eth2 will use KZG commitments for data availability, most of this work will soon be required of most Ethereum developers anyway.</p>
<div class="footnotes">
<ol>
<li id="fn:3">
<p>Almost no influence, because there is now a small modification to improve resilience against certain balancing attacks, that does give block proposers a small amount of short term influence on the fork choice <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:1">
<p>To be precise, they can have an influence, if they start colluding and censoring large numbers of attestations, but single block producers have a completely negligible effect on how consensus is formed <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>A dishonest committee can do some annoying things, that could impact the network and introduce major latency, but it cannot introduce invalid/unavailable blocks <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>Users who do want to run their own node can still use an external service to get witnesses. Doing so is trustless, since the witnesses are their own proof if you know what the latest state root is <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Running a validator client on a Raspberry Pi2020-11-20T16:16:00+00:002020-11-20T16:16:00+00:00https://dankradfeist.de/ethereum/2020/11/20/staking-on-raspi<h1 id="running-a-validator-client-on-a-raspberry-pi">Running a Validator Client on a Raspberry Pi</h1>
<p>I assume familiarity with staking on Eth2.0 in general.</p>
<p>This is a quick manual on how to run the validator client, and only the validator client, on a Raspberry Pi. I have decided to run my beacon node on a separate machine with more resources. It is probably possible (perhaps after more optimizations) to run your whole staking operation on a Raspberry Pi, which would be a good idea for cost/energy efficiency – but at this point, I am much more concerned about security. The idea here is to effectively use the Raspberry Pi as a simple “Hardware Security Module” or HSM: It acts to protect the keys and contains the validator slashing protection.</p>
<p><strong>Advantages of this setup</strong></p>
<ul>
<li>Better protection of validator keys than at-home staking with single node – the machine containing the staking keys does not have a direct internet connection</li>
<li>Beacon node is not running on resource-constrained Raspberry Pi, so should run safely even under non-optimal conditions where the Raspberry Pi might struggle, e.g. very high number of validators or long periods of non-finality</li>
</ul>
<p><strong>What this setup does not cover</strong></p>
<ul>
<li>Optimized for security, not cost (double hardware, higher electricity consumption)</li>
<li>Not optimized for liveness. An additional point of failure by relying on two machines for staking</li>
</ul>
<p>The Raspberry Pi in this configuration can be seen as a kind of Hardware Security Module or HSM. Until dedicated HSMs for staking become available, this is my suggestion on how to reach a nearly equivalent level of security.</p>
<p>As mainnet has not launched yet, I will describe how to run on the Medalla testnet. Once I update my setup for the mainnet launch, I will update this guide.</p>
<h2 id="diagram">Diagram</h2>
<p>As an illustration, here is a diagram of the node configuration</p>
<p><img src="/assets/nodediagram.png" alt="Diagram" /></p>
<h2 id="hardware">Hardware</h2>
<p>To run the Validator Client, I use a Raspberry Pi 4 4GB. If you want to compile on your Raspberry Pi, I recommend 8GB (at the time of writing, the lighthouse build did not complete on 4GB, but I think it would on 8GB). What you need in addition:</p>
<ul>
<li>A USB-C charger</li>
<li>Micro-SD card – I recommend not skimping on this one as cheap cards may be unreliable, especially after many write operations. I am using a Samsung EVO 32 GB</li>
<li>Raspberry Pi case – since I don’t like to have a fan, I got the “GeekPi Argon NEO Aluminum Case” which is fanless</li>
<li>Another machine to run the beacon node. Mine has 2 Ethernet ports, of which I use 1 to connect to the router and one for the Raspberry Pi, so the Raspberry Pi is not visible on the local network at all</li>
</ul>
<h2 id="installation">Installation</h2>
<p>First, we need an operating system on the Raspberry Pi. I use Ubuntu 20.04 LTS Server, as I’m most familiar with Ubuntu. I will give a short summary of the installation which should be enough for technical users, otherwise you can find a manual on how to install it <a href="https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi">here</a>. The installation is quite easy even without a display attached:</p>
<ul>
<li>Use <a href="https://www.raspberrypi.org/software/">rpi-imager</a> to install an image of Ubuntu 20.04 LTS Server (64 bit) on your MicroSD card</li>
<li>Use the “system-boot” partition to configure your Pi for the first launch. In order to run Headless, this is essential: You want to be able to SSH into the Pi. In my case, I first configured it to my home network because I want to be able to download all updates before I cut it off the internet. In order to connect to your wifi, edit the <code class="highlighter-rouge">network-config</code> file to add your SSID and password:
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wifis:
  wlan0:
    dhcp4: true
    optional: true
    access-points:
      <wifi network name>:
        password: "<wifi password>"
</code></pre></div> </div>
</li>
<li>Now you can insert the MicroSD card into the Raspberry Pi and boot the Pi by attaching USB C power</li>
<li>Find out the IP address (Your router interface might have a list of connected devices. Otherwise a quick <code class="highlighter-rouge">nmap 192.168.0.0/24</code> will also do the trick)</li>
<li>Log in via SSH using the credentials <code class="highlighter-rouge">ubuntu/ubuntu</code>. You will be asked to change password</li>
<li>After this, I decided to create a user in my own name using <code class="highlighter-rouge">adduser [username]</code>. Make sure to add your user to the sudo group using <code class="highlighter-rouge">addgroup [username] sudo</code>. This step can be skipped if you just want to use the <code class="highlighter-rouge">ubuntu</code> user</li>
<li>Bring your system up to date using <code class="highlighter-rouge">sudo apt update && sudo apt upgrade</code></li>
</ul>
<p>Now it’s time to configure the Pi for a static network connection. Edit <code class="highlighter-rouge">/etc/netplan/50-cloud-init.yaml</code> to remove the wifi (we don’t want the Pi to be in the local network on production) and add a static Ethernet address:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      optional: true
      addresses:
        - 192.168.1.2/24
</code></pre></div></div>
<p>Disable the automatic cloud configuration by creating the file <code class="highlighter-rouge">/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg</code> and adding the line</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> network: {config: disabled}
</code></pre></div></div>
<p>Configure the second Ethernet port of the main (beacon node) machine to a static <code class="highlighter-rouge">192.168.1.1/24</code> address. You can now connect the Pi and the main machine, and ssh from the main machine to the Pi at <code class="highlighter-rouge">192.168.1.2</code>.</p>
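<p>For reference, here is a sketch of the corresponding netplan configuration on the main machine. This assumes the main machine also uses netplan; the interface name <code class="highlighter-rouge">enp3s0</code> is just an example – check the name of your second Ethernet port with <code class="highlighter-rouge">ip link</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>network:
  version: 2
  ethernets:
    enp3s0:
      dhcp4: false
      addresses:
        - 192.168.1.1/24
</code></pre></div></div>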
<h2 id="cross-compile-the-lighthouse-client">Cross compile the lighthouse client</h2>
<p>I want to run the lighthouse client because it has performed well in testnets so far and seems to have suffered few critical bugs compared to other clients. Unfortunately, lighthouse does not come with a compile target just for the Validator Client, which is the part that I want to run on my Pi – you need to build everything. I couldn’t get that to complete on my 4GB Pi – it may be possible (but likely still slow) on an 8GB Pi. The easier way is to cross compile. Here is how to do this on Ubuntu:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir ~/ethereum
cd ~/ethereum
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/sigp/lighthouse
cd lighthouse
cargo install cross
make build-aarch64-portable
</code></pre></div></div>
<p>Via scp you can copy the resulting binaries to the Raspberry Pi:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>scp -r target/aarch64-unknown-linux-gnu/release 192.168.1.2:~/lighthouse
</code></pre></div></div>
<p>Install your validator keys to <code class="highlighter-rouge">~/.lighthouse/medalla/validators</code>. Optionally you can add the passwords to the validator keystores to <code class="highlighter-rouge">~/.lighthouse/medalla/secrets</code>. I prefer not to do this, so that if someone happened to take the Pi from my house without maintaining continuous power supply, they would not have access to the validator keys. However, this means whenever the Pi reboots, I have to log in to enter the password; it’s a tradeoff depending on how easily you can do this (probably not ideal if you are planning long trips without internet connection).</p>
<p>If you are planning to store the keystore passwords, you can create a systemd service to launch the VC automatically on boot. Create <code class="highlighter-rouge">/etc/systemd/system/lighthousevc.service</code>:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
After=network.service
[Service]
ExecStart=/home/[username]/ethereum/lighthouse/target/release/lighthouse vc --beacon-node http://192.168.1.1:5052
User=[username]
[Install]
WantedBy=default.target
</code></pre></div></div>
<p>Otherwise, I recommend running it inside a screen session to be able to keep it running when you close the ssh session:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>screen /home/[username]/ethereum/lighthouse/target/release/lighthouse vc --beacon-node http://192.168.1.1:5052
</code></pre></div></div>
<h2 id="compiling-lighthouse-and-openethereum-for-the-beacon-node">Compiling Lighthouse and OpenEthereum for the Beacon node</h2>
<p>These instructions are for the case that you are also using Ubuntu 20.04 on your main (beacon chain) machine. It is possible to run this setup with a different operating system.</p>
<p>You need to install openethereum (or another Mainnet ethereum client, such as Geth). Download and install it into <code class="highlighter-rouge">~/ethereum/openethereum</code>.</p>
<p>Build the lighthouse client for the host system in order to be able to run the beacon node:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd ~/ethereum/lighthouse/
make
</code></pre></div></div>
<h2 id="first-launch">First launch</h2>
<p>We are now ready to run the Eth1 and the Beacon node:
To start the Eth1 node, run</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/ethereum/openethereum/openethereum --chain goerli --jsonrpc-interface=all
</code></pre></div></div>
<p>(This is for the Goerli testnet, required for running on Medalla – change this for mainnet)</p>
<p>To start the Beacon node, in another terminal, run</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>~/ethereum/lighthouse/target/release/lighthouse bn --testnet medalla --http-port 5052 --eth1-endpoint http://localhost:8545 --http-address 192.168.1.1 --http
</code></pre></div></div>
<p>Both the Eth1 and Beacon node should start syncing. Note that this can take a long time – many hours on testnets and several days for an Eth1 mainnet node.</p>
<h2 id="installing-the-beacon-node-as-daemons">Installing the beacon node as daemons</h2>
<p>To launch the Eth1 and beacon nodes automatically as daemons, we can create systemd files:</p>
<p>For Openethereum: <code class="highlighter-rouge">/etc/systemd/system/openethereum.service</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
After=network.service
[Service]
ExecStart=/home/[username]/ethereum/openethereum/openethereum --chain goerli --jsonrpc-interface=all
User=[username]
[Install]
WantedBy=default.target
</code></pre></div></div>
<p>For the Beacon node service: <code class="highlighter-rouge">/etc/systemd/system/lighthouse.service</code></p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Unit]
After=network.service
[Service]
ExecStart=/home/[username]/ethereum/lighthouse/target/release/lighthouse bn --testnet medalla --http-port 5052 --eth1-endpoint http://localhost:8545 --http-address 192.168.1.1 --http
User=[username]
[Install]
WantedBy=default.target
</code></pre></div></div>
<p>These services can now be started and stopped using the normal systemd interface, i.e. <code class="highlighter-rouge">sudo service openethereum [start/stop]</code> and <code class="highlighter-rouge">sudo service lighthouse [start/stop]</code>.</p>
<h2 id="synchronize-the-raspberry-pis-clock">Synchronize the Raspberry Pi’s clock</h2>
<p>Note that the Raspberry Pi has no battery to keep time while it is shut down, and it also isn’t connected to the Internet, so Ubuntu’s default mechanism for synchronizing the time will not work. We will therefore run an NTP server on the beacon node machine to keep the Pi’s clock synced. On the beacon chain machine, install the <code class="highlighter-rouge">ntp</code> server:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt install ntp
sudo service ntp start
</code></pre></div></div>
<p>This is also a good time to adjust your NTP settings. Keeping your time well in sync is essential as a staker. There are several attacks via time services, and unfortunately rogue NTP servers cannot be ruled out. Our best defence against these attacks is avoiding all large time adjustments. NTP has a nice parameter for this: add the following line to your <code class="highlighter-rouge">/etc/ntp.conf</code> file to stop all adjustments of more than 5 seconds (this means that if your clock drifts by more than 5s, you have to set it manually – this should never really happen unless you have a long power outage).</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>tinker panic 5
</code></pre></div></div>
<p>Now on the Pi, edit <code class="highlighter-rouge">/etc/systemd/timesyncd.conf</code> to connect to the other machine’s NTP server:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[Time]
NTP=192.168.1.1
</code></pre></div></div>
<h2 id="optional-harden-your-pi-using-ufw">Optional: Harden your Pi using ufw</h2>
<p>You can use a firewall to only allow the ssh port on the Pi. Note that the configuration given above already achieves this; however, if you are planning to use your Pi for anything else, this is an extra measure to prevent accidentally opening additional ports.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw enable
</code></pre></div></div>
<h2 id="future-work">Future work</h2>
<ul>
<li>Adapt this guide for mainnet where necessary</li>
<li>Simplify updating ubuntu and lighthouse node on the Pi</li>
</ul>Running a Validator Client on a Raspberry PiKZG polynomial commitments2020-06-16T16:16:00+00:002020-06-16T16:16:00+00:00https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments<h1 id="kzg-polynomial-commitments">KZG polynomial commitments</h1>
<h2 id="introduction">Introduction</h2>
<p>I want to try and give an introduction to the commitment scheme introduced by Kate, Zaverucha and Goldberg <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>. This post does not aim to be mathematically or cryptographically rigorous or complete – it is meant to be an introduction.</p>
<p>This scheme is often also called Kate polynomial commitment scheme (pronounced <a href="https://www.cs.purdue.edu/homes/akate/howtopronounce.html">kah-tay</a>). As a polynomial commitment scheme, it allows a <em>prover</em> to compute a <em>commitment</em> to a polynomial, with the properties that this commitment can later be opened at any position: The <em>prover</em> shows that the value of the polynomial at a certain position is equal to a claimed value.</p>
<p>It is called a <em>commitment</em>, because having sent the commitment value (an elliptic curve point) to someone (the <em>verifier</em>), the prover cannot change the polynomial they are working with. They will only be able to provide valid proofs for one polynomial, and if they are trying to cheat, they will either fail to produce a proof or the proof will be rejected by the verifier.</p>
<h3 id="prerequisites">Prerequisites</h3>
<p>I highly recommend reading <a href="https://vitalik.ca/general/2017/01/14/exploring_ecp.html">Vitalik Buterin’s post on elliptic curve pairings</a>, if you aren’t familiar with finite fields, elliptic curves, and pairings.</p>
<h3 id="comparison-to-merkle-trees">Comparison to Merkle trees</h3>
<p>If you’re familiar with Merkle trees, I want to try to give a bit more of an intuition on the difference between those and Kate commitments. A Merkle tree is what cryptographers call a <em>vector commitment</em>: Using a Merkle tree of depth <script type="math/tex">d</script>, you can compute a commitment to a vector (that is, a list of elements of fixed length) <script type="math/tex">a_0, \ldots, a_{2^d-1}</script>. Using the familiar <em>Merkle proofs</em>, you can provide a proof that an element <script type="math/tex">a_i</script> is a member of this vector at position <script type="math/tex">i</script> using <script type="math/tex">d</script> hashes.</p>
<p>We can actually make a polynomial commitment out of Merkle trees: Recall that a polynomial <script type="math/tex">p(X)</script> of degree <script type="math/tex">n</script> is nothing other than a function
<script type="math/tex">p(X) = \sum_{i=0}^{n} p_i X^i</script>
where the <script type="math/tex">p_i</script> are the coefficients of the polynomial.</p>
<p>We can easily commit to a polynomial of degree <script type="math/tex">n=2^{d}-1</script> by setting <script type="math/tex">a_i=p_i</script> and computing the Merkle root of its coefficients. Proving an evaluation means that the prover wants to show to the verifier that <script type="math/tex">p(z) = y</script> for some <script type="math/tex">z</script>. The prover can do this by sending the verifier all the <script type="math/tex">p_i</script> and the verifier computing that <script type="math/tex">p(z)</script> is indeed <script type="math/tex">y</script>.</p>
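<p>To make this concrete, here is a small Python sketch of this naive scheme over a toy prime field – with a plain hash of the coefficient list standing in for the Merkle root (the modulus and all names are purely illustrative, not part of any real protocol):</p>

```python
import hashlib

MODULUS = 2**31 - 1  # a small prime field, purely for illustration

def commit(coeffs):
    # Stand-in for the Merkle root: hash all coefficients together
    h = hashlib.sha256()
    for c in coeffs:
        h.update((c % MODULUS).to_bytes(8, "little"))
    return h.digest()

def prove(coeffs):
    # The "proof" is simply all the coefficients -- linear in the degree
    return list(coeffs)

def verify(commitment, proof, z, y):
    # Recompute the commitment, then evaluate p(z) via Horner's rule
    if commit(proof) != commitment:
        return False
    acc = 0
    for c in reversed(proof):
        acc = (acc * z + c) % MODULUS
    return acc == y % MODULUS

coeffs = [3, 0, 2, 5]  # p(X) = 3 + 2X^2 + 5X^3
C = commit(coeffs)
assert verify(C, prove(coeffs), 7, 1816)       # p(7) = 3 + 2*49 + 5*343 = 1816
assert not verify(C, prove(coeffs), 7, 1817)   # wrong claimed value is rejected
```

<p>The commitment <code class="highlighter-rouge">C</code> is 32 bytes, but the proof is the entire coefficient list – exactly the linear cost described above.</p>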
<p>This is of course an extremely stupid polynomial commitment, but it will help us understand what the advantages of real polynomial commitments are. Let’s hase a look at the properties:</p>
<ol>
<li>The commitment size is a single hash (the Merkle root). A cryptographic hash of sufficient security typically needs 256 bits, i.e. 32 bytes.</li>
<li>To prove an evaluation, the prover needs to send all the <script type="math/tex">p_i</script>, so the proof size is linear in the degree of the polynomial, and the verifier needs to do linear work (they need to evaluate the polynomial at the place <script type="math/tex">z</script> by computing <script type="math/tex">p(z)=\sum_{i=0}^{n} p_i z^i</script>).</li>
<li>The scheme does not hide anything about the polynomial – the prover sends the whole polynomial in the clear, coefficient by coefficient.</li>
</ol>
<p>Now let’s look at what the Kate scheme achieves on these metrics:</p>
<ol>
<li>The commitment size is one group element of an elliptic group that admits pairings. For example, with BLS12_381, that would be 48 bytes.</li>
<li>The proof size, <em>independent</em> from the size of the polynomial, is also always only one group element. Verification, also independent from the size of the polynomial, requires two group multiplications and two pairings, no matter what the degree of the polynomial is.</li>
<li>The scheme <em>mostly</em> hides the polynomial – indeed, an infinite number of polynomials will have exactly the same Kate commitment. However, it is not perfectly hiding: If you can guess the polynomial (for example because it is very simple or in a small set of possible polynomials) you can find out which polynomial was committed to.</li>
</ol>
<p>Additionally, it is actually possible to combine the proofs for any number of evaluations into one group element. These properties make the Kate scheme very attractive for zero knowledge proof systems, such as PLONK and SONIC. But they also make it very interesting for a more mundane purpose: using it as a vector commitment, which we will come to below.</p>
<h2 id="on-elliptic-curves-and-pairings">On Elliptic curves and pairings</h2>
<p>As mentioned in the prerequisites, I strongly recommend <a href="https://vitalik.ca/general/2017/01/14/exploring_ecp.html">Vitalik Buterin’s post on elliptic curve pairings</a>, it includes all the basics needed to understand this post – in particular finite fields, elliptic curves, and pairings.</p>
<p>Let <script type="math/tex">\mathbb G_1</script> and <script type="math/tex">\mathbb G_2</script> be two elliptic curves with a pairing <script type="math/tex">e: \mathbb G_1 \times \mathbb G_2 \rightarrow \mathbb G_T</script>. Let <script type="math/tex">p</script> be the order of <script type="math/tex">\mathbb G_1</script> and <script type="math/tex">\mathbb G_2</script>, and <script type="math/tex">G</script> and <script type="math/tex">H</script> be generators of <script type="math/tex">\mathbb G_1</script> and <script type="math/tex">\mathbb G_2</script>. We will use a very useful shorthand notation</p>
<script type="math/tex; mode=display">\displaystyle
[x]_1 = x G \in \mathbb G_1 \text{ and } [x]_2 = x H \in \mathbb G_2</script>
<p>for any <script type="math/tex">x \in \mathbb F_p</script>.</p>
<h3 id="trusted-setup">Trusted setup</h3>
<p>Let’s assume we have a trusted setup, so that for some secret <script type="math/tex">s</script>, the elements <script type="math/tex">[s^i]_1</script> and <script type="math/tex">[s^i]_2</script> are available to both prover and verifier for <script type="math/tex">i=0, \ldots, n-1</script>.</p>
<p>One way to get this secret setup is to have an airgapped computer compute a random number <script type="math/tex">s</script>, compute all the group elements <script type="math/tex">[s^i]_x</script>, and only send those elements (and not <script type="math/tex">s</script>) over a wire, and then burn that computer. Of course this is not a great solution because you would have to trust whoever operated that computer that they didn’t build a secret communication channel that tells them the secret <script type="math/tex">s</script>.</p>
<p>In practice this is usually implemented via a secure multiparty computation (MPC), which allows creating these group elements by a group of computers in a way such that no single computer will know the secret <script type="math/tex">s</script>, and all of them would have to collude (or be compromised) in order to reveal it.</p>
<p>Note one thing that is not possible: You can’t do this by just selecting a random group element <script type="math/tex">[s]_1</script> (for which <script type="math/tex">s</script> is unknown) and compute the other group elements from it. It is impossible to compute <script type="math/tex">[s^2]_1</script> without knowing <script type="math/tex">s</script>.</p>
<p>Now, elliptic curve cryptography basically tells us that it’s impossible to find out what <script type="math/tex">s</script> actually is from the trusted setup group elements. It’s a number in <script type="math/tex">\mathbb F_p</script>, but the prover cannot find the actual number. They can only do certain computations with the elements that they are given. So for example, they can easily compute things like <script type="math/tex">c [s^i]_1 = c s^i G = [cs^i]_1</script> by elliptic curve multiplication, and since they can add elliptic curve points, they can also compute something like <script type="math/tex">c [s^i]_1 + d [s^j]_1 = (c s^i + d s^j) G = [cs^i + d s^j]_1</script>. In fact, if <script type="math/tex">p(X) = \sum_{i=0}^{n} p_i X^i</script> is a polynomial, the prover can compute</p>
<script type="math/tex; mode=display">\displaystyle
[p(s)]_1 = [\sum_{i=0}^{n} p_i s^i]_1 = \sum_{i=0}^{n} p_i [s^i]_1</script>
<p>This is interesting – using this trusted setup, everyone can basically evaluate a polynomial at some secret point <script type="math/tex">s</script> that nobody knows. Except they don’t get the output as a natural number, they only get the elliptic curve point <script type="math/tex">[p(s)]_1 = p(s) G</script>, but it turns out that this is already really powerful.</p>
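<p>The following Python sketch mimics this computation with the group “in the clear” – plain field elements stand in for the curve points <script type="math/tex">[s^i]_1</script>. It demonstrates the linear algebra of committing, but unlike the real scheme it hides nothing about <script type="math/tex">s</script> (the modulus and names are illustrative):</p>

```python
import random

MODULUS = 2**31 - 1  # toy prime standing in for the curve order

# "Trusted setup": the real scheme publishes only the curve points [s^i]_1
# and s stays secret; in this plain-integer toy the powers of s are fully
# visible, so it shows the algebra but none of the hiding.
s = random.randrange(1, MODULUS)
setup = [pow(s, i, MODULUS) for i in range(8)]  # stand-ins for [s^0]_1 ... [s^7]_1

def commit(coeffs):
    # [p(s)]_1 = sum_i p_i [s^i]_1: a linear combination of the setup
    # elements, computable without ever using s itself
    return sum(c * g for c, g in zip(coeffs, setup)) % MODULUS

coeffs = [5, 1, 0, 4]  # p(X) = 5 + X + 4X^3
assert commit(coeffs) == (5 + s + 4 * pow(s, 3, MODULUS)) % MODULUS  # = p(s)
```

<p>Note that <code class="highlighter-rouge">commit</code> only reads the published <code class="highlighter-rouge">setup</code> list – this is exactly what the prover can do with the trusted setup elements.</p>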
<h2 id="kate-commitment">Kate commitment</h2>
<p>In the Kate commitment scheme, the element <script type="math/tex">C = [p(s)]_1</script> is the commitment to the polynomial <script type="math/tex">p(X)</script>.</p>
<p>Now you may ask the question: Could the prover (without knowing <script type="math/tex">s</script>) find another polynomial <script type="math/tex">q(X) \neq p(X)</script> that has the same commitment, i.e. such that <script type="math/tex">[p(s)]_1 = [q(s)]_1</script>? Let’s assume that this were the case. Then it would mean that <script type="math/tex">[p(s) - q(s)]_1=[0]_1</script>, implying <script type="math/tex">p(s)-q(s)=0</script>.</p>
<p>Now, <script type="math/tex">r(X) = p(X)-q(X)</script> is itself a polynomial. We know that it’s not constant because <script type="math/tex">p(X) \neq q(X)</script>. It is a well-known fact that any non-constant polynomial of degree <script type="math/tex">n</script> can have at most <script type="math/tex">n</script> zeroes: This is because if <script type="math/tex">r(z)=0</script>, then <script type="math/tex">r(X)</script> is divisible by the linear factor <script type="math/tex">X-z</script>; since we can divide by one linear factor for each zero, and each division reduces the degree by one, there can’t be more than <script type="math/tex">n</script>.<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup></p>
<p>Since the prover doesn’t know <script type="math/tex">s</script>, the only way they could achieve that <script type="math/tex">p(s)-q(s)=0</script> is by making <script type="math/tex">p(X)-q(X)=0</script> in as many places as possible. But since they can do that in at most <script type="math/tex">n</script> places, as we’ve just proved, they are very unlikely to succeed: since <script type="math/tex">n</script> is much smaller than the order of the curve <script type="math/tex">p</script>, the probability that <script type="math/tex">s</script> will be one of the points they chose to make <script type="math/tex">p(X)=q(X)</script> will be vanishingly tiny. To get a feeling for this probability, suppose we use the largest trusted setups currently in existence, where <script type="math/tex">n=2^{28}</script>, and compare it to the curve order <script type="math/tex">p \approx 2^{256}</script>: The probability that any given polynomial <script type="math/tex">q(X)</script> that the attacker has crafted to agree with <script type="math/tex">p(X)</script> in as many points as possible – <script type="math/tex">n=2^{28}</script> points – results in the same commitment (<script type="math/tex">p(s)=q(s)</script>) will only be <script type="math/tex">2^{28}/2^{256} = 2^{28-256} \approx 2 \cdot 10^{-69}</script>. That is an incredibly low probability and in practice means the attacker cannot pull this off.</p>
<h3 id="multiplying-polynomials">Multiplying polynomials</h3>
<p>So far we have seen that we can evaluate a polynomial at a secret point <script type="math/tex">s</script>, and that gives us a way to commit to one unique polynomial – in the sense that while there are many polynomials with the same commitment <script type="math/tex">C=[p(s)]_1</script>, they are impossible to actually compute in practice (cryptographers call this <em>computationally binding</em>).</p>
<p>However, we are still missing the ability to “open” this commitment without actually sending the whole polynomial over to the verifier. In order to do this, we need to use the pairing. Above, we noticed that we can do some linear operations with the secret elements; for example, we can compute <script type="math/tex">[p(s)]_1</script> as a commitment to <script type="math/tex">p(X)</script>, and we could also add two commitments for <script type="math/tex">p(X)</script> and <script type="math/tex">q(X)</script> to make a combined commitment for <script type="math/tex">p(X)+q(X)</script>: <script type="math/tex">[p(s)]_1+[q(s)]_1=[p(s)+q(s)]_1</script>.</p>
<p>What we’re missing is an ability to multiply two polynomials. If we can do that, we can use some cool properties of polynomials to achieve what we want. While elliptic curves themselves don’t allow this, luckily we can do it with pairings: We have that</p>
<script type="math/tex; mode=display">\displaystyle
e([a]_1, [b]_2) = e(G, H)^{(ab)} = [ab]_T</script>
<p>where we introduce the new notation <script type="math/tex">[x]_T = e(G, H)^x</script>. So while we unfortunately can’t just multiply two field elements <em>inside</em> an elliptic curve and get their product as an elliptic curve element (this would be a property of so-called Fully Homomorphic Encryption or FHE; elliptic curves are only <em>additively homomorphic</em>), we can multiply two field elements if we committed to them in different curves (one in <script type="math/tex">\mathbb G_1</script> and one in <script type="math/tex">\mathbb G_2</script>), and the output is a <script type="math/tex">\mathbb G_T</script> element.</p>
<p>This gets us to the core of the Kate proof: Remember what we said about linear factors earlier: A polynomial is divisible by <script type="math/tex">X-z</script> if it has a zero at <script type="math/tex">z</script>. It is easy to see that the converse is also true – if it is divisible by <script type="math/tex">X-z</script>, then it clearly has a zero at <script type="math/tex">z</script>: Being divisible by <script type="math/tex">X-z</script> means that we can write <script type="math/tex">p(X)=(X-z) \cdot q(X)</script> for some polynomial <script type="math/tex">q(X)</script>, and this is clearly zero at <script type="math/tex">X=z</script>.</p>
<p>Now let’s say we want to prove that <script type="math/tex">p(z)=y</script>. We will use the polynomial <script type="math/tex">p(X)-y</script> – this polynomial is clearly zero at <script type="math/tex">z</script>, so we can use the knowledge about linear factors. Let <script type="math/tex">q(X)</script> be the polynomial <script type="math/tex">p(X)-y</script> divided by the linear factor <script type="math/tex">X-z</script>, i.e.</p>
<script type="math/tex; mode=display">\displaystyle
q(X) = \frac{p(X)-y}{X-z}</script>
<p>which is equivalent to saying that <script type="math/tex">q(X)(X-z) = p(X)-y</script>.</p>
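<p>Computing <script type="math/tex">q(X)</script> is an ordinary polynomial division by a linear factor, which can be done with synthetic division. A minimal Python sketch over a toy prime field (the function name and modulus are illustrative):</p>

```python
MODULUS = 2**31 - 1  # toy prime field

def divide_by_linear(coeffs, z, y):
    """Divide p(X) - y by (X - z) over the field, via synthetic division.

    coeffs is [p_0, p_1, ..., p_n] (lowest degree first); returns the
    coefficients of q(X).  Fails if p(z) != y, because then the division
    leaves a nonzero remainder.
    """
    coeffs = coeffs[:]
    coeffs[0] = (coeffs[0] - y) % MODULUS  # subtract y from the constant term
    q = [0] * (len(coeffs) - 1)
    carry = 0
    for i in reversed(range(1, len(coeffs))):
        carry = (coeffs[i] + z * carry) % MODULUS
        q[i - 1] = carry
    remainder = (coeffs[0] + z * carry) % MODULUS
    assert remainder == 0, "p(z) != y, so (X - z) does not divide p(X) - y"
    return q

# p(X) = 3 + 2X^2 + 5X^3, with p(7) = 1816:
# (p(X) - 1816) / (X - 7) = 259 + 37X + 5X^2
assert divide_by_linear([3, 0, 2, 5], 7, 1816) == [259, 37, 5]
```

<p>The nonzero-remainder assertion is exactly the soundness observation below: for a wrong claimed value the division simply cannot be carried out.</p>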
<h3 id="kate-proofs">Kate proofs</h3>
<p>Now a Kate proof for the evaluation <script type="math/tex">p(z)=y</script> is defined as <script type="math/tex">\pi=[q(s)]_1</script>. Remember the commitment to the polynomial <script type="math/tex">p(X)</script> is defined as <script type="math/tex">C=[p(s)]_1</script>.</p>
<p>The verifier checks this proof using the following equation:</p>
<script type="math/tex; mode=display">\displaystyle
e(\pi,[s-z]_2) = e(C-[y]_1, H)</script>
<p>Note that the verifier can compute <script type="math/tex">[s-z]_2</script>, because it is just a combination of the element <script type="math/tex">[s]_2</script> from the trusted setup and <script type="math/tex">z</script> is the point at which the polynomial is evaluated. Equally they know <script type="math/tex">y</script> as the claimed value <script type="math/tex">p(z)</script>, thus they can compute <script type="math/tex">[y]_1</script> as well. Now why should this check convince the verifier that <script type="math/tex">p(z)=y</script>, or more precisely, that the polynomial committed to by <script type="math/tex">C</script> evaluated at <script type="math/tex">z</script> is <script type="math/tex">y</script>?</p>
<p>We need to evaluate two properties: <em>Correctness</em> and <em>soundness</em>. <em>Correctness</em> means that, if the prover followed the steps as we defined, they can produce a proof that will check out. This is usually the easy part. <em>Soundness</em> is the property that they cannot produce an “incorrect” proof – they cannot trick the verifier into believing that <script type="math/tex">p(z)=y'</script> for some <script type="math/tex">y'\neq y</script>.</p>
<p>Let’s start by writing out the equation in the pairing group:</p>
<script type="math/tex; mode=display">\displaystyle
[q(s) \cdot (s-z)]_T = [p(s) - y]_T</script>
<p><em>Correctness</em> should now be immediately apparent – this is just the equation <script type="math/tex">q(X)(X-z) = p(X)-y</script> evaluated at the random point <script type="math/tex">s</script> that nobody knows.</p>
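<p>This identity can be checked numerically. Take <script type="math/tex">p(X) = 3 + 2X^2 + 5X^3</script> with <script type="math/tex">z = 7</script> and <script type="math/tex">y = p(7) = 1816</script>; dividing <script type="math/tex">p(X)-y</script> by <script type="math/tex">X-z</script> gives <script type="math/tex">q(X) = 259 + 37X + 5X^2</script>. Since the identity holds at <em>every</em> point, it holds in particular at the secret <script type="math/tex">s</script>, wherever it is (a plain-field Python sketch with an illustrative modulus):</p>

```python
import random

MODULUS = 2**31 - 1  # toy prime field

# p(X) = 3 + 2X^2 + 5X^3, z = 7, y = p(7) = 1816, and
# q(X) = (p(X) - y) / (X - z) = 259 + 37X + 5X^2
p = lambda x: (3 + 2 * x * x + 5 * x**3) % MODULUS
q = lambda x: (259 + 37 * x + 5 * x * x) % MODULUS
z, y = 7, 1816

# The pairing check [q(s)(s - z)]_T = [p(s) - y]_T is just this
# polynomial identity evaluated at the hidden point s
for _ in range(100):
    s = random.randrange(MODULUS)
    assert q(s) * (s - z) % MODULUS == (p(s) - y) % MODULUS
```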
<p>Now, how do we know it’s sound and the prover cannot create fake proofs? Let’s think of it in terms of polynomials. If the prover wants to follow the way we sketched out to construct a proof, they have to somehow divide <script type="math/tex">p(X)-y'</script> by <script type="math/tex">X-z</script>. But <script type="math/tex">p(z)-y'</script> is not zero, so they cannot perform the polynomial division, as there will always be a remainder. So this clearly doesn’t work.</p>
<p>So then they can try to work directly in the elliptic group: What if, for some commitment <script type="math/tex">C</script>, they could compute the elliptic group element</p>
<script type="math/tex; mode=display">\displaystyle
\pi_\text{Fake} = (C-[y']_1)^{\frac{1}{s-z}}</script>
<p>If they could do this, they could obviously just prove anything they want. Intuitively, this is hard because you have to exponentiate by something that involves <script type="math/tex">s</script>, but you don’t know anything about <script type="math/tex">s</script>. To prove it rigorously, you need a cryptographic assumption about proofs with pairings, the so-called <script type="math/tex">q</script>-SDH (strong Diffie–Hellman) assumption <sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup>.</p>
<h3 id="multiproofs">Multiproofs</h3>
<p>So far we have shown how to prove an evaluation of a polynomial at a single point. Note that this is already pretty amazing: You can show that a polynomial of arbitrarily high degree – say <script type="math/tex">2^{28}</script> – takes a certain value at some point, by sending only a single group element (that could be <script type="math/tex">48</script> bytes, for example in BLS12_381). The toy example of using Merkle trees as polynomial commitments would have needed to send <script type="math/tex">2^{28}</script> elements – all the coefficients of the polynomial.</p>
<p>Now we will go one step further and show that you can evaluate a polynomial at <em>any</em> number of points and prove this, still using only one group element. In order to do this, we need to introduce another concept: the interpolation polynomial. Let’s say we have a list of <script type="math/tex">k</script> points <script type="math/tex">(z_0, y_0), (z_1, y_1), \ldots, (z_{k-1}, y_{k-1})</script>: Then we can always find a polynomial of degree less than <script type="math/tex">k</script> that goes through all of these points. One way to see this is to use Lagrange interpolation, which gives an explicit formula for this polynomial <script type="math/tex">I(X)</script>:</p>
<script type="math/tex; mode=display">\displaystyle
I(X) = \sum_{i=0}^{k-1} y_i \prod_{j=0 \atop j \neq i}^{k-1} \frac{X-z_j}{z_i-z_j}</script>
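<p>As a concrete sketch, the Lagrange formula above can be evaluated directly over a small prime field (the modulus and points here are toy values chosen only for illustration; a real scheme works in the elliptic curve’s scalar field):</p>

```python
# Evaluate the Lagrange interpolation polynomial I(X) at a point x,
# working over a toy prime field F_P.
P = 7919  # illustrative small prime

def interpolate_eval(points, x):
    """points = [(z_i, y_i)]; returns I(x) mod P."""
    total = 0
    for i, (zi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (zj, _) in enumerate(points):
            if j != i:
                num = num * (x - zj) % P
                den = den * (zi - zj) % P
        # Each term is y_i * prod_{j != i} (x - z_j) / (z_i - z_j)
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

points = [(1, 5), (2, 11), (4, 47)]
# I(X) passes through every given point:
assert all(interpolate_eval(points, z) == y for z, y in points)
```

<p>Here <code>pow(den, -1, P)</code> computes the modular inverse, which is why the arithmetic has to happen in a field.</p>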
<p>Now let’s assume that we know <script type="math/tex">p(X)</script> goes through all these points. Then the polynomial <script type="math/tex">p(X)-I(X)</script> will clearly be zero at each of <script type="math/tex">z_0, z_1, \ldots, z_{k-1}</script>. This means that it is divisible by all the linear factors <script type="math/tex">(X-z_0), (X-z_1), \ldots, (X-z_{k-1})</script>. We combine them all together in the so-called <em>zero polynomial</em></p>
<script type="math/tex; mode=display">\displaystyle
Z(X) = (X-z_0) \cdot (X-z_1) \cdots (X-z_{k-1})</script>
<p>Now, we can compute the quotient</p>
<script type="math/tex; mode=display">\displaystyle
q(X) = \frac{p(X) - I(X)}{Z(X)}</script>
<p>Note that this is possible because <script type="math/tex">p(X)-I(X)</script> is divisible by all the linear factors in <script type="math/tex">Z(X)</script>, so it is divisible by the whole of <script type="math/tex">Z(X)</script>.</p>
<p>We can now define the Kate multiproof for the evaluations <script type="math/tex">(z_0, y_0), (z_1, y_1), \ldots, (z_{k-1}, y_{k-1})</script>: <script type="math/tex">\pi=[q(s)]_1</script> – note that this is still only one group element.</p>
<p>Now, to check this, the verifier will also have to compute the interpolation polynomial <script type="math/tex">I(X)</script> and the zero polynomial <script type="math/tex">Z(X)</script>. Using this, they can compute <script type="math/tex">[Z(s)]_2</script> and <script type="math/tex">[I(s)]_1</script>, and thus verify the pairing equation</p>
<script type="math/tex; mode=display">\displaystyle
e(\pi,[Z(s)]_2) = e(C-[I(s)]_1, H)</script>
<p>By writing out the equation in the pairing group, we can easily verify it checks out the same way the single point Kate proof does:</p>
<script type="math/tex; mode=display">\displaystyle
[q(s)\cdot Z(s)]_T = [p(s)-I(s)]_T</script>
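<p>The algebra behind this multiproof can be walked through with plain field arithmetic. The sketch below uses made-up numbers and a stand-in “secret” <em>s</em> that is known to the code; in the real scheme <em>s</em> is hidden in the trusted setup and the final identity is checked with a pairing, not directly:</p>

```python
# Toy walk-through of the multiproof algebra over a small prime field:
# build I(X) from the points, divide p(X) - I(X) by Z(X) one linear
# factor at a time, and check q(s)*Z(s) = p(s) - I(s) at a stand-in s.
P = 7919  # illustrative small prime

def poly_eval(coeffs, x):
    """Evaluate a coefficient list (lowest degree first) at x, mod P."""
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def lagrange_coeffs(points):
    """Coefficients of the interpolation polynomial through the points."""
    coeffs = [0] * len(points)
    for i, (zi, yi) in enumerate(points):
        num, den = [1], 1  # numerator polynomial, scalar denominator
        for j, (zj, _) in enumerate(points):
            if j != i:
                # multiply num by the linear factor (X - z_j)
                num = [(a - zj * b) % P for a, b in zip([0] + num, num + [0])]
                den = den * (zi - zj) % P
        scale = yi * pow(den, -1, P) % P
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs

def divide_linear(coeffs, z):
    """Divide by (X - z) via synthetic division; return (quotient, remainder)."""
    quot, carry = [0] * (len(coeffs) - 1), 0
    for k in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[k] + carry * z) % P
        quot[k - 1] = carry
    return quot, (coeffs[0] + carry * z) % P

p_coeffs = [3, 1, 4, 1, 5]                   # an arbitrary degree-4 p(X)
zs = [2, 5, 7]                               # the evaluation points z_i
points = [(z, poly_eval(p_coeffs, z)) for z in zs]
i_coeffs = lagrange_coeffs(points) + [0, 0]  # pad I(X) to p's length
diff = [(a - b) % P for a, b in zip(p_coeffs, i_coeffs)]
for z in zs:                                 # divide out Z(X) factor by factor
    diff, rem = divide_linear(diff, z)
    assert rem == 0                          # p - I is divisible by each (X - z_i)
q_coeffs = diff                              # q(X) = (p(X) - I(X)) / Z(X)

s = 1234                                     # stand-in secret, known only in this toy
z_at_s = 1
for z in zs:
    z_at_s = z_at_s * (s - z) % P
lhs = poly_eval(q_coeffs, s) * z_at_s % P
rhs = (poly_eval(p_coeffs, s) - poly_eval(i_coeffs, s)) % P
assert lhs == rhs                            # mirrors the pairing equation above
```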
<p>This is actually really cool: you can prove any number of evaluations – even a million – by providing just one group element! That’s only 48 bytes to prove all these evaluations!</p>
<h2 id="kate-as-a-vector-commitment">Kate as a vector commitment</h2>
<p>While the Kate commitment scheme is designed as a polynomial commitment, it actually also makes a really nice vector commitment. Remember that a vector commitment commits to a vector <script type="math/tex">a_0, \ldots, a_{n-1}</script> and lets you prove that you committed to <script type="math/tex">a_i</script> for some <script type="math/tex">i</script>. We can reproduce this using the Kate commitment scheme: Let <script type="math/tex">p(X)</script> be the polynomial that for all <script type="math/tex">i</script> evaluates as <script type="math/tex">p(i)=a_i</script>. We know there is such a polynomial, and we can for example compute it using Lagrange interpolation:
<script type="math/tex">\displaystyle
p(X) = \sum_{i=0}^{n-1} a_i \prod_{j=0 \atop j \neq i}^{n-1} \frac{X-j}{i-j}</script></p>
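<p>A toy version of this vector-to-polynomial encoding (illustrative modulus; the real scheme would then commit to <em>p</em> with the Kate commitment rather than expose it):</p>

```python
# The vector (a_0, ..., a_{n-1}) becomes the unique polynomial with
# p(i) = a_i, evaluated here via the Lagrange formula mod a toy prime.
P = 7919  # illustrative small prime

def vector_poly_eval(vector, x):
    """Evaluate the interpolation polynomial of the vector at x, mod P."""
    total = 0
    for i, a in enumerate(vector):
        num, den = 1, 1
        for j in range(len(vector)):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + a * num * pow(den, -1, P)) % P
    return total

vec = [17, 29, 4, 8]
# The polynomial reproduces every vector element at its index:
assert all(vector_poly_eval(vec, i) == a for i, a in enumerate(vec))
```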
<p>Now using this polynomial, we can prove any number of elements in the vector using just a single group element! Note how much more efficient (in terms of proof size) this is compared to Merkle trees: a Merkle proof would cost <script type="math/tex">\log n</script> hashes just to prove a single element!</p>
<h2 id="further-reading">Further reading</h2>
<p>We are currently exploring the use of Kate commitments in order to achieve a stateless version of Ethereum. As such, I highly recommend searching for <a href="https://ethresear.ch/search?q=kate">Kate</a> in the ethresear.ch forums to find interesting topics of current research.</p>
<p>Another great read from here is Vitalik’s <a href="https://vitalik.ca/general/2019/09/22/plonk.html">introduction to PLONK</a>, which makes heavy use of polynomial commitments, with the Kate scheme as the primary way to instantiate them.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p><a href="https://www.iacr.org/archive/asiacrypt2010/6477178/6477178.pdf">https://www.iacr.org/archive/asiacrypt2010/6477178/6477178.pdf</a> <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>This result is often mis-quoted as the fundamental theorem of algebra. But the fundamental theorem of algebra is actually the converse result (only valid in algebraically closed fields): over the complex numbers, every polynomial of degree <script type="math/tex">n</script> has <script type="math/tex">n</script> linear factors. The simpler result here unfortunately doesn’t come with a short catchy name, despite arguably being more fundamental to algebra. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p><a href="https://www.cs.cmu.edu/~goyal/ibe.pdf">https://www.cs.cmu.edu/~goyal/ibe.pdf</a> <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>KZG polynomial commitmentsData availability checks2019-12-20T14:50:38+00:002019-12-20T14:50:38+00:00https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks<h1 id="primer-on-data-availability-checks">Primer on data availability checks</h1>
<p>This post is an attempt to explain data availability checks, and why they are needed in scaling solutions for blockchains, such as Ethereum 2.0. I assume a basic background in blockchains (Bitcoin, Ethereum) and ideally some background in the consensus algorithms that are used (Proof of Work and Proof of Stake). For simplicity, I will be describing this for a Proof of Stake chain in which all full nodes run the consensus protocol with equal weights and a <script type="math/tex">2/3</script> honesty assumption; but the same applies to Proof of Work and other protocols.</p>
<h2 id="setting-the-scene">Setting the scene</h2>
<p>Think of a blockchain that has full nodes and light clients, as well as a peer to peer network that may be lossy but does not adaptively censor data. Light clients are a cheaper alternative to full nodes. In traditional blockchain protocols, we assume all clients are running full nodes that verify every transaction in a state transition. Running a full node requires a machine with large amounts of memory, computational power and bandwidth. This cost can be too high for mobile clients and many resource-constrained environments.</p>
<p>A light client is a node that only downloads the header from each block and trusts the full nodes to check that the state transitions are correct – and assumes that the consensus algorithm would not produce a chain that violates this. Light clients rely on full nodes to provide the information inside the blocks for any relevant transaction. This is likely to be a very small percentage of the total data on chain.</p>
<p>So for the purposes of this introduction, we will have three categories of actors:</p>
<p><strong>Full nodes</strong> produce a chain via consensus on each block and always download all data as well as verifying the state. Whenever they see a block with an inconsistent state (i.e. the final state of the block is not consistent with the transactions in that block), they produce a fraud proof to alert the light clients of this.</p>
<p><strong>Light clients</strong> only download block headers (not transaction data and state), except for the transactions and parts of the state they are interested in. They are connected to full nodes to request the data they need.</p>
<p>The <strong>peer to peer network</strong> distributes block headers and allows random access to chunks of data that were uploaded to it.</p>
<p>A full node has the following security guarantees:</p>
<ul>
<li>A consensus (super-)majority of the other full nodes can create an alternative chain, and thus perform a double spending attack; more generally, they can create alternative versions of history with transactions arbitrarily reordered</li>
<li>Since it is checking the state, even a supermajority of the other full nodes agreeing on an inconsistent state can never get an honest full node to agree to this chain.</li>
</ul>
<p>So for a full node the security assumption is that <script type="math/tex">2/3</script> of full nodes are honest to guarantee that transactions will not be reordered, but no honesty assumption whatsoever is needed to ensure correct state execution (a full node simply can never be tricked into accepting an incorrect state transition).</p>
<p>For light clients, things are slightly different, because they don’t download and verify the state. The naïve light client without fraud proofs (which we’ll come to below) can thus be tricked into believing that a chain is good by a supermajority (<script type="math/tex">2/3</script>) of the full nodes, even if it actually has an incorrect state transition.</p>
<h2 id="fraud-proofs">Fraud proofs</h2>
<p>Fraud proofs are a way to give the light client a better security model, which is closer to that of the full node. The aim is that the light clients will also be protected from invalid chains, as long as there is at least one honest full node (a much weaker assumption than a <script type="math/tex">2/3</script> majority).</p>
<p>How do fraud proofs achieve this? Let’s assume that the blockchain executes <script type="math/tex">n</script> transactions <script type="math/tex">t_1, \ldots, t_n</script> in the block <script type="math/tex">B</script> with block header <script type="math/tex">H</script>. If we add an execution trail that stores the Merkle root of the state before and after each transaction, let’s call it <script type="math/tex">s_0, \ldots, s_n</script>, then a fraud proof can be constructed if any of the transactions was executed incorrectly (i.e. its effect was not correctly applied to the state): if, say, <script type="math/tex">t_i</script> is the faulty transaction, then giving the triplet <script type="math/tex">(s_{i-1}, t_i, s_i)</script>, together with Merkle proofs showing their inclusion in <script type="math/tex">H</script>, will constitute a fraud proof. In fact, we only need to include the parts of <script type="math/tex">s_{i-1}</script> and <script type="math/tex">s_i</script> that <script type="math/tex">t_i</script> needs or affects. This fraud proof is much smaller than the original block <script type="math/tex">B</script> and can thus be easily broadcast in the network to warn light clients not to follow this chain.</p>
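<p>A much-simplified sketch of this mechanism (the transaction format is made up, and whole balance tables stand in for the Merkle roots and Merkle proofs a real chain would use):</p>

```python
# Simplified fraud-proof sketch: whole states stand in for Merkle roots
# s_i, and hashing the full balance table stands in for the state root.
import hashlib
import json

def state_root(state):
    """Hash of the full state; a real chain would use a Merkle root."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_tx(state, tx):
    """tx = (sender, receiver, amount); returns the resulting state."""
    sender, receiver, amount = tx
    new = dict(state)
    assert new.get(sender, 0) >= amount, "insufficient balance"
    new[sender] -= amount
    new[receiver] = new.get(receiver, 0) + amount
    return new

def find_fraud(txs, claimed_roots, states):
    """Full node: return the triplet (s_{i-1}, t_i, s_i) for the first
    transaction whose claimed post-state does not match re-execution."""
    for i, tx in enumerate(txs):
        if state_root(apply_tx(states[i], tx)) != claimed_roots[i + 1]:
            return states[i], tx, states[i + 1]
    return None

genesis = {"alice": 100, "bob": 0}
txs = [("alice", "bob", 30)]
bad_state = {"alice": 70, "bob": 300}  # producer credits 300 instead of 30
claimed_roots = [state_root(genesis), state_root(bad_state)]

proof = find_fraud(txs, claimed_roots, [genesis, bad_state])
assert proof is not None
s_prev, tx, s_next = proof
# A light client verifies the fraud proof by re-executing one transaction:
assert state_root(apply_tx(s_prev, tx)) != claimed_roots[1]
```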
<p>So now we have a security assumption for light clients that’s much better than previously:</p>
<ul>
<li>A dishonest <script type="math/tex">2/3</script> majority of full nodes can create an alternative chain, and thus change history/reorder transactions (enabling, for example, a double spending attack)</li>
<li>But in order to prevent an incorrect state transition, the assumption is now that there is at least <strong>one</strong> honest full node (which will create a fraud proof) and that the network is synchronous (so that you receive that fraud proof in time)</li>
</ul>
<h2 id="data-availability-problem">Data availability problem</h2>
<p>There is one gap in the solution of using fraud proofs to protect light clients from incorrect state transitions: What if a supermajority of the full nodes has signed a block header, but will not publish some of the data (in particular, it could be fraudulent transactions that they will publish later to trick someone into accepting printed/stolen money)? The honest full nodes, obviously, will not follow this chain, as they can’t download the data. But the light clients will not know that the data is not available since they don’t try to download the data, only the header. So we are in a situation where the honest full nodes know that something fishy is going on, but they have no means of alerting the light clients, as they are missing the piece of data that might be needed to create a fraud proof.</p>
<p>Couldn’t they just alert the light clients with another kind of message and tell them “hey, watch out, the data in this block is not available”? Yes, but the problem is that they can’t prove it: There is no proof of data being unavailable, so the simple fraud proof mechanism from above does not work.</p>
<p>What’s even worse is that it’s not an attributable fault. It is possible that some data gets lost due to bad networking conditions, and that data might reappear later. So if you are an honest node that sees a data unavailability alarm, and when you check you see that the data is actually there, you cannot for sure know who is at fault: It may be that the producer did not upload the data in the beginning, but only after the alert was created (producer fault), or it could be that it was a false alarm.</p>
<p>Because it’s not an attributable fault, we cannot punish either the producer or the challenger as a consequence of the alarm. That’s annoying because it basically means adding this functionality creates a DOS vector (Vitalik wrote down a very nice description of this problem <a href="https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding">here</a>).</p>
<h1 id="the-solution-data-availability-checks-using-erasure-codes">The solution: data availability checks using erasure codes</h1>
<p>The solution to this conundrum is to make sure that light clients can know whether the data is actually available. Because if they know that the data is available, they know there will likely be an honest full node who has seen and checked it – and broadcast a fraud proof if it’s incorrect/fraudulent.</p>
<p>Of course we don’t want the light clients to have to download the full chain and state to do this – then they wouldn’t be light clients anymore. So instead we will let them download random chunks of data and check that they are available. If you try downloading 100 different chunks of data, and you get all of them, you can be pretty sure most of the data is available (e.g., if less than <script type="math/tex">50\%</script> were available, the probability of you successfully downloading 100 chunks would be <script type="math/tex">2^{-100}\approx 10^{-30}</script>, an incredibly small number).</p>
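<p>The numbers in this argument are easy to check (toy model: each query hits a uniformly random chunk, and queries are independent):</p>

```python
# Probability that k independent random chunk downloads all succeed
# when only a fraction q of the chunks is actually available.
def all_samples_succeed(q, k):
    return q ** k

# With half the data missing, 100 successful samples are essentially
# impossible: 0.5**100 is below 10**-30.
assert all_samples_succeed(0.5, 100) < 1e-30
```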
<p>However, it only proves that <strong>most</strong> data is available – let’s say a mere 100 bytes in a 10 megabyte block of data are missing: The chances that you will request something from exactly that bit of data are slim. And one hundred bytes are enough to hide an evil transaction from the honest fraud provers.</p>
<p>So we need to do something to the data to make sure that those checks actually ensure that <strong>all</strong> the data will be available. This can be done using <strong>erasure codes</strong>. An erasure code replaces the block data <script type="math/tex">B</script> with a larger amount of data <script type="math/tex">E</script> with the property that some fixed fraction <script type="math/tex">q \lt 1</script> of it will always be enough to reconstruct the whole data. So even if some of the data is missing, as long as light clients can make sure a large enough fraction is available, they know that <script type="math/tex">B</script> can be reconstructed.</p>
<p>Now we are ready to define the behaviour of light clients in the presence of data availability checks. For every block header they download, they will now try to download <script type="math/tex">k</script> random chunks of <script type="math/tex">E</script> in order to assess whether the data is actually available. If they can download all of those chunks, then with probability <script type="math/tex">1-q^k</script> there is actually enough data to reconstruct the whole block on the network.</p>
<p>Using this mechanism, there is no need for full nodes to “alert” the light clients of data not being available. By just downloading a small amount of data, they can test it for themselves and know.</p>
<h1 id="example-for-erasure-codes-reed-solomon-codes">Example for erasure codes: Reed-Solomon codes</h1>
<p>How do we actually construct erasure codes? A simple and well-known example are Reed-Solomon codes. They are based on the simple fact that any polynomial of degree <script type="math/tex">d</script> over a field is uniquely determined by its evaluation at <script type="math/tex">d+1</script> points. For example, if the polynomial is of degree 1 (a line), then knowing the polynomial at two points is sufficient to know the whole polynomial (there is only one line going through two distinct points).</p>
<p>To work with polynomials we have to work over a <a href="https://en.wikipedia.org/wiki/Finite_field">finite field</a>, as otherwise the coefficients and evaluations could become arbitrarily large. Luckily, there are fields of any size <script type="math/tex">2^m</script> available (the so-called binary fields or Galois fields over <script type="math/tex">\mathbb{F}_2</script>), so that we don’t have to work over some prime field <script type="math/tex">\mathbb{F}_p</script> (though we may have to do that for other reasons in certain schemes).</p>
<p>So let’s say that we have <script type="math/tex">n</script> chunks of data <script type="math/tex">d_0, \ldots, d_{n-1}</script> which we want to erasure code. In order to do this with a Reed-Solomon code, we will interpolate a polynomial
<script type="math/tex">\displaystyle f(x) = \sum_{i=0}^{n-1}a_i x^i</script> of degree <script type="math/tex">d=n-1</script> that evaluates to <script type="math/tex">d_0</script> at <script type="math/tex">0</script>, i.e. <script type="math/tex">f(0)=d_0</script>, <script type="math/tex">f(1)=d_1</script>, and so on. We know such a polynomial exists, and in fact the <a href="https://en.wikipedia.org/wiki/Lagrange_polynomial">Lagrange interpolation polynomials</a> give us an explicit way to construct it (there are more efficient ways, though).</p>
<p>We now “extend” the data by evaluating the polynomial at some more points – for example <script type="math/tex">n</script> more points if we want to set the rate <script type="math/tex">q=0.5</script>. Thus <script type="math/tex">d_n = f(n), d_{n+1} = f(n+1), \ldots, d_{2n-1} = f(2n-1)</script>. We thus have the property that any <script type="math/tex">n</script> points will be sufficient to reconstruct the polynomial – and if we have the polynomial <script type="math/tex">f(x)</script>, we can also easily get our original data by just evaluating it at <script type="math/tex">0, \ldots, n-1</script>.</p>
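<p>A toy version of this extension (over a small prime field rather than a binary field, to keep the sketch short; the chunk values and the chosen subset are arbitrary):</p>

```python
# Toy Reed-Solomon extension with rate q = 0.5: n data chunks become
# 2n evaluations of a degree-(n-1) polynomial, any n of which suffice
# to reconstruct the original data. Arithmetic is over a toy prime field.
P = 7919  # illustrative small prime

def lagrange_eval(points, x):
    """Evaluate the interpolation polynomial through points at x, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

data = [42, 7, 19, 3]                 # d_0 .. d_{n-1}
n = len(data)
base = list(enumerate(data))          # f(i) = d_i
extended = [lagrange_eval(base, x) for x in range(2 * n)]
assert extended[:n] == data           # the code is systematic

# Reconstruct the data from an arbitrary subset of n of the 2n chunks:
subset = [(x, extended[x]) for x in (1, 4, 6, 7)]
recovered = [lagrange_eval(subset, i) for i in range(n)]
assert recovered == data
```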
<p>That’s all! Reed-Solomon codes are nothing more than some polynomial interpolation. This would in fact be the end of the story for data availability, as they are optimal in terms of coding efficiency – except for one small problem: there is another way in which fraud can happen, which is producing an incorrect encoding. And for Reed-Solomon codes, in order to prove that an encoding is incorrect, you have to provide <script type="math/tex">n+1</script> chunks of data: <script type="math/tex">n</script> of them are enough to interpolate the unique polynomial of degree <script type="math/tex">n-1</script>, and you then show that the remaining chunk does not lie on this polynomial. That’s why we’re currently doing lots of research on finding ways to either avoid having to do those incorrect encoding proofs or making them as small as possible.</p>
<h1 id="application-to-sharding">Application to sharding</h1>
<p>Data availability checks are important for many different blockchain scaling solutions, because they allow providing security to nodes even if those nodes are unable to check, or even download, all the data. Since this is a fundamental bottleneck of blockchains (consensus nodes having to download all data), that’s an important requirement for scaling.</p>
<p>For example, in Ethereum 2.0, validators will only fully validate the beacon chain, and the shards will be validated by committees. The point of this construction is to relieve the validators from having to validate everything. However, this means that the validators are actually light clients on most shards (except for the ones they are doing active work on). Thus, data availability checks are needed. In this case, the Ethereum 2.0 validators are actually “full nodes” and light clients at the same time. The kind of nodes that really download <strong>all</strong> shards and check them are called <strong>supernodes</strong> – these are likely only run by organizations or people staking so much that they would validate on all shards. And we definitely don’t want to just trust that this small minority has to be honest in order to run Ethereum 2.0.</p>
<p>Hence, it is absolutely essential to have data availability checks and fraud proofs so that normal people can run validators.</p>
<h1 id="further-reading">Further reading</h1>
<ol>
<li>
<p>Vitalik Buterin’s explanation of fraud proofs and erasure codes <a href="https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding">here</a></p>
<p>It introduces the multi-dimensional Reed-Solomon codes as a way to enable smaller proofs of incorrect encoding</p>
<p>Paper version <a href="https://arxiv.org/pdf/1809.09044.pdf">here</a></p>
</li>
<li>
<p>Latest ideas around alternatives to the multi-dimensional codes:</p>
<ul>
<li><a href="https://ethresear.ch/t/stark-proving-low-degree-ness-of-a-data-availability-root-some-analysis/6214">Using STARKs</a></li>
<li><a href="https://ethresear.ch/t/fri-as-erasure-code-fraud-proof/6610">Using FRIs</a></li>
<li><a href="https://ethresear.ch/t/an-alternative-low-degreeness-proof-using-polynomial-commitment-schemes/6649">Using Kate’s polynomial commitment scheme</a></li>
</ul>
</li>
</ol>Primer on data availability checksHi!2019-07-12T15:12:38+00:002019-07-12T15:12:38+00:00https://dankradfeist.de/update/2019/07/12/welcome-to-jekyll<p>Maybe I will post some content here at some point, or maybe I won’t.</p>
<p>But if I do, it will have awesome formulas set with MathJax:</p>
<script type="math/tex; mode=display">\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}</script>Maybe I will post some content here at some point, or maybe I won’t.