# Verkle trie for Eth1 state

This post is a quick summary of how verkle tries work and how they can be used to make Eth1 stateless. Note that this post is written with the KZG commitment scheme in mind as it is easy to understand and quite popular, but it can easily be replaced by any other “additively homomorphic” commitment scheme, meaning that it should be possible to compute the commitment to the sum of two polynomials by adding the two commitments.

## Using KZG as a Vector commitment

The KZG (Kate) scheme is a polynomial commitment scheme. Its primary functionality is the ability to commit to a polynomial $f(x)$ via an elliptic curve group element $C = [f(s)]_1$ (for notation see the linked post). We can then open this commitment at any point $z$ by giving the verifier the value $y=f(z)$ as well as a group element $\pi = [(f(s) - y)/(s-z)]_1$; this proof of the correctness of the value $y$ can be checked using a pairing equation.
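The algebraic fact behind this check can be sketched without elliptic curves: $f(z) = y$ exactly when $(x - z)$ divides $f(x) - y$ with zero remainder. A minimal illustration over the toy prime field $\mathbb{F}_{337}$ (the field and the polynomial are assumptions made up for the example; real KZG hides the quotient behind a pairing):

```python
P = 337  # toy prime field; real KZG uses the scalar field of a pairing-friendly curve

def poly_eval(coeffs, x):
    """Evaluate a polynomial (coefficients low-to-high) at x, mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, z):
    """Synthetic division by (x - z); returns (quotient coeffs, remainder)."""
    q, acc = [], 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % P
        q.append(acc)
    r = q.pop()      # the final accumulator is the remainder, i.e. the value at z
    q.reverse()
    return q, r

f = [5, 2, 3]             # f(x) = 3x^2 + 2x + 5
z = 4
y = poly_eval(f, z)       # the claimed opening value
g = f.copy()
g[0] = (g[0] - y) % P     # g(x) = f(x) - y
quotient, rem = divide_by_linear(g, z)
print(rem)                # 0: (x - z) divides f(x) - y, so the opening is valid
```

In KZG proper the prover publishes this quotient evaluated at the secret point, $\pi = [q(s)]_1$, and the verifier checks the same divisibility relation with one pairing equation instead of seeing the polynomial.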

A vector commitment is a commitment scheme that takes as an input $d$ different values $v_0, v_1, \ldots, v_{d-1}$ and produces a commitment $C$ that can be opened at any of these positions. As an example, a Merkle tree is a vector commitment, with the property that opening the $i$-th value requires $\log d$ hashes as a proof.

Let $\omega$ be a primitive $d$-th root of unity, i.e. $\omega^d=1$ and $\omega^i \neq 1$ for $0 < i < d$.

We can turn the Kate commitment into a vector commitment that allows committing to a vector of length $d$ by committing to the degree $d-1$ polynomial $f(x)$ that is defined by $f(\omega^i) = v_i$ for $0 \leq i < d$.¹

To open the commitment $C$ at any position $i$, we simply have to compute a Kate proof for $f(\omega^i)$. Fortunately, this proof is constant-sized: it does not depend on the width $d$. Even better, many of these proofs can be combined into a single proof, which is much cheaper to verify.
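To make the encoding concrete, here is a sketch of the interpolation step in the toy field $\mathbb{F}_{337}$, where $85$ happens to be a primitive $8$-th root of unity (the field, $d = 8$, and the values are assumptions for the example; in practice this is done with an FFT over the curve's scalar field):

```python
P, D, OMEGA = 337, 8, 85  # 85 has multiplicative order exactly 8 mod 337

assert pow(OMEGA, D, P) == 1 and all(pow(OMEGA, i, P) != 1 for i in range(1, D))

def interpolate(values):
    """Inverse DFT: coefficients of the degree < D polynomial with
    f(omega^i) = values[i], via c_j = D^{-1} * sum_i v_i * omega^{-i*j}."""
    d_inv = pow(D, P - 2, P)
    omega_inv = pow(OMEGA, P - 2, P)
    return [d_inv * sum(v * pow(omega_inv, i * j, P) for i, v in enumerate(values)) % P
            for j in range(D)]

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

v = [3, 1, 4, 1, 5, 9, 2, 6]        # the vector to commit to
f = interpolate(v)                   # coefficients of the committed polynomial
assert all(evaluate(f, pow(OMEGA, i, P)) == v[i] for i in range(D))
```

The naive $O(d^2)$ inverse DFT above is where the footnote's FFT convention pays off: because the evaluation points are powers of $\omega$, a real implementation replaces it with an $O(d \log d)$ inverse FFT.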

## Introduction to Verkle tries

Verkle is an amalgamation of “vector” and “Merkle”: verkle tries are built in a tree-like structure just like Merkle trees, but each node, instead of being a hash of the $d$ nodes below it ($d=2$ for binary Merkle trees), is a vector commitment to the $d$ nodes below it. $d$-ary Merkle trees are inefficient, because each proof has to include all the unaccessed siblings for each node on the path to a leaf. A $d$-ary Merkle tree thus needs $(d - 1) \log_d n = (d - 1) \frac{\log n}{\log d}$ hashes for a single proof, which is worse than the binary Merkle tree, which only needs $\log n$ hashes. This is because a hash function is a poor vector commitment: a proof requires all siblings to be given.

Better vector commitments change this equation; by using the KZG polynomial commitment scheme as a vector commitment, each level only requires a constant size proof, so the annoying factor of $d-1$ that kills $d$-ary Merkle trees disappears.
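Plugging in the numbers used later in this post ($n = 2^{30}$ keys, $d = 2^{10}$) makes the difference vivid; a quick back-of-the-envelope sketch (element counts only, ignoring the constant-size combined KZG proof):

```python
import math

n, d = 2**30, 2**10
levels = math.log2(n) / math.log2(d)        # log_d n = 3 levels

binary_merkle = math.log2(n)                # 30 sibling hashes
d_ary_merkle = (d - 1) * levels             # (d - 1) * log_d n = 3069 sibling hashes
verkle = levels - 1                         # inner-node commitments only (root is known)

print(binary_merkle, d_ary_merkle, verkle)  # 30.0 3069.0 2.0
```

The factor of $d - 1$ siblings per level is exactly what the vector commitment removes.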

A verkle trie is a trie where the inner nodes are $d$-ary vector commitments to their children, where the $i$-th child contains all nodes whose key has the prefix $i$, written as a $\log_2 d$-bit binary number. As an example, here is a $d=16$ verkle trie with nine nodes inserted:

The root of a leaf node is simply a hash of the (key, value) pair of 32 byte strings, whereas the root of an inner node is the hash of the vector commitment (in KZG, this is a $G_1$ element).
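The shape of such a trie can be sketched in a few lines. In this minimal sketch a SHA-256 hash of the children stands in for the KZG vector commitment at each inner node (a deliberate placeholder, not the real scheme), purely to show how keys are consumed one base-$d$ digit per level; the keys and values are made up:

```python
import hashlib

D = 16  # branching factor; keys are consumed one hex nibble per level

def nibbles(key: bytes):
    """Split a key into base-16 digits, most significant first."""
    out = []
    for b in key:
        out += [b >> 4, b & 0x0F]
    return out

class Leaf:
    def __init__(self, key, value):
        self.key, self.value = key, value
    def root(self):
        # root of a leaf node: hash of the (key, value) pair
        return hashlib.sha256(self.key + self.value).digest()

class Inner:
    def __init__(self):
        self.children = [None] * D
    def root(self):
        # placeholder: a real verkle trie puts a KZG vector commitment here,
        # not a hash of the children
        h = hashlib.sha256()
        for c in self.children:
            h.update(c.root() if c else b"\x00" * 32)
        return h.digest()

def insert(node, key, value, depth=0):
    """Insert a (key, value) pair, splitting leaves while prefixes collide."""
    if node is None:
        return Leaf(key, value)
    if isinstance(node, Leaf):
        if node.key == key:
            return Leaf(key, value)
        branch = Inner()
        branch = insert(branch, node.key, node.value, depth)
        return insert(branch, key, value, depth)
    i = nibbles(key)[depth]
    node.children[i] = insert(node.children[i], key, value, depth + 1)
    return node

# the leaf 0101 0111 1010 1111 -> 1213 from the example, plus one sibling
trie = insert(None, bytes([0x57, 0xAF]), (1213).to_bytes(2, "big"))
trie = insert(trie, bytes([0x57, 0x1D]), (42).to_bytes(2, "big"))
leaf = trie.children[0x5].children[0x7].children[0xA]
print(leaf.value.hex())  # 04bd, i.e. 1213
```

Because both keys share the prefix 0101 0111, the leaf sits two inner nodes below the root, matching the proof path discussed in the next section.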

## Verkle proof for a single leaf

We assume that key and value are known (they have to be provided in any witness scheme). Then for each inner node that the key path crosses, we have to add the commitment to that node to the proof. For example, suppose we want to prove the leaf 0101 0111 1010 1111 -> 1213 in the above example (marked in green). Then we have to give the commitments to Inner node A and Inner node B (both marked in cyan), as the path goes through these nodes. We don’t have to give the Root itself because it is known to the verifier. The Root as well as the leaf itself are marked in green, as they are data that is required for the proof, but is assumed as given and thus is not part of the proof.

Then we need to add a KZG proof for each inner node, proving that the hash of the child is a correct reveal of the KZG commitment. So the proof in this example consists of three KZG evaluation proofs:

• Proof that the root (hash of key and value) of the node 0101 0111 1010 1111 -> 1213 is the evaluation of the commitment of Inner node B at the index 1010
• Proof that the root of Inner node B (hash of the KZG commitment) is the evaluation of the commitment of Inner node A at the index 0111
• Proof that the root of Inner node A (hash of the KZG commitment) is the evaluation of the Root commitment at the index 0101

But does that mean we need to add a Kate proof at each level, so that the complete proof will consist of $\log_d n - 1$ elliptic curve group elements for the commitments [Note the -1 because the root is always known and does not have to be included in proofs] and an additional $\log_d n$ group elements for the reveals?

Fortunately, this is not the case. KZG proofs can be compressed using different schemes to a small constant size, so given any number of inner nodes, the reveals can be proven using a small number of bytes. Even better, given any number of leaves to prove, we only need this small constant-size proof to prove them all together! So the amortized cost is only the total size of the commitments of the inner nodes. Pretty amazing.

In practice, we want a scheme that is very efficient to compute and verify, so we use this scheme. It is not the smallest in size (but still pretty small at 128 bytes total), however it is very efficient to compute and check.

## Average verkle trie depth

My numerical experiments indicate that the average depth (number of inner nodes on the path) of a verkle trie with $n$ random keys inserted is approximately $\log_d n + 2/3$.

For $n=2^{30}$ and $d=2^{10}$, this results in an average trie depth of ca. $3.67$.
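As a sanity check, the estimate can be evaluated directly (the formula is the empirical fit above, not an exact result):

```python
import math

def avg_depth(n, d):
    """Empirical fit for the average number of inner nodes on a path."""
    return math.log(n, d) + 2 / 3

print(round(avg_depth(2**30, 2**10), 2))  # 3.67
```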

## Attacking verkle trie depth

An attacker can attempt to fill up the siblings of an attacked key in order to lengthen the proof path. They only need to insert one key per level to maximise the proof size; for this, however, they have to be able to find hash prefix collisions. Currently it is possible to find prefix collisions of up to 80 bits, indicating that with $d=2^{10}$, up to 8 levels of collisions can be provoked. Note that this is only about twice the average expected depth for $n=2^{30}$ keys, so the attack doesn’t do very much overall.

¹ Note that we could use $f(i) = v_i$ instead, which would seem more intuitive, but this convention allows the use of Fast Fourier Transforms in computing all the polynomials, which is much more efficient.