Verkle trie for Eth1 state
This post is a quick summary of how verkle tries work and how they can be used to make Eth1 stateless. Note that this post is written with the KZG commitment scheme in mind, as it is easy to understand and quite popular, but it can easily be replaced by any other "additively homomorphic" commitment scheme, meaning that it should be possible to compute the commitment to the sum of two polynomials by adding the two commitments.
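Written out, the property we rely on is that for any two polynomials $p$ and $q$,

$$\mathrm{commit}(p) + \mathrm{commit}(q) = \mathrm{commit}(p + q),$$

which KZG satisfies because the commitment (defined below) is linear in the polynomial's coefficients.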
Using KZG as a vector commitment
The KZG (Kate) polynomial commitment scheme's primary functionality is the ability to commit to a polynomial $p(X)$ via a single elliptic curve group element $C = [p(s)]_1$ (for notation see the linked post). We can then open this commitment at any point $z$ by giving the verifier the value $y = p(z)$ as well as a group element $\pi = \left[\frac{p(s) - y}{s - z}\right]_1$, and this proof of the correctness of the value can be checked using a pairing equation: $e(\pi, [s - z]_2) = e(C - [y]_1, [1]_2)$.
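As a concrete (and deliberately insecure, since the demo generates the trusted-setup secret locally) sketch of commit/open/verify, here is a minimal implementation against py_ecc's BLS12-381 arithmetic; the function names and layout are mine, not a standard API:

```python
# Minimal KZG commit/open/verify sketch. INSECURE demo: we know the setup secret.
from py_ecc.optimized_bls12_381 import (
    G1, G2, Z1, add, multiply, neg, pairing, curve_order
)

MODULUS = curve_order

def setup(secret, degree):
    """Powers of the secret in G1 ([s^i]_1) plus [s]_2 for the pairing check."""
    powers_g1 = [multiply(G1, pow(secret, i, MODULUS)) for i in range(degree + 1)]
    return powers_g1, multiply(G2, secret)

def eval_poly(poly, z):
    """Horner evaluation of poly (coefficients, lowest degree first) at z."""
    result = 0
    for coeff in reversed(poly):
        result = (result * z + coeff) % MODULUS
    return result

def commit(powers_g1, poly):
    """C = [p(s)]_1, computed as a linear combination of setup powers."""
    c = Z1
    for coeff, power in zip(poly, powers_g1):
        c = add(c, multiply(power, coeff % MODULUS))
    return c

def open_at(powers_g1, poly, z):
    """y = p(z) and proof pi = [q(s)]_1 with q(X) = (p(X) - y) / (X - z)."""
    y = eval_poly(poly, z)
    # Synthetic division of p(X) - y by (X - z); the constant shift by y only
    # changes the remainder, not the quotient.
    quotient = [0] * (len(poly) - 1)
    carry = 0
    for i in reversed(range(1, len(poly))):
        carry = (poly[i] + carry * z) % MODULUS
        quotient[i - 1] = carry
    return y, commit(powers_g1, quotient)

def verify(s_g2, commitment, z, y, proof):
    """Check e(pi, [s - z]_2) == e(C - [y]_1, [1]_2)."""
    lhs = pairing(add(s_g2, neg(multiply(G2, z))), proof)
    rhs = pairing(G2, add(commitment, neg(multiply(G1, y))))
    return lhs == rhs

# Usage: commit to p(X) = 3 + 2X + X^2 and open it at z = 5 (p(5) = 38).
powers_g1, s_g2 = setup(secret=8927347823478352432985, degree=2)
poly = [3, 2, 1]
C = commit(powers_g1, poly)
y, proof = open_at(powers_g1, poly, 5)
assert verify(s_g2, C, 5, y, proof)
```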
A vector commitment is a commitment scheme that takes as input $n$ different values $a_0, \ldots, a_{n-1}$ and produces a commitment that can be opened at any of these values. As an example, a Merkle tree is a vector commitment, with the property that opening at the $i$-th value requires $\log_2 n$ hashes as a proof.
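To see a Merkle tree through this lens, here is a minimal sketch (helper names are my own; SHA-256 and power-of-two sizes for simplicity):

```python
# A Merkle tree as a vector commitment: opening position i costs log2(n) hashes.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_layers(leaves):
    """Build all tree layers; leaves must be a power-of-two-length list."""
    layers = [[h(leaf) for leaf in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers

def open_at(layers, i):
    """Proof for position i: one sibling hash per level."""
    proof = []
    for layer in layers[:-1]:
        proof.append(layer[i ^ 1])   # sibling index flips the lowest bit
        i //= 2
    return proof

def verify(root, leaf, i, proof):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if i % 2 == 0 else h(sibling + node)
        i //= 2
    return node == root

leaves = [bytes([j]) * 32 for j in range(8)]
layers = merkle_layers(leaves)
root = layers[-1][0]
proof = open_at(layers, 3)           # len(proof) == log2(8) == 3 hashes
assert verify(root, leaves[3], 3, proof)
```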
Let $\omega$ be a primitive $d$-th root of unity, i.e. $\omega^d = 1$, and $\omega^i \neq 1$ for $0 < i < d$.
We can turn the Kate commitment scheme into a vector commitment that allows committing to a vector $a_0, \ldots, a_{d-1}$ of length $d$ by committing to the degree $d - 1$ polynomial $p(X)$ that is defined by $p(\omega^i) = a_i$.[^1]
To open the commitment at any point $\omega^i$, we simply have to compute a Kate proof for the evaluation $p(\omega^i) = a_i$. Fortunately, this proof is constant sized (a single group element): it does not depend on the width $d$. Even better, many of these proofs can be combined into a single proof, which is much cheaper to verify.
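Here is a small sketch of the polynomial step of this construction, interpolating $p$ from the vector via an inverse FFT. It uses a toy prime field so the numbers stay readable; in practice this happens in the BLS12-381 scalar field, but the structure is identical:

```python
# Build the vector-commitment polynomial p with p(w^i) = a_i via an inverse FFT.
MODULUS = 257                              # prime; 257 - 1 = 2^8, so 2^k-th roots exist
d = 8
w = pow(3, (MODULUS - 1) // d, MODULUS)    # 3 generates the multiplicative group mod 257

def fft(values, root):
    """Evaluate the polynomial with coefficients `values` at root^0 .. root^(len-1)."""
    if len(values) == 1:
        return values
    even = fft(values[0::2], root * root % MODULUS)
    odd = fft(values[1::2], root * root % MODULUS)
    out = [0] * len(values)
    power = 1
    for i in range(len(values) // 2):
        t = power * odd[i] % MODULUS
        out[i] = (even[i] + t) % MODULUS
        out[i + len(values) // 2] = (even[i] - t) % MODULUS   # root^(i+d/2) = -root^i
        power = power * root % MODULUS
    return out

def interpolate(evals, root):
    """Inverse FFT: coefficients of the unique degree < d polynomial p(root^i) = evals[i]."""
    inv_d = pow(len(evals), MODULUS - 2, MODULUS)
    coeffs = fft(evals, pow(root, MODULUS - 2, MODULUS))      # FFT at root^-1
    return [c * inv_d % MODULUS for c in coeffs]

a = [5, 0, 11, 7, 1, 2, 3, 4]              # the vector to commit to
poly = interpolate(a, w)                    # p(X) with p(w^i) = a[i]
for i in range(d):
    x = pow(w, i, MODULUS)
    assert sum(c * pow(x, j, MODULUS) for j, c in enumerate(poly)) % MODULUS == a[i]
```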
Introduction to Verkle tries
Verkle is an amalgamation of "vector" and "Merkle": verkle tries are built in a tree-like structure just like Merkle trees, but at each node, instead of a hash of the nodes below (two for binary Merkle trees), they commit to the nodes below using a vector commitment. Naive $d$-ary Merkle trees are inefficient, because each proof has to include all the unaccessed siblings of each node on the path to a leaf. A $d$-ary Merkle tree thus needs $(d - 1) \log_d n$ hashes for a single proof, which is worse than the binary Merkle tree, which only needs $\log_2 n$ hashes. This is because a hash function is a poor vector commitment: a proof requires all the siblings to be given.
Better vector commitments change this equation; by using the KZG polynomial commitment scheme as a vector commitment, each level only requires a constant size proof, so the annoying factor of $d - 1$ that kills $d$-ary Merkle trees disappears.
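To make the factor concrete, here is a quick back-of-the-envelope comparison for a state of $n = 2^{30}$ keys (an illustrative size, not a measured Ethereum state):

```python
# Witness cost per leaf: a d-ary Merkle proof carries (d-1)*log_d(n) sibling
# hashes, while a verkle proof carries only the log_d(n) - 1 inner commitments
# (plus a small constant-size KZG opening proof, discussed below).
import math

n = 2**30
for d in (2, 16, 1024):
    depth = math.log(n, d)
    print(f"d={d:5}: depth ~{depth:5.1f}, "
          f"d-ary Merkle ~{(d - 1) * depth:7.0f} hashes, "
          f"verkle ~{max(depth - 1, 0):4.0f} commitments")
```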
A verkle trie is a trie where the inner nodes are $d$-ary vector commitments to their children, where the $i$-th child contains all nodes with the prefix $i$ as a $\log_2 d$-digit binary number. As an example, here is a verkle trie with $d = 16$ and nine nodes inserted:
The root of a leaf node is simply the hash of the (key, value) pair of 32 byte strings, whereas the root of an inner node is the hash of the vector commitment (in KZG, this is a $\mathbb{G}_1$ element).
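Here is a structural sketch of such a trie; the layout and names are my own minimal illustration (not client code), and `vector_commit` is a hash-based stand-in for the real KZG commitment:

```python
# Structural sketch of a verkle trie: inner nodes commit to up to d children,
# leaves hash their (key, value) pair. `vector_commit` stands in for the real
# KZG commitment C = [p(s)]_1 with p(w^i) = root of child i.
import hashlib
import os

WIDTH_BITS = 4            # log2(d); d = 16 as in the example above
D = 1 << WIDTH_BITS
EMPTY = b"\x00" * 32      # placeholder root for absent children in this sketch

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def vector_commit(children_roots):
    return h(b"".join(children_roots))   # stand-in for the KZG commitment

def key_digit(key: bytes, level: int) -> int:
    """The level-th WIDTH_BITS-bit digit of the key, read from the top."""
    shift = len(key) * 8 - (level + 1) * WIDTH_BITS
    return (int.from_bytes(key, "big") >> shift) & (D - 1)

class Node:
    def __init__(self):
        self.children = {}    # digit -> Node (sparse)
        self.leaf = None      # (key, value) if this node is a leaf

def insert(node, key, value, level=0):
    """Insert a (key, value) pair; assumes all keys are distinct."""
    if node.leaf is None and not node.children:
        node.leaf = (key, value)                  # empty node becomes a leaf
        return
    if node.leaf is not None:                     # leaf becomes an inner node:
        old_key, old_value = node.leaf            # push the old leaf down
        node.leaf = None
        node.children[key_digit(old_key, level)] = old = Node()
        old.leaf = (old_key, old_value)
    child = node.children.setdefault(key_digit(key, level), Node())
    insert(child, key, value, level + 1)

def root(node):
    """Leaf root = hash(key ++ value); inner root = hash of the commitment."""
    if node.leaf is not None:
        return h(node.leaf[0] + node.leaf[1])
    return h(vector_commit([root(node.children[i]) if i in node.children
                            else EMPTY for i in range(D)]))

trie = Node()
for _ in range(9):                # nine random (key, value) pairs, as above
    insert(trie, os.urandom(32), os.urandom(32))
print(root(trie).hex())
```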
Verkle proof for a single leaf
We assume that key and value are known (they have to be provided in any witness scheme). Then for each inner node that the key path crosses, we have to add the commitment to that node to the proof. For example, let's say we want to prove the leaf `0101 0111 1010 1111 -> 1213` in the above example (marked in green). Then we have to give the commitments to `Inner node A` and `Inner node B` (both marked in cyan), as the path goes through these nodes. We don't have to give the `Root` itself because it is known to the verifier. The `Root` as well as the leaf itself are marked in green, as they are data that is required for the proof, but is assumed as given and thus is not part of the proof.
Then we need to add a KZG proof for each inner node, proving that the hash of the child is a correct reveal of the KZG commitment. So the proof in this example consists of three KZG evaluation proofs (a structural sketch follows the list):
- Proof that the root (hash of key and value) of the node `0101 0111 1010 1111 -> 1213` is the evaluation of the commitment of `Inner node B` at the index `1010`
- Proof that the root of `Inner node B` (hash of the KZG commitment) is the evaluation of the commitment of `Inner node A` at the index `0111`
- Proof that the root of `Inner node A` (hash of the KZG commitment) is the evaluation of the `Root` commitment at the index `0101`
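Here is the structural sketch promised above: what the witness for a single leaf carries, and which (commitment, index, value) openings the verifier must check. The `VerkleProof` layout and helper names are hypothetical illustrations, not the actual Ethereum witness format, and the KZG checks themselves are left abstract:

```python
# Assembling the per-level openings a verifier must check for one leaf.
import hashlib
from dataclasses import dataclass

def hash32(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

@dataclass
class VerkleProof:
    key: bytes            # assumed known to the verifier
    value: bytes          # assumed known to the verifier
    commitments: list     # path commitments, top-down, root excluded
    kzg_proof: bytes      # the evaluation proof(s); can be aggregated (see below)

def required_openings(root_commitment: bytes, proof: VerkleProof, width_bits=4):
    """The (commitment, index, claimed evaluation) triples to check against
    proof.kzg_proof. Each child's root is the hash of the next commitment,
    except the last, which is hash(key ++ value)."""
    path = [root_commitment] + proof.commitments
    key_int = int.from_bytes(proof.key, "big")
    total_bits = len(proof.key) * 8
    triples = []
    for level, commitment in enumerate(path):
        index = (key_int >> (total_bits - (level + 1) * width_bits)) % (1 << width_bits)
        child_root = (hash32(proof.key + proof.value) if level == len(path) - 1
                      else hash32(path[level + 1]))
        triples.append((commitment, index, child_root))
    return triples

# For the example leaf: two path commitments (A, B) plus the known root give
# exactly the three openings listed above, at indices 0101, 0111 and 1010.
key = int("0101011110101111", 2).to_bytes(2, "big")
proof = VerkleProof(key, b"\x12\x13", [b"commitA", b"commitB"], b"pi")
for commitment, index, child_root in required_openings(b"rootC", proof):
    print(commitment, f"{index:04b}", child_root.hex()[:8])
```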
But does that mean we need to add a Kate proof at each level, so that the complete proof will consist of $\log_d n - 1$ elliptic curve group elements for the commitments [note the $-1$ because the root is always known and does not have to be included in proofs] and an additional $\log_d n$ group elements for the reveals?
Fortunately, this is not the case. KZG proofs can be compressed using different schemes to a small constant size, so given any number of inner nodes, the proof can be done using a small number of bytes. Even better, given any number of leaves to prove, we only need this small size proof to prove them altogether! So the amortized cost is only the total size of the commitments of the inner nodes. Pretty amazing.
In practice, we want a scheme that is very efficient to compute and verify. The multipoint scheme we use is not the smallest in size (but still pretty small at 128 bytes total); however, it is very efficient to compute and check.
Average verkle trie depth
My numerical experiments indicate that the average depth (number of inner nodes on the path) of a verkle trie with $n$ random keys inserted is approximately $\log_d n + 1$.
For $d = 1024$ and $n = 2^{30}$, this results in an average trie depth of ca. $4$.
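This is easy to reproduce at a smaller scale; here is a minimal simulation sketch of my own, with $n$ scaled down from $2^{30}$ so it runs in seconds:

```python
# Empirical check of the depth claim: average number of inner nodes above each
# leaf for n random keys in a d-ary trie. A leaf's depth is one more than its
# longest common digit-prefix with any other key, which for sorted keys is
# attained at one of its two neighbours.
import random

def average_depth(n, width_bits, key_bits=128):
    keys = sorted(random.getrandbits(key_bits) for _ in range(n))
    digits = key_bits // width_bits

    def common_digits(a, b):
        shared = 0
        for i in range(digits):
            shift = key_bits - (i + 1) * width_bits
            if (a >> shift) % (1 << width_bits) != (b >> shift) % (1 << width_bits):
                break
            shared += 1
        return shared

    total = 0
    for i, key in enumerate(keys):
        lcp = 0
        if i > 0:
            lcp = max(lcp, common_digits(key, keys[i - 1]))
        if i < n - 1:
            lcp = max(lcp, common_digits(key, keys[i + 1]))
        total += lcp + 1
    return total / n

# d = 1024, i.e. 10-bit digits; n scaled down from 2**30 for a quick run.
for n in (2**10, 2**14, 2**18):
    print(n, round(average_depth(n, 10), 2))
```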
Attacking verkle trie depth
An attacker can attempt to fill up the siblings of an attacked key in order to lengthen the proof path. They only need to insert one key per level in order to maximise the proof size; for this, however, they have to be able to find hash prefix collisions. Currently it is possible to find prefix collisions of up to 80 bits, indicating that with $d = 1024$ (10-bit digits), up to 8 levels of collisions can be provoked. Note that this is only about twice the average expected depth for $n = 2^{30}$ keys, so the attack doesn't do very much overall.
[^1]: Note that we could use $p(i) = a_i$ instead, which would seem more intuitive, but this convention allows the use of Fast Fourier Transforms in computing all the polynomials, which is much more efficient.