
Introduction

You might have heard of Bulletproofs: it's a type of zero-knowledge proof that is used, for example, by Monero, and that does not require a trusted setup. The core of this proof system is the Inner Product Argument [1], a trick that allows a prover to convince a verifier of the correctness of an "inner product". An inner product is the sum of the component-by-component products of two vectors:

$$\mathbf{a} \cdot \mathbf{b} = \sum_{i=0}^{n-1} a_i b_i$$

where $\mathbf{a} = (a_0, a_1, \dots, a_{n-1})$ and $\mathbf{b} = (b_0, b_1, \dots, b_{n-1})$.

One interesting case is where we set the vector $\mathbf{b}$ to be the powers of some number $z$, i.e. $\mathbf{b} = (1, z, z^2, \dots, z^{n-1})$. Then the inner product becomes the evaluation of the polynomial

$$f(X) = a_0 + a_1 X + a_2 X^2 + \dots + a_{n-1} X^{n-1}$$

at $z$.
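
To make this concrete, here is a quick sanity check in plain Python (toy numbers, nothing cryptographic):

```python
# The inner product of the coefficient vector with the powers of z
# is exactly the evaluation of the polynomial at z.
a = [3, 1, 4, 1, 5, 9, 2, 6]        # coefficients a_0, ..., a_{n-1}
z = 7
b = [z**i for i in range(len(a))]   # b = (1, z, z^2, ..., z^{n-1})

inner_product = sum(a_i * b_i for a_i, b_i in zip(a, b))
evaluation = sum(a_i * z**i for i, a_i in enumerate(a))  # f(z)

assert inner_product == evaluation
```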

Inner Product Arguments work on Pedersen commitments. I have previously written about KZG commitments, and Pedersen commitments are similar in that the commitment is an element of an elliptic curve group. A difference, however, is that they do not require a trusted setup. Here is a comparison of the KZG commitment scheme and of Pedersen commitments combined with an Inner Product Argument as a Polynomial Commitment Scheme (PCS):

                    Pedersen+IPA                  KZG
  Assumption        Discrete log                  Bilinear group
  Trusted setup     No                            Yes
  Commitment size   1 group element               1 group element
  Proof size        $2 \log_2 n$ group elements   1 group element
  Verification      $O(n)$ group operations       1 pairing

Basically, compared to KZG commitments, our commitment scheme is less efficient. Proofs are larger ($2 \log_2 n$ group elements instead of one), which wouldn't be the end of the world, as logarithmic is still very small. But unfortunately, the verifier has to do a linear amount of work, so the proofs are not succinct. This makes them impractical for some applications. However, in some cases this can be worked around.

  • One example is my writeup on multiopenings. In this case, the trick is that you can aggregate many openings into a single one.
  • Another is the Halo system [2], where the linear cost of many openings is aggregated.

In both of these examples, the trick is to amortize many openings. If you only want to open a single polynomial, though, it's tough and you have to incur the full cost.

However, the big advantage is that Pedersen commitments and Inner Product Arguments come with much weaker assumptions: in particular, no pairing is needed, and they don't require a trusted setup.

Pedersen commitments

Before we can discuss Inner Product Arguments, we need to discuss the data structure that they operate on: Pedersen commitments. In order to use Pedersen commitments, we need an elliptic curve group $\mathbb{G}$. Let's quickly remind ourselves what you can do in an elliptic curve group (I will use additive notation because I think it is the more natural one):

  1. You can add two elliptic curve elements $A$ and $B$: $A + B \in \mathbb{G}$.
  2. You can multiply an element $G$ with a scalar $a \in \mathbb{F}_p$, where $p$ is the curve order of $\mathbb{G}$ (i.e. the number of elements): $[a]G \in \mathbb{G}$.

There is no way to compute the "product" of two curve elements: the operation "$\cdot$" is not defined on $\mathbb{G}$, so you cannot compute "$A \cdot B$"; as opposed to multiplying by a scalar, so $[ab]G = [a]([b]G)$, for example, is easy to compute.

Another important property is that there is no efficient algorithm to compute "discrete logarithms". This means that given $A$ and $B$ with the property that $B = [x]A$, if you don't know $x$ already, it is computationally infeasible to find it. We call $x$ the discrete logarithm of $B$ with respect to $A$.

Pedersen commitments make use of this infeasibility to construct a commitment scheme. Let's say you have two points $G$ and $H$, and their discrete logarithm with respect to each other (i.e. the $x$ such that $H = [x]G$) is unknown. Then we can commit to two numbers $a, b \in \mathbb{F}_p$:

$$C = [a]G + [b]H$$

$C$ is an element of the elliptic curve group $\mathbb{G}$.

To reveal the commitment, the prover gives the verifier the numbers $a$ and $b$. The verifier computes $[a]G + [b]H$ and accepts if the result matches $C$.

The central property of a commitment scheme is that it is binding. So given $a, b$, could a cheating prover come up with $a', b'$ such that the verifier will accept them, i.e. such that $[a']G + [b']H = [a]G + [b]H$, but with $(a', b') \neq (a, b)$?

If someone can do this, then they can also find the discrete logarithm. Here is why: we know that $[a]G + [b]H = [a']G + [b']H$, and by regrouping the terms on both sides of the equation we get

$$[a - a']G = [b' - b]H$$

Either $a - a'$ or $b' - b$ has to be non-zero. Let's say it's $b' - b$ (if $b' - b = 0$, then $[a - a']G = 0$ forces $a = a'$ as well); then we get:

$$H = [x]G$$

for $x = (a - a')(b' - b)^{-1}$. Thus we've found the discrete logarithm $x$ of $H$ with respect to $G$. Since we know this is a hard problem, in practice no attacker can do this.

This means it's computationally infeasible for an attacker to find an alternative reveal $(a', b')$ for the commitment $C$. (Alternative reveals definitely do exist, they are just computationally infeasible to find – similar to collisions of a hash function.)
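
Here is a minimal sketch of the commit/reveal mechanics, using the multiplicative group modulo a small prime as a stand-in for the elliptic curve group (the parameters and numbers are toy assumptions, completely insecure, and only illustrate the mechanics):

```python
# Toy Pedersen commitment. In multiplicative notation, [a]G + [b]H
# becomes g^a * h^b mod p.
p = 101                  # modulus; g generates a subgroup of order 25
g = 16                   # 2^4 mod 101, an element of order 25
h = pow(g, 7, p)         # in a real scheme the discrete log of h w.r.t. g
                         # must be unknown; we cheat here for brevity

def commit(a, b):
    return (pow(g, a, p) * pow(h, b, p)) % p

def verify(C, a, b):
    return C == commit(a, b)

C = commit(5, 3)
assert verify(C, 5, 3)       # the honest reveal is accepted
assert not verify(C, 5, 4)   # finding another valid (a, b) would require
                             # computing a discrete logarithm
```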

We can generalize this and commit to a vector, i.e. a list of scalars $a_0, a_1, \dots, a_{n-1}$. We just need a "basis", i.e. an equal number of group elements $G_0, G_1, \dots, G_{n-1}$ that don't have known discrete logarithms between them. Then we can compute the commitment

$$C = [a_0]G_0 + [a_1]G_1 + \dots + [a_{n-1}]G_{n-1}$$

This gives us a vector commitment, although with quite a bad complexity: in order to reveal any element, all elements of the vector have to be revealed. But there is one redeeming property: the commitment scheme is additively homomorphic. This means that if we have another commitment $C' = [a'_0]G_0 + [a'_1]G_1 + \dots + [a'_{n-1}]G_{n-1}$, then it's possible to just add the two commitments to get a new commitment to the sum of the two vectors $\mathbf{a} = (a_0, \dots, a_{n-1})$ and $\mathbf{a}' = (a'_0, \dots, a'_{n-1})$:

$$C + C' = \sum_{i=0}^{n-1} [a_i + a'_i] G_i$$

Thanks to this additively homomorphic property, this vector commitment actually turns out to be useful.
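
Continuing with the same toy stand-in group (again, illustrative parameters only), a sketch of the vector commitment and its homomorphic property:

```python
# Vector Pedersen commitment in the toy group, demonstrating that adding
# commitments commits to the sum of the vectors.
p, q = 101, 25                                # modulus and subgroup order
g = 16                                        # order-25 element mod 101
basis = [pow(g, e, p) for e in (1, 4, 6, 10)] # toy "basis" G_0..G_3; the
                                              # exponents are known here only
                                              # for brevity

def commit(vec):
    C = 1
    for a_i, G_i in zip(vec, basis):          # [a_i]G_i becomes G_i^(a_i)
        C = (C * pow(G_i, a_i, p)) % p
    return C

a1 = [3, 1, 4, 1]
a2 = [2, 7, 1, 8]
a_sum = [(x + y) % q for x, y in zip(a1, a2)]

assert (commit(a1) * commit(a2)) % p == commit(a_sum)
```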

Inner Product Argument

The basic strategy of the Inner Product Argument is “divide and conquer”: Take the problem and instead of completely solving it, turn it into a smaller one of the same type. At some point, it becomes so small that you can simply reveal everything and prove that the instance is correct.

At each step, the problem size halves. This ensures that after $\log_2 n$ steps, the problem is reduced to size one, so it can be proved trivially.

The idea is that we want to prove that a commitment $C$ is of the form

$$C = \mathbf{a} \cdot \mathbf{G} + \mathbf{b} \cdot \mathbf{H} + [\mathbf{a} \cdot \mathbf{b}]Q$$

where $\mathbf{a}, \mathbf{b} \in \mathbb{F}_p^n$ and $\mathbf{G}, \mathbf{H} \in \mathbb{G}^n$ as well as $Q \in \mathbb{G}$ are our "basis", i.e. they are group elements in $\mathbb{G}$ and none of their discrete logarithms with respect to each other are known. We also introduced the new notation $\mathbf{a} \cdot \mathbf{G}$ for a product between a vector of scalars ($\mathbf{a} \in \mathbb{F}_p^n$) and a vector of group elements ($\mathbf{G} \in \mathbb{G}^n$), and it is defined as

$$\mathbf{a} \cdot \mathbf{G} = \sum_{i=0}^{n-1} [a_i] G_i$$

So essentially, we are proving that $C$ is a commitment to

  • a vector $\mathbf{a}$ with basis $\mathbf{G}$,
  • a vector $\mathbf{b}$ with basis $\mathbf{H}$, and
  • their inner product $\mathbf{a} \cdot \mathbf{b}$ with respect to the basis $Q$.

This in itself does not seem very useful – in most applications we want the verifier to know the inner product $\mathbf{a} \cdot \mathbf{b}$, and not just have it hidden in some commitment. But this can be remedied with a small trick which I will come to below.

The argument

We want the prover to convince the verifier that $C$ is of the form $\mathbf{a} \cdot \mathbf{G} + \mathbf{b} \cdot \mathbf{H} + [\mathbf{a} \cdot \mathbf{b}]Q$. As I mentioned before, instead of doing this outright, we will only reduce the problem by computing another commitment $C'$ in such a way that if the property holds for $C'$, then it also holds for $C$.

In order to do this, the prover and the verifier play a little game. The prover commits to certain properties, after which the verifier sends a challenge, which leads to the next commitment $C'$. Describing it as a game does not mean the proof has to be interactive, though: the Fiat–Shamir construction allows us to turn interactive proofs into non-interactive ones, by replacing the challenge with a collision-resistant hash of the commitments.
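
As a minimal sketch of this, the challenge can be derived by hashing the transcript so far; the `serialize` function in the usage comment is a placeholder assumption for any fixed, unambiguous encoding of the commitments:

```python
# Deriving a Fiat-Shamir challenge from the transcript.
import hashlib

def fiat_shamir_challenge(transcript: bytes, modulus: int) -> int:
    # hash the transcript and interpret the digest as a field element
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % modulus

# e.g. alpha = fiat_shamir_challenge(serialize(C_L) + serialize(C_R), p)
```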

Statement to prove

The commitment $C$ has the form $C = \mathbf{a} \cdot \mathbf{G} + \mathbf{b} \cdot \mathbf{H} + [\mathbf{a} \cdot \mathbf{b}]Q$ with respect to the basis given by $\mathbf{G}$, $\mathbf{H}$ and $Q$. We call the fact that $C$ has this form the "Inner Product Property".

Reduction step

Let

$$n' = \frac{n}{2}$$

The prover computes

$$z_L = \mathbf{a}_R \cdot \mathbf{b}_L \qquad z_R = \mathbf{a}_L \cdot \mathbf{b}_R$$

where we've defined $\mathbf{a}_L = (a_0, \dots, a_{n'-1})$ as the "left half" of the vector $\mathbf{a}$ and $\mathbf{a}_R = (a_{n'}, \dots, a_{n-1})$ as the "right half", and analogously $\mathbf{b}_L$ and $\mathbf{b}_R$ for $\mathbf{b}$.

Then the prover computes the following commitments:

$$C_L = \mathbf{a}_R \cdot \mathbf{G}_L + \mathbf{b}_L \cdot \mathbf{H}_R + [z_L]Q$$
$$C_R = \mathbf{a}_L \cdot \mathbf{G}_R + \mathbf{b}_R \cdot \mathbf{H}_L + [z_R]Q$$

and sends them to the verifier. Then the verifier sends the challenge $\alpha$ (when using the Fiat–Shamir construction to make this non-interactive, this means that $\alpha$ would be the hash of $C_L$ and $C_R$). The prover uses this to compute the updated vectors

$$\mathbf{a}' = \mathbf{a}_L + \alpha \mathbf{a}_R \qquad \mathbf{b}' = \mathbf{b}_L + \alpha^{-1} \mathbf{b}_R$$

which have half the length.

Now the verifier computes the new commitment:

$$C' = [\alpha] C_L + C + [\alpha^{-1}] C_R$$

as well as the updated basis

$$\mathbf{G}' = \mathbf{G}_L + [\alpha^{-1}] \mathbf{G}_R \qquad \mathbf{H}' = \mathbf{H}_L + [\alpha] \mathbf{H}_R$$

(where the additions are element by element). Now, if the new commitment $C'$ has the property that it is of the form $C' = \mathbf{a}' \cdot \mathbf{G}' + \mathbf{b}' \cdot \mathbf{H}' + [\mathbf{a}' \cdot \mathbf{b}']Q$, then the commitment $C$ fulfills the original claim.

All the vectors have halved in size – so we have achieved something. From here, we replace $C \leftarrow C'$, $\mathbf{a} \leftarrow \mathbf{a}'$, $\mathbf{b} \leftarrow \mathbf{b}'$, $\mathbf{G} \leftarrow \mathbf{G}'$, $\mathbf{H} \leftarrow \mathbf{H}'$ and $n \leftarrow n'$, and repeat this step.

I will go through the maths of why this works below, but Vitalik has also made a nice visual representation that I recommend for getting an intuition.
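
The group operations aside, the scalar bookkeeping of one reduction round can be checked directly. The following sketch (toy prime field, made-up challenge) verifies the identity $\mathbf{a}' \cdot \mathbf{b}' = \mathbf{a} \cdot \mathbf{b} + \alpha z_L + \alpha^{-1} z_R$ that the commitment update relies on:

```python
# One folding round, field arithmetic only (the commitments are omitted).
P = 2**61 - 1                                 # toy prime field

def ip(u, v):                                 # inner product mod P
    return sum(x * y for x, y in zip(u, v)) % P

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
alpha = 123456789
alpha_inv = pow(alpha, -1, P)                 # modular inverse (Python 3.8+)

h = len(a) // 2
z_L = ip(a[h:], b[:h])                        # a_R . b_L
z_R = ip(a[:h], b[h:])                        # a_L . b_R

a_next = [(a[i] + alpha * a[h + i]) % P for i in range(h)]      # a_L + alpha * a_R
b_next = [(b[i] + alpha_inv * b[h + i]) % P for i in range(h)]  # b_L + alpha^(-1) * b_R

assert ip(a_next, b_next) == (ip(a, b) + alpha * z_L + alpha_inv * z_R) % P
```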

Final step

When we repeat the step above, $n$ is halved each time. At some point, we will reach $n = 1$. At this point we don't repeat the step anymore. Instead, the prover sends $a$ and $b$, which are now only a single scalar each. The verifier can then simply compute

$$C^* = [a]G + [b]H + [ab]Q$$

and accept the statement if this is indeed equal to $C$, or reject if it is not.
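
Putting the rounds together, here is the field-arithmetic side of the whole reduction (stand-in challenges instead of Fiat–Shamir, group elements omitted), ending in the length-one check just described:

```python
# Fold until the vectors are single scalars; the tracked inner product z
# must then equal a * b.
P = 2**61 - 1

def ip(u, v):
    return sum(x * y for x, y in zip(u, v)) % P

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [1, 6, 1, 8, 0, 3, 3, 9]
z = ip(a, b)                                  # the claimed inner product

round_no = 0
while len(a) > 1:
    round_no += 1
    h = len(a) // 2
    alpha = 1000 + round_no                   # stand-in for a random challenge
    alpha_inv = pow(alpha, -1, P)
    # z_L and z_R are committed in C_L, C_R; here we track only the scalars
    z = (z + alpha * ip(a[h:], b[:h]) + alpha_inv * ip(a[:h], b[h:])) % P
    a = [(a[i] + alpha * a[h + i]) % P for i in range(h)]
    b = [(b[i] + alpha_inv * b[h + i]) % P for i in range(h)]

# final step: the verifier checks the length-1 opening
assert z == (a[0] * b[0]) % P
```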

Correctness and soundness

Above I claimed that if $C$ has the desired form, then it follows that $C'$ also has it. I now want to show why this is the case. In order to do this, we need to look at two things:

  • Correctness – i.e. a prover who follows the protocol can always convince the verifier that a correct statement is correct; and
  • Soundness – i.e. a dishonest prover cannot convince the verifier of an incorrect statement, except with a very small probability.

Let's start with correctness. This assumes that the prover is doing everything according to the protocol. Since the prover is following the protocol, we know that $C = \mathbf{a} \cdot \mathbf{G} + \mathbf{b} \cdot \mathbf{H} + [\mathbf{a} \cdot \mathbf{b}]Q$ with respect to the basis given by $\mathbf{G}$, $\mathbf{H}$ and $Q$. We need to show that then $C' = \mathbf{a}' \cdot \mathbf{G}' + \mathbf{b}' \cdot \mathbf{H}' + [\mathbf{a}' \cdot \mathbf{b}']Q$.

The verifier computes $C' = [\alpha] C_L + C + [\alpha^{-1}] C_R$.

So in order for the commitment $C'$ to have the Inner Product Property, we need to verify that $[\alpha] C_L + C + [\alpha^{-1}] C_R = \mathbf{a}' \cdot \mathbf{G}' + \mathbf{b}' \cdot \mathbf{H}' + [\mathbf{a}' \cdot \mathbf{b}']Q$. This is true because

$$\begin{aligned}
\mathbf{a}' \cdot \mathbf{G}' + \mathbf{b}' \cdot \mathbf{H}' + [\mathbf{a}' \cdot \mathbf{b}']Q
&= (\mathbf{a}_L + \alpha \mathbf{a}_R) \cdot (\mathbf{G}_L + [\alpha^{-1}] \mathbf{G}_R) \\
&\quad + (\mathbf{b}_L + \alpha^{-1} \mathbf{b}_R) \cdot (\mathbf{H}_L + [\alpha] \mathbf{H}_R) \\
&\quad + [(\mathbf{a}_L + \alpha \mathbf{a}_R) \cdot (\mathbf{b}_L + \alpha^{-1} \mathbf{b}_R)]Q \\
&= \mathbf{a} \cdot \mathbf{G} + \mathbf{b} \cdot \mathbf{H} + [\mathbf{a} \cdot \mathbf{b}]Q \\
&\quad + [\alpha]\left(\mathbf{a}_R \cdot \mathbf{G}_L + \mathbf{b}_L \cdot \mathbf{H}_R + [z_L]Q\right) \\
&\quad + [\alpha^{-1}]\left(\mathbf{a}_L \cdot \mathbf{G}_R + \mathbf{b}_R \cdot \mathbf{H}_L + [z_R]Q\right) \\
&= C + [\alpha] C_L + [\alpha^{-1}] C_R
\end{aligned}$$
This concludes the proof of correctness. Now, in order to prove soundness, we need the property that a prover can't start with a commitment $C$ that does not fulfill the Inner Product Property and, by going through the reduction step, end up with a $C'$ that does.

So let's assume that the prover committed to $C = \mathbf{a} \cdot \mathbf{G} + \mathbf{b} \cdot \mathbf{H} + [z]Q$ for some $z \neq \mathbf{a} \cdot \mathbf{b}$. If we go through the same process as before, we find

$$C' = [\alpha] C_L + C + [\alpha^{-1}] C_R = \mathbf{a}' \cdot \mathbf{G}' + \mathbf{b}' \cdot \mathbf{H}' + [z + \alpha z_L + \alpha^{-1} z_R]Q$$

So now let's assume that the prover managed to cheat, and thus $C'$ fulfills the Inner Product Property. That means that

$$z + \alpha z_L + \alpha^{-1} z_R = \mathbf{a}' \cdot \mathbf{b}'$$

Expanding the right hand side, we get

$$\mathbf{a}' \cdot \mathbf{b}' = \mathbf{a} \cdot \mathbf{b} + \alpha \, \mathbf{a}_R \cdot \mathbf{b}_L + \alpha^{-1} \, \mathbf{a}_L \cdot \mathbf{b}_R$$

Note that the prover can choose $z_L$ and $z_R$ freely, so we cannot assume that they were computed according to the above definitions.

Multiplying by $\alpha$ and moving everything to one side, we get a quadratic equation in $\alpha$:

$$(z_L - \mathbf{a}_R \cdot \mathbf{b}_L)\,\alpha^2 + (z - \mathbf{a} \cdot \mathbf{b})\,\alpha + (z_R - \mathbf{a}_L \cdot \mathbf{b}_R) = 0$$

Unless all the coefficients are zero – which would imply $z = \mathbf{a} \cdot \mathbf{b}$, contradicting our assumption – this equation has at most two solutions $\alpha$. But the verifier chooses $\alpha$ only after the prover has already committed to their values $z$, $z_L$ and $z_R$. The probability that the prover can successfully cheat is thus extremely small: we typically choose the field to be of size ca. $2^{256}$, so the probability that the verifier chooses a value for $\alpha$ such that this equation holds, when the values were not chosen according to the protocol, is vanishingly small.

This concludes the soundness proof.

Only compute basis changes at the end

The verifier has to do two things each round: compute the challenge $\alpha_j$ and compute the updated bases $\mathbf{G}'$ and $\mathbf{H}'$. However, updating the basis at every round is inefficient. Instead, the verifier can simply keep track of the challenge values $\alpha_1$, $\alpha_2$, up to $\alpha_{\log_2 n}$ that they encounter during the rounds.

Let's call $\mathbf{G}^{(j)}$ the basis after round $j$ (so $\mathbf{G}^{(0)} = \mathbf{G}$, and similarly for $\mathbf{H}$). The final elements $G = \mathbf{G}^{(\log_2 n)}$ and $H = \mathbf{H}^{(\log_2 n)}$ are single group elements (or vectors of length one), because we end the protocol once our vectors have reached length one. Computing $G$ from $\mathbf{G}$ is a multiscalar multiplication (MSM) of length $n$. The scalar factors for $G$ are the coefficients of the polynomial

$$g(X) = \prod_{j=1}^{\log_2 n} \left(1 + \alpha_j^{-1} X^{2^{\log_2 n - j}}\right)$$

and the scalar factors for $H$ are given by the coefficients of

$$h(X) = \prod_{j=1}^{\log_2 n} \left(1 + \alpha_j X^{2^{\log_2 n - j}}\right)$$
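
The following sketch (scalars standing in for the group elements, made-up challenges) checks that the iterated folding and a single MSM with the coefficients of $g(X)$ agree:

```python
# Deferred basis change: fold G round by round with alpha_j^(-1), then
# compare against one MSM with the coefficients of g(X).
P = 2**61 - 1
G = [3, 1, 4, 1, 5, 9, 2, 6]                  # stand-in "basis", n = 8
alphas = [1001, 1002, 1003]                   # challenges alpha_1..alpha_3

# iterated folding: G' = G_L + alpha^(-1) * G_R in every round
folded = G[:]
for alpha in alphas:
    h = len(folded) // 2
    inv = pow(alpha, -1, P)
    folded = [(folded[i] + inv * folded[h + i]) % P for i in range(h)]

# coefficients of g(X), built low degree first: the challenge of round j
# controls bit log2(n) - j of the index
coeffs = [1]
for alpha in reversed(alphas):
    inv = pow(alpha, -1, P)
    coeffs = coeffs + [(inv * c) % P for c in coeffs]

msm = sum(c * g_i for c, g_i in zip(coeffs, G)) % P
assert folded[0] == msm
```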

Using Inner Product Arguments to evaluate polynomials

For our main application – evaluating a polynomial defined by $f(X) = \sum_{i=0}^{n-1} a_i X^i$ at a point $z$ – we want to make some small additions to this protocol:

  • Most importantly, we want the verifier to learn the result $v = f(z)$, and not just that $C$ has the "Inner Product Property".
  • $\mathbf{b} = (1, z, z^2, \dots, z^{n-1})$ is known to the verifier. We can thus make things a bit easier by removing it from the commitment.

How to construct the commitment

If we want to verify a polynomial evaluation $f(z) = v$ for the polynomial $f(X) = \sum_{i=0}^{n-1} a_i X^i$, then we are typically working from a commitment $C_0 = \mathbf{a} \cdot \mathbf{G}$ to the coefficients. The prover would send the verifier the evaluation $v$.

So it seems like the verifier can just compute the initial commitment $C = C_0 + [v]Q$, since they know $v$, and start the protocol.

But not so fast. In most cases, $C_0$ will be a commitment that is generated by the prover. A malicious prover could cheat by, for example, committing to $C_0 = \mathbf{a} \cdot \mathbf{G} + [w]Q$. In this case, they would be able to "prove" the evaluation $v = \mathbf{a} \cdot \mathbf{b} - w$ instead of the correct $\mathbf{a} \cdot \mathbf{b}$, because they have effectively shifted the result by $w$.

To prevent this, we need to make a small change to the protocol. After receiving the commitment $C_0$ and the evaluation $v$, the verifier generates a random scalar $\beta$ and rescales the basis element $Q \leftarrow [\beta]Q$, using the rescaled $Q$ both in $C = C_0 + [v]Q$ and in the rest of the protocol. Afterwards the protocol can proceed as usual. Because the prover can't predict $\beta$, they can't succeed (except with very small probability) at manipulating the result to be anything other than $\mathbf{a} \cdot \mathbf{b}$.

Note that we would also need to stop the prover from manipulating the vector $\mathbf{b}$ if what we wanted was a generic inner product – but for a polynomial evaluation, we can simply get rid of that part altogether, so I won't go into the details.

How to get rid of the second vector

Note that the verifier knows the vector $\mathbf{b} = (1, z, z^2, \dots, z^{n-1})$ if what we want is to compute a polynomial evaluation. Given the challenges $\alpha_1, \dots, \alpha_{\log_2 n}$, they can simply compute the final folded value $b$ using the same technique as demonstrated in "Only compute basis changes at the end".

We can thus remove the second vector from all commitments and simply compute $C = \mathbf{a} \cdot \mathbf{G} + [\mathbf{a} \cdot \mathbf{b}]Q$. This means the verifier has to be able to compute the final version $b$ from the initial vector $\mathbf{b}$. Since the folding process for $\mathbf{b}$ is the same as that for the basis vector $\mathbf{G}$ (both combine the halves using $\alpha_j^{-1}$), the coefficients of the previously defined polynomial $g(X)$ will define the linear combination; in other words, $b = g(z)$.
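
A quick numeric confirmation of $b = g(z)$ (same stand-in parameters as before):

```python
# Folding the vector (1, z, ..., z^(n-1)) with the challenges alpha_j^(-1)
# ends in the single scalar g(z).
P = 2**61 - 1
z = 5
n = 8
alphas = [1001, 1002, 1003]

b = [pow(z, i, P) for i in range(n)]          # b = (1, z, ..., z^(n-1))
remaining = alphas[:]
while len(b) > 1:
    h = len(b) // 2
    inv = pow(remaining.pop(0), -1, P)        # consume challenges in order
    b = [(b[i] + inv * b[h + i]) % P for i in range(h)]

# g(z) = prod_j (1 + alpha_j^(-1) * z^(2^(log2(n) - j)))
g_z = 1
for j, alpha in enumerate(alphas, start=1):
    inv = pow(alpha, -1, P)
    g_z = g_z * (1 + inv * pow(z, 2 ** (3 - j), P)) % P

assert b[0] == g_z
```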

Creating an IPA for a polynomial in evaluation form

So far, we have used an Inner Product Argument to evaluate a polynomial that is committed to by its coefficients, i.e. the $a_i$ for a polynomial defined by $f(X) = \sum_{i=0}^{n-1} a_i X^i$. However, often we want to work with a polynomial that is defined by its evaluations on a domain $x_0, x_1, \dots, x_{n-1}$. Since any polynomial of degree less than $n$ is uniquely defined by the $n$ evaluations $f(x_0), f(x_1), \dots, f(x_{n-1})$, these two representations are completely equivalent. However, transforming between the two can be computationally expensive: it costs $O(n \log n)$ operations if the domain admits an efficient Fast Fourier Transform, and otherwise it's $O(n^2)$.

To avoid this cost, we try to simply never change to coefficient form. This can be done by changing the commitment to $f$: we commit to the evaluations instead of the coefficients:

$$C = \sum_{i=0}^{n-1} [f(x_i)] G_i$$

This means that our vector $\mathbf{a}$ in the IPA is now given by

$$\mathbf{a} = (f(x_0), f(x_1), \dots, f(x_{n-1}))$$

The barycentric formula now allows us to compute an IPA to evaluate a polynomial using this new commitment. It says that

$$f(z) = A(z) \sum_{i=0}^{n-1} \frac{f(x_i)}{A'(x_i)(z - x_i)} \qquad \text{where } A(X) = \prod_{i=0}^{n-1} (X - x_i)$$

If we choose the vector $\mathbf{b}$ to be

$$b_i = \frac{A(z)}{A'(x_i)(z - x_i)}$$

we get that $\mathbf{a} \cdot \mathbf{b} = f(z)$, and thus an IPA with this vector can be used to prove the evaluation of a polynomial which is itself in evaluation form. Other than this, the strategy is exactly the same.
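
A numeric check of this identity over a toy prime field (made-up domain and polynomial):

```python
# With b_i = A(z) / (A'(x_i) * (z - x_i)), the inner product of the
# evaluations of f with b recovers f(z).
P = 2**61 - 1
xs = [1, 2, 3, 4]                     # the domain x_0, ..., x_{n-1}
coeffs = [5, 0, 3, 2]                 # f(X) = 5 + 3X^2 + 2X^3 (degree < n)
z = 11                                # evaluation point outside the domain

def f(x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

A_z = 1
for x in xs:                          # A(z) = prod_i (z - x_i)
    A_z = A_z * (z - x) % P

def A_prime(x_i):                     # A'(x_i) = prod_{j != i} (x_i - x_j)
    r = 1
    for x in xs:
        if x != x_i:
            r = r * (x_i - x) % P
    return r

a = [f(x) for x in xs]                # the committed evaluations
b = [A_z * pow(A_prime(x) * (z - x) % P, -1, P) % P for x in xs]

assert sum(a_i * b_i for a_i, b_i in zip(a, b)) % P == f(z)
```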