<h1 id="the-sumcheck-protocol">The SUMCHECK protocol</h1>
<p>I won’t give a detailed introduction here, but the SUMCHECK protocol (Carsten Lund, Lance Fortnow, Howard J. Karloff, and Noam Nisan, “Algebraic Methods for Interactive Proof Systems”, 1992) serves as a foundation for many “verifiable computation” primitives. If you’re reading this, you likely already know its significance.</p>
<p>Let <script type="math/tex">\mathbb{F}</script> be a finite field and <script type="math/tex">f(X_1, X_2, \ldots, X_n)</script> be a multivariate polynomial of degree <script type="math/tex">d</script> over <script type="math/tex">\mathbb{F}</script>. Consider the sum <script type="math/tex">S</script> defined as:</p>
<script type="math/tex; mode=display">S = \sum_{x_1\in\{0, 1\}} \sum_{x_2\in\{0, 1\}} \cdots \sum_{x_n\in\{0, 1\}} f(x_1, x_2, \ldots, x_n) = \sum_{(x_1, x_2, \ldots, x_n) \in \{0,1\}^n} f(x_1, x_2, \ldots, x_n)</script>
<p>The SUMCHECK protocol provides an <script type="math/tex">n</script>-round interactive protocol to verify the correctness of <script type="math/tex">S</script>, using only a single evaluation of <script type="math/tex">f</script> (i.e. the verifier only has to evaluate <script type="math/tex">f</script> once, rather than <script type="math/tex">2^n</script> times).</p>
<p>The protocol advances through <script type="math/tex">n</script> rounds. In each round, the prover sends a univariate polynomial to the verifier. In response, the verifier conducts a check and responds by sending a single field element challenge.</p>
<p><strong>Round 1</strong></p>
<p>The prover computes and shares the univariate polynomial:</p>
<script type="math/tex; mode=display">f_1(X) = \sum_{(x_2, \ldots, x_n) \in \{0,1\}^{n-1}} f(X, x_2, x_3, \ldots, x_n)</script>
<p>The verifier ensures that the degree of <script type="math/tex">f_1</script> is at most <script type="math/tex">d</script>, checks that <script type="math/tex">f_1(0) + f_1(1) = S</script>, and issues the challenge <script type="math/tex">r_1 \in \mathbb{F}</script>.</p>
<p><strong>Round 2</strong></p>
<p>The prover computes and shares the univariate polynomial:</p>
<script type="math/tex; mode=display">f_2(X) = \sum_{(x_3, \ldots, x_n) \in \{0,1\}^{n-2}} f(r_1, X, x_3, x_4, \ldots, x_n)</script>
<p>The verifier ensures that the degree of <script type="math/tex">f_2</script> is at most <script type="math/tex">d</script>, checks that <script type="math/tex">f_2(0) + f_2(1) = f_1(r_1)</script>, and issues the challenge <script type="math/tex">r_2 \in \mathbb{F}</script>.</p>
<p><strong>Round <script type="math/tex">i</script></strong></p>
<p>The prover computes and shares the univariate polynomial:</p>
<script type="math/tex; mode=display">f_i(X) = \sum_{(x_{i+1}, \ldots, x_n) \in \{0,1\}^{n-i}} f(r_1, \ldots, r_{i-1}, X, x_{i+1}, x_{i+2}, \ldots, x_n)</script>
<p>The verifier ensures that the degree of <script type="math/tex">f_i</script> is at most <script type="math/tex">d</script>, checks that <script type="math/tex">f_i(0) + f_i(1) = f_{i-1}(r_{i-1})</script>, and issues the challenge <script type="math/tex">r_i \in \mathbb{F}</script>.</p>
<p><strong>Round <script type="math/tex">n</script></strong></p>
<p>The prover computes and shares the univariate polynomial:</p>
<script type="math/tex; mode=display">f_n(X) = f(r_1, \ldots, r_{n-1}, X)</script>
<p>The verifier ensures that the degree of <script type="math/tex">f_n</script> is at most <script type="math/tex">d</script>, checks that <script type="math/tex">f_n(0) + f_n(1) = f_{n-1}(r_{n-1})</script>, and issues the challenge <script type="math/tex">r_n \in \mathbb{F}</script>.</p>
<p><strong>Final check</strong></p>
<p>The verifier evaluates <script type="math/tex">f(r_1, \ldots, r_n)</script> and confirms that it equals <script type="math/tex">f_n(r_n)</script>.</p>
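<p>Putting the rounds together, the whole protocol fits in a short sketch. The sketch below (my own toy construction, not any real implementation) uses a multilinear <script type="math/tex">f</script>, i.e. degree at most 1 in each variable, so each <script type="math/tex">f_i</script> is linear and is fully determined by the pair <script type="math/tex">(f_i(0), f_i(1))</script>; the field size is also a toy choice.</p>

```python
# A toy run of the SUMCHECK protocol for a multilinear f over a prime
# field F_P. All names and the field size are illustrative choices.
import random
from itertools import product

P = 2**31 - 1  # small Mersenne prime; real systems use ~2^256-sized fields

def f(x1, x2, x3):
    # example multilinear polynomial (degree at most 1 in each variable)
    return (3 * x1 * x2 + 5 * x2 * x3 + 7 * x1 + 11) % P

n = 3
# the claimed sum S over the boolean hypercube {0,1}^n
S = sum(f(*xs) for xs in product((0, 1), repeat=n)) % P

def prover_round(rs):
    # f_i(X): fix the first variables to the challenges rs, leave one free,
    # and sum over the remaining boolean variables. Since f is multilinear,
    # f_i is linear in X, so (f_i(0), f_i(1)) determines it completely.
    def f_i(x):
        fixed = list(rs) + [x]
        return sum(f(*(fixed + list(tail)))
                   for tail in product((0, 1), repeat=n - len(fixed))) % P
    return f_i(0), f_i(1)

claim, rs = S, []
for _ in range(n):
    e0, e1 = prover_round(rs)
    assert (e0 + e1) % P == claim        # the round-i consistency check
    r = random.randrange(P)              # verifier's random challenge
    claim = (e0 * (1 - r) + e1 * r) % P  # f_i(r) by linear interpolation
    rs.append(r)

assert f(*rs) == claim  # final check: a single evaluation of f
```

<p>Note that the verifier’s work per round is constant here because <script type="math/tex">f_i</script> is linear; for higher degrees the prover would send <script type="math/tex">d+1</script> evaluations (or coefficients) per round instead of two.</p>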
<h1 id="why-does-it-work">Why does it work</h1>
<p>Below, I present an intuitive soundness argument for the SUMCHECK protocol, assuming a large field <script type="math/tex">\mathbb{F}</script>. The Schwartz-Zippel lemma is the essential tool for this.</p>
<h2 id="the-schwartz-zippel-lemma">The Schwartz-Zippel Lemma</h2>
<p>The Schwartz-Zippel lemma states: For a non-zero multivariate polynomial <script type="math/tex">P(X_1, \ldots, X_n)</script> with a total degree of at most <script type="math/tex">d</script> over the field <script type="math/tex">\mathbb{F}</script>, when evaluating <script type="math/tex">P</script> at random points <script type="math/tex">r_1, \ldots, r_n</script> from <script type="math/tex">\mathbb{F}</script>, the probability of <script type="math/tex">P(r_1, \ldots, r_n)</script> being zero is at most <script type="math/tex">\frac{d}{\lvert\mathbb{F}\rvert}</script>.</p>
<p>In large fields (such as the scalar fields of elliptic curves, which for cryptographic security have sizes around <script type="math/tex">2^{256}</script>), this probability is minuscule. The reason is that a polynomial can only have a limited number of zeros. For a univariate polynomial, this is immediately evident from the factor theorem: every zero of the polynomial corresponds to a linear factor, so a degree <script type="math/tex">d</script> polynomial can have at most <script type="math/tex">d</script> zeros.</p>
<p>A common application of the Schwartz-Zippel lemma is comparing two univariate polynomials, <script type="math/tex">f(X)</script> and <script type="math/tex">g(X)</script>, to determine their identity. By generating a random number <script type="math/tex">r \in \mathbb{F}</script> and checking if <script type="math/tex">f(r) = g(r)</script>, we can determine if they are the same. This method is <em>correct</em>, but is it also <em>sound</em>? What’s the likelihood of erroneously claiming <script type="math/tex">f(X)</script> and <script type="math/tex">g(X)</script> are identical when they aren’t?</p>
<p>Claim: If <script type="math/tex">f(X) \not= g(X)</script>, then the probability of the check being successful is at most <script type="math/tex">\frac{d}{\lvert\mathbb{F}\rvert}</script>.</p>
<p>Proof: Let’s set <script type="math/tex">P(X) = f(X) - g(X)</script>. Given that <script type="math/tex">f(X) \not= g(X)</script>, <script type="math/tex">P(X)</script> isn’t the zero polynomial. As per Schwartz-Zippel, the chance of <script type="math/tex">P(r) = f(r) - g(r) = 0</script> is at most <script type="math/tex">\frac{d}{\lvert\mathbb{F}\rvert}</script>.</p>
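<p>A minimal sketch of this identity test (the field size and the coefficients are my own toy choices):</p>

```python
# Probabilistic identity test for two univariate polynomials over F_P,
# via Schwartz-Zippel: compare evaluations at one random point.
import random

P = 2**61 - 1  # a large prime; cryptographic systems use fields near 2^256

def eval_poly(coeffs, x):
    # Horner evaluation over F_P; coeffs[i] is the coefficient of X^i
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

f_coeffs = [1, 2, 3, 4]  # f(X) = 1 + 2X + 3X^2 + 4X^3
g_coeffs = [1, 2, 3, 5]  # g(X) differs only in the X^3 coefficient

r = random.randrange(P)
# f - g is a non-zero polynomial of degree 3, so if f != g this
# comparison returns True with probability at most 3/P.
same = eval_poly(f_coeffs, r) == eval_poly(g_coeffs, r)
```

<p>This is exactly the move the SUMCHECK protocol makes in every round: rather than checking a polynomial identity symbolically, it checks it at a single random point.</p>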
<p>A succinct takeaway from Schwartz-Zippel is: Over large fields, two “low-degree” polynomials are either identical or different almost everywhere.</p>
<h2 id="sumcheck-protocol-proof">SUMCHECK protocol proof</h2>
<p>The SUMCHECK protocol harnesses this property to convert a sum with a potentially huge number of terms into a single evaluation. To see why it works, let’s construct a prover that tries to cheat the protocol by claiming a sum <script type="math/tex">\tilde S \not= S</script>.</p>
<p><strong>Round 1</strong></p>
<p>The prover needs to send a polynomial <script type="math/tex">\tilde f_1(X)</script> with the property <script type="math/tex">\tilde f_1(0) + \tilde f_1(1) = \tilde S</script>. Any other choice would make the first check fail immediately, so there would be no point.</p>
<p>Since <script type="math/tex">\tilde S \not= S</script>, the prover cannot send the honest polynomial <script type="math/tex">f_1(X)</script>. Recall that this means that the polynomial they will be sending is different almost everywhere from <script type="math/tex">f_1(X)</script>; more precisely, it is the same as <script type="math/tex">f_1</script> on at most <script type="math/tex">d</script> points.</p>
<p>The verifier then sends the challenge <script type="math/tex">r_1</script>.</p>
<p><strong>Round 2</strong></p>
<p>In round 2, the prover needs to send a polynomial <script type="math/tex">\tilde f_2(X)</script> with the property that <script type="math/tex">\tilde f_2(0) + \tilde f_2(1) = \tilde f_1(r_1)</script>.</p>
<p>Now there are two possibilities:</p>
<ul>
<li><script type="math/tex">\tilde f_1(r_1)= f_1(r_1)</script>. In this case the prover has hit the jackpot: He can simply send <script type="math/tex">f_2(X)</script> (the “honest” answer) and continue the protocol honestly to the end. He will succeed at cheating. However, the probability of this happening is only <script type="math/tex">\frac{d}{\lvert\mathbb{F}\rvert}</script>.</li>
<li><script type="math/tex">\tilde f_1(r_1) \not= f_1(r_1)</script> (which is overwhelmingly likely). In this case, the prover is in the same situation as before: He cannot send the honest <script type="math/tex">f_2(X)</script>, but only a malicious <script type="math/tex">\tilde f_2(X)</script> with <script type="math/tex">\tilde f_2(0) + \tilde f_2(1) = \tilde f_1(r_1)</script>, which can coincide with the honest answer on at most <script type="math/tex">d</script> points.</li>
</ul>
<p>This repeats every round:</p>
<p><strong>Round <script type="math/tex">i</script></strong></p>
<p>In round <script type="math/tex">i</script>, the prover needs to send a polynomial <script type="math/tex">\tilde f_i(X)</script> with the property that <script type="math/tex">\tilde f_i(0) + \tilde f_i(1) = \tilde f_{i-1}(r_{i-1})</script>.</p>
<p>Again, two possibilities:</p>
<ul>
<li><script type="math/tex">\tilde f_{i-1}(r_{i-1}) = f_{i-1}(r_{i-1})</script>. Prover wins (probability of this happening: <script type="math/tex">\frac{d}{\lvert\mathbb{F}\rvert}</script>).</li>
<li><script type="math/tex">\tilde f_{i-1}(r_{i-1}) \not= f_{i-1}(r_{i-1})</script>. The prover needs to send a <script type="math/tex">\tilde f_i(X)</script> with <script type="math/tex">\tilde f_i(0) + \tilde f_i(1) = \tilde f_{i-1}(r_{i-1})</script>, which again can coincide with the honest answer on at most <script type="math/tex">d</script> points.</li>
</ul>
<p><strong>Final check</strong></p>
<p>Finally, in the last round, the verifier checks whether <script type="math/tex">f(r_1, \ldots, r_n)=f_n(r_n)</script>. Assuming the prover didn’t “win” in one of the earlier rounds, he sent a malicious <script type="math/tex">\tilde f_n(X)</script> which coincides with the honest <script type="math/tex">f_n(X)</script> on at most <script type="math/tex">d</script> points, so the probability that the final check passes is at most <script type="math/tex">\frac{d}{\lvert\mathbb{F}\rvert}</script>.</p>
<p>As you can see from the above analysis, the prover has a small probability of “winning” and cheating the verifier in each round: If his polynomial happens to have the same evaluation on the challenge field element as the honest version, the prover can continue the protocol like an honest prover and the verification will succeed.</p>
<p>However, assuming that the field <script type="math/tex">\mathbb{F}</script> is large, this probability is small. Since there are <script type="math/tex">n</script> rounds, a union bound shows that the cheating prover can succeed with probability at most <script type="math/tex">\frac{nd}{\lvert\mathbb{F}\rvert}</script>.</p>
<p><em>Special thanks to Vitalik Buterin and Ameen Soleimani for feedback and review.</em></p>
<h1 id="rai--one-of-the-coolest-experiments-in-crypto">RAI – one of the coolest experiments in crypto</h1>
<p>I think <a href="https://reflexer.finance">RAI</a> is one of the coolest experiments in crypto right now. So I thought I’d write my version of an explainer for it, from the perspective that I have introduced in my previous article on <a href="/ethereum/2021/09/27/stablecoins-supply-demand.html">Supply and demand for stablecoins</a>. Back when I wrote it, my understanding of RAI was poor. The version of DAI it describes (single-collateral DAI, before the introduction of custodial stablecoins as collateral) is actually very close to RAI. However, there is an interesting difference: instead of applying an interest rate to balances, like DAI does, RAI directly manipulates the redemption price (which is always 1 USD for DAI). I think it’s nice to directly describe this mechanism. If you want to understand more about how Collateralized Debt Positions (CDPs) work to maintain stability, then I still recommend reading my previous article!</p>
<h1 id="why-is-rai-floating-and-not-tracking-one-currency">Why is RAI floating, and not tracking one currency?</h1>
<p>In the past, the goal of creating stablecoins was seen as creating an asset that is always worth 1 USD (or some other currency). But as Vitalik remarked in <a href="https://vitalik.ca/general/2022/05/25/stable.html">his thought experiments on automated stablecoins</a>, if you can create a coin that is always worth 1 USD, why can’t you use the same mechanism to create one that is worth 1 USD plus 20% interest per year (i.e. 1.00 USD in year 1, 1.20 USD in year 2, 1.44 USD in year 3, and so on)? After all, the only way the blockchain knows about prices is through oracles, and it’s an easy change to the oracle to make it return the value of the coin priced in this new unit (USD appreciating by 20% per year) instead of USD.</p>
<p>Clearly there is something missing in the picture. As we will see below, in order to balance supply and demand, a fully decentralized stablecoin needs to be able to give incentives to those going long (using the stablecoin) and going short (supplying the stablecoin) in some form. This is true whether it tracks USD, USD+20% interest or USD-5% interest.</p>
<p>One way of doing this is to add an interest-rate mechanism that charges interest on debt (the suppliers of the stablecoin) and credits it to the holders (the users of the stablecoin). The interest rate can, however, be negative when there is more demand for holding than there is for stablecoin debt.</p>
<p>In March 2020, DAI first depegged upward (the market price increased to more than 1 USD) and only repegged after USDC (a custodial, centralized stablecoin pegged to 1 USD) was added as one of the forms of collateral to mint DAI; otherwise it would have required a negative interest rate. Since its inception, RAI has mostly had negative interest rates. For now, it seems like decentralized stablecoins require negative interest rates most of the time.</p>
<p>When interest rates are negative, instead of having your balance change from 1 to 0.99 to 0.98, RAI keeps the balance the same and changes the actual price target of the stablecoin instead. This means that RAI looks like a floating currency, but with the property that it is much less volatile than cryptos like Ether and Bitcoin.</p>
<h2 id="the-stablecoin-problem">The stablecoin problem</h2>
<p>Cryptocurrencies are volatile. Apart from scaling, this is probably still the largest barrier to adoption. This is why there have been many attempts to create a coin that is less volatile.</p>
<p>Like any commodity, the price of a stablecoin is determined by supply and demand. At any instant, some people want to buy and sell the coin, and these inflows and outflows must be matched, so the price will adjust until they do (the price where they match is the market price). Market makers will try to cover short term spikes in supply or demand, but will adjust their quote prices if they see consistent pressure in one direction.
<img src="/assets/supply_demand.png" alt="Supply and demand" /></p>
<p>So if you want to keep a coin stable, you have to be able to somehow manipulate supply and demand such that they cross at a desired price. If the current price is too high, it’s easy to create more supply and push the price down. The trouble comes when the coin falls below the desired price (more outflows than inflows): We need to either decrease supply or increase demand, but how do we do that if the supply comes from independent holders wanting to sell?</p>
<p>There is only one decentralized and sustainable option that I know of. It requires saving during the good times in order to be able to create demand in the bad times: In order for new stablecoins to be created, enough collateral has to be added to the protocol, so that when demand decreases, this collateral can be used to generate new demand.</p>
<h3 id="collateralized-debt-positions">Collateralized debt positions</h3>
<p>Creating stablecoins by means of Collateralized Debt Positions (CDPs) is a way in which this can be implemented. A CDP is a position where a holder of a volatile currency, such as Ether, takes out a loan in the stablecoin. The CDP represents this position. It can also be seen as a leverage position in the collateral. For example, this graph represents a CDP that has 200 RAI borrowed against 1 ETH, where the value of 1 ETH is currently 1400 USD and 1 RAI is 3 USD; the holder of the CDP gets the “equity” value of this position (currently 1400-600=800 USD, but it can fluctuate with the price), and the RAI holder the debt (which is independent of the current price of Ether).</p>
<p><img src="/assets/debt-equity-rai.png" alt="RAI CDP" /></p>
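<p>The numbers in the figure can be checked directly (an illustrative sketch using the quoted toy prices):</p>

```python
# Debt/equity view of the CDP from the figure: 1 ETH of collateral,
# 200 RAI of debt, at the prices quoted above (illustrative numbers).
eth_price = 1400.0   # USD per ETH
rai_price = 3.0      # USD per RAI

collateral_usd = 1 * eth_price          # value of the locked collateral
debt_usd = 200 * rai_price              # value of the RAI debt
equity_usd = collateral_usd - debt_usd  # the CDP holder's "equity" slice

# collateralization ratio, the quantity liquidation rules are based on
collateral_ratio = collateral_usd / debt_usd
```

<p>The equity works out to 1400 − 600 = 800 USD, and the collateralization ratio to about 233%, comfortably above typical liquidation thresholds.</p>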
<p>How can CDPs create demand? Some protocols do this directly by allowing stablecoin holders to redeem against the collateral; this is, for example, how <a href="https://www.liquity.org/">Liquity</a> works. However, RAI follows MakerDAO’s original DAI in not integrating such a mechanism. But CDPs can still generate demand:</p>
<ol>
<li>While the CDP is well collateralized, charging an interest rate to the debt holder can incentivize them to take action in relation to their CDP. For example, if the interest rate on debt increases, a CDP holder may decide that holding on to the position is no longer worth it, and that it’s better to repay the debt. When they do this, they have to buy the stablecoin on the market, which creates demand.</li>
<li>Once the CDP gets close to the liquidation ratio, the holder is incentivized to close the position to avoid the liquidation penalty, unless they can add more collateral. If the position gets liquidated, the liquidator will also have to buy stablecoins in order to bid for the collateral.</li>
</ol>
<p>By only ever issuing new stablecoins in the form of debt when a CDP is created, the protocol has all the collateral in the CDPs to prop up the coin when other demand for stablecoin collapses.</p>
<p>This construction comes with a counterintuitive downside: New coins can only be created when someone is willing to take out a CDP. This requires someone who wants to take a leveraged position in the collateral.</p>
<p>This demand is currently the limiting factor for stablecoins based on this construction. In order to stop the stablecoin from increasing in value due to limited supply of willing CDP holders (in other words, demand for leverage), we will have to do one of two things:</p>
<ol>
<li>Make the leveraged position more attractive to the CDP holders</li>
<li>Make holding the stablecoin less attractive</li>
</ol>
<p>What we can do is that we charge the stablecoin holders a negative interest rate, which is paid out to the CDP holders. This actually does both: It increases the attractiveness of leveraged positions, and makes holding the stablecoin less attractive.</p>
<p>Margin exchanges have done this for a while: They too have to find this balance, as every long position has to be matched to a short position, so that the net exposure is equal to the deposited assets. They use the same mechanism to balance the books: The funding rate is paid by the type of position for which there is more demand to the side that for which there is less.</p>
<h2 id="how-rai-balances-supply-and-demand">How RAI balances supply and demand</h2>
<p>We have just learned that one mechanism to achieve the balance between CDPs (stablecoin shorts) and holders (stablecoin longs) is an interest rate transfer between the two. DAI implements this mechanism using the DAI savings rate: you can put your DAI into the savings contract and you get paid an interest.</p>
<p>Things become more awkward when the interest rate is negative, i.e. DAI holders are paying to CDP holders. In this regime, DAI balances would have to be slowly decreasing. Implementing it in this way has the advantage that your balance always represents the value in USD, and 1 DAI remains worth 1 USD. It’s less good for smart contract developers who now have to deal with the fact that balances in an account can decrease.</p>
<p>Instead, RAI goes a different way: Adjust the “redemption price” to represent the interest rate. What’s the redemption price? It’s the target value of 1 RAI. In particular it is used</p>
<ol>
<li>To borrow RAI in CDPs and to repay debt, as well as determine whether a position is underwater and should be liquidated</li>
<li>As the value at which all debts and deposits are settled when global settlement is triggered.</li>
</ol>
<p>Since the interest rate is applied to the redemption price, it is called the redemption rate. As an example, if the redemption rate is -3% and the redemption price is currently 1.00 USD, then in 1 year the redemption price will be 0.97 USD (RAI actually started with a redemption price of 3.14 USD).</p>
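<p>The compounding above can be computed directly. This is a toy sketch with annual compounding; the actual protocol compounds the rate per second, and the rate itself changes over time.</p>

```python
# Redemption price after applying a constant redemption rate for a
# number of years (toy annual compounding, not RAI's per-second version).
def redemption_price(p0, annual_rate, years):
    return p0 * (1 + annual_rate) ** years

# 1.00 USD at a -3% redemption rate becomes 0.97 USD after one year
one_year = redemption_price(1.00, -0.03, 1)
# starting from RAI's actual initial redemption price of 3.14 USD
two_years = redemption_price(3.14, -0.03, 2)
```
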
<p>Now when such a negative redemption rate is applied, two things will happen:</p>
<ol>
<li>RAI holders will expect to have 3% less value when compared to holding USD after one year</li>
<li>RAI borrowers (who buy RAI back on the market) will expect their debt to decrease in value by 3% after one year</li>
</ol>
<h3 id="how-does-rai-determine-the-redemption-rate">How does RAI determine the redemption rate</h3>
<p>Another cool component of RAI is that the redemption rate is actually automatically computed by the protocol. The protocol detects the supply and demand imbalance by tracking the deviation of the market price from the redemption price. If the market price is higher than the redemption price, it means there is more demand for RAI than there is for CDPs – and so a negative redemption rate has to be applied. Conversely, if the market price is lower than the redemption price, the redemption rate needs to be positive.</p>
<p>So a very simple design could look like this: Take the current difference between the redemption price and the market price, multiply it by some number – for simplicity say 1 – and make that the redemption rate. Say the current redemption price is 4% under market; then the redemption rate will be -4%. If it is 10% above, the redemption rate will be +10%.</p>
<p>If we did this, it would constitute a P controller (P for proportional), which is actually what RAI did initially. RAI’s adjustment mechanism was later updated to use a <a href="https://en.wikipedia.org/wiki/PID_controller">PI controller</a> that takes the difference between market and redemption prices (the error) as an input. A PI controller, in addition to the current value, also uses the integral (I), so takes into consideration how much the value has deviated in the past. This makes the system more stable and means interest rates fluctuate a bit less with short term price changes.</p>
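<p>A stripped-down version of such a controller is below. The gains, the update step, and all names are illustrative choices of mine – RAI’s actual controller has tuned parameters and per-second updates.</p>

```python
# Minimal PI controller for a redemption rate (sketch; not RAI's
# actual parameters or update logic).
class PIController:
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.integral = 0.0  # accumulated past error (the "I" state)

    def update(self, market_price, redemption_price, dt=1.0):
        # error > 0 when the market trades above redemption,
        # which must push the redemption rate negative
        error = (market_price - redemption_price) / redemption_price
        self.integral += error * dt
        return -(self.kp * error + self.ki * self.integral)

ctrl = PIController(kp=1.0, ki=0.1)
# market 4% above redemption: the P term alone would give -4%,
# and the I term adds a small extra contribution
rate = ctrl.update(market_price=1.04, redemption_price=1.00)
```

<p>Setting <code class="highlighter-rouge">ki = 0</code> recovers the pure P controller described above; the integral term is what lets persistent deviations keep pushing the rate even after the instantaneous error shrinks.</p>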
<p>The <a href="https://reflexer.finance">RAI website</a> shows the history of the RAI redemption price and market price, as well as the interest rates, which provides a nice demonstration of how this mechanism works.</p>
<p><img src="/assets/rai-rate-screenshot.png" alt="RAI rates" /></p>
<p>On top, you can see the market (red) and redemption prices (grey). The market price is typically above the redemption price, representing excess demand for RAI over CDPs, which the protocol compensates by applying a negative redemption rate – this is why the redemption price is slowly decreasing.</p>
<p>The lower graph shows how the redemption rate is computed. The blue curve (<code class="highlighter-rouge">p_rate</code>) is the P part of the PI controller. It is proportional to the error and, indeed, the graph looks like the inverted difference between the red and grey curves in the upper graph. The orange curve (<code class="highlighter-rouge">i_rate</code>) is much smoother and represents the I (integral) part of the controller, which reacts to past deviations. The sum of the <code class="highlighter-rouge">p_rate</code> and the <code class="highlighter-rouge">i_rate</code> is the redemption rate, which determines how fast the redemption price is going down at any given time.</p>
<p>The higher the market price is above the redemption price, the faster the redemption price thus decreases – rebalancing supply and demand as the expected value of holding RAI decreases (and RAI debt becomes more attractive).</p>
<h2 id="but-what-pulls-rai-back-to-the-redemption-price">But what pulls RAI back to the redemption price</h2>
<p>There’s one thing we skipped over. The redemption price represents the target value of RAI in the protocol, and we’ve just been taking it for granted that lowering the price will make long RAI positions less attractive and short positions more attractive. But this assumes that market participants have some expectation of being able to use RAI at or near the redemption price – which requires some force that pulls the market at least in the direction of the redemption price, so that lowering the redemption price has a meaning.</p>
<p>Of course, we can expect “global settlement” will solve this: There is a mechanism in the protocol, which can be triggered by governance, that settles all deposits and debts according to the current redemption price. It is expected that this mechanism will be triggered when the deviations become too extreme. So maybe that’s the reason why the redemption price matters?</p>
<p>Actually, the global settlement is a cool emergency feature, but it is not necessary to explain why the market price will track the redemption price, assuming (some) rational market participants (with enough capital).</p>
<p>Let’s assume that market participants just ignore the redemption price entirely. What would happen?</p>
<ol>
<li>The current minimal collateralization for CDPs is 135%. What that means is that if the market price is more than 35% above the redemption price, anyone can just mint RAI for Ether and “forget” about their CDP – just sell the RAI, buy more Ether with it and take the arbitrage profit. RAI can’t trade significantly more than 35% above redemption price for this reason.</li>
<li>There is no strict bound like this from below – but we can do a thought experiment: Let’s say that RAI trades consistently 10% below redemption price. Note that this would lead to an enormous redemption rate of something like 240% per year (in the long term, when the integral term has had enough time to accumulate). CDP holders have to take this redemption rate into account – eventually they will get liquidated, when their collateralization ratio (which is computed using redemption price) reaches 135%. They thus have a strong incentive to buy RAI before this happens.</li>
<li>Similarly, we can find that if RAI trades 10% above the redemption price, the negative interest rate will reach something crazy like -70% (again in the long term, when the integral term has had enough time to accumulate this), which means there is a very strong incentive for RAI holders to get out before this happens. If they don’t, lots of newly minted RAI from new CDPs will eventually be available at the much lower redemption price.</li>
</ol>
<p>Combined, these forces mean that while the market price can deviate, it cannot deviate too far and too long from the redemption price.</p>
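<p>The upper bound from point 1 can be turned into a back-of-the-envelope arbitrage calculation (function name and prices are illustrative):</p>

```python
# Mint-and-dump arbitrage bound from the 135% minimum collateralization.
# Collateralization is measured at the redemption price, so minting 1 RAI
# requires locking 1.35x the redemption price worth of ETH.
def mint_and_dump_profit(market_price, redemption_price, min_ratio=1.35):
    # profit per RAI if the minter sells at market and abandons the CDP
    collateral_cost = min_ratio * redemption_price
    return market_price - collateral_cost

# market 40% above redemption: positive profit, so the price can't stay there
profit = mint_and_dump_profit(market_price=4.20, redemption_price=3.00)
```

<p>Whenever this profit is positive, minters can extract it risk-free, pushing the market price back under 1.35× the redemption price.</p>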
<h2 id="how-does-tracking-another-currency-change-rai">How does tracking another currency change RAI?</h2>
<p>An interesting question is: How would RAI be different if instead of tracking USD, it had been set up to track the Euro, the Chinese Yuan, or maybe the 6 months moving average of the Ether price?</p>
<p>To start with, we will do a thought experiment (one proposed by <a href="https://vitalik.ca/general/2022/05/25/stable.html">Vitalik</a>): What if RAI was set up to track USD + 20% (a version of the USD that comes with a 20% interest rate)? Let’s call this asset RAI-PONZI.</p>
<p>Obviously holding this asset seems really attractive and having debt in that asset much less so. The price of RAI-PONZI will keep rising as buyers want the high interest rates and there are few people available wanting to take out CDPs in RAI-PONZI.</p>
<p>As RAI-PONZI rises above the redemption price, the redemption rate will get more and more negative. It will reach -20%, which makes RAI-PONZI equivalent to USD. From there, it will likely go even further: Currently RAI’s redemption rate is about -10%, so I would expect RAI-PONZI to settle at -30% in current market conditions. At that point it becomes equivalent to current RAI, so it makes sense that the market would settle there, assuming the same risk tolerance among participants.</p>
<p>This is actually nothing else than creating an “offset” in the redemption rate of +20%, and an equivalent price offset.</p>
<p>What can we learn from this? The long-term expected gains or losses of a currency do not impact how RAI behaves. If RAI were pegged to Turkish Lira, which seems to lose about 25% of its value compared to the USD every year, it would probably not behave too differently on long timescales. Let’s call this asset RAI-TRY.</p>
<p>Where RAI-TRY is different is on short timescales and unexpected shocks. If the Lira suddenly drops 20%, due to a black swan, then RAI-TRY will do so, too. The same goes for a sudden increase.</p>
<p>What exact currency is used as an input to the RAI oracles therefore probably does not matter that much. It is likely that most major currencies like EUR or GBP will result in a very similar asset, except that it will react slightly differently under market shocks. This is because any expectation of different performance will just be corrected by market participants (so if they expect that GBP will lose 1% per year vs USD, they will just correct for it by picking a different redemption rate).</p>
<h2 id="why-do-i-think-rai-is-such-a-cool-experiment">Why do I think RAI is such a cool experiment?</h2>
<p>There have been many attempts to solve the decentralized stablecoin problem. MakerDAO with DAI was probably the first that solved a major part of the puzzle – how to stop it from crashing to zero in a confidence crisis. However, it turned out that they had still missed one part, which is how to stop it from going up.</p>
<p>Finally, RAI came and added this missing piece in a slightly unexpected way – whilst many had been expecting a DAI with negative interest rate, doing it via the redemption rate adjustment is much more elegant. And at the same time, it allows us to learn a lot:</p>
<p>First and foremost, the point of a stablecoin is not to be pegged to USD. It is to provide an asset with low volatility. RAI does indeed solve this task, and has much lower volatility than the underlying collateral, Ether.</p>
<p>RAI therefore is something like a new currency, which is underlined by the fact that it doesn’t really matter that much which fiat currency is used for the oracles, as long as it is reasonably stable. In fact you can change the reference asset while the system is working without much of a problem.</p>
<p>Secondly, the current market structure dictates that if users want a decentralized stablecoin, they have to pay a “price for stability”, in the form of a slowly decaying price of RAI (vs the USD). This is because there is a lot of demand for stablecoins, and limited demand for leverage on decentralized assets like Ether. While it currently feels like this might be an eternal truth, it does not have to be. It could be that this balance tips in the favour of stablecoins again in a bull market, as demand for leverage increases. However, before that happens, it is likely that we would have to see MakerDAO shrug off all of its custodial stablecoin exposure (in order to get their funding rates off zero), which currently seems a long way off.</p>
<p>What I love about RAI is that it is a completely fair way to determine the “price of stability”, and also much cleaner than others. I have posited in my <a href="/ethereum/2021/09/27/stablecoins-supply-demand.html">past article on stablecoins</a> that the price of stability isn’t currently fairly determined – and that if it were, it may well be a negative interest rate. Many see inflation as a scourge, but the reality is that having a “guaranteed stable asset” as we implicitly expect currencies to be has to be paid for by someone. If that price were determined using a market like it is in RAI, what would it be?</p>
<p>At least in the decentralized world, we now have an answer, that the price is typically higher than the 2% inflation allowed for by many central banks. Obviously, the number of assets that can be used as collateral in decentralized stablecoins is small, and the result in the real world may well be different. After all, there are 100s of trillions of collateral available, compared to currently less than one trillion in crypto assets.</p>
<p>RAI is, in many ways, central banking in a pure form. We will probably learn many things from this experiment, and that already makes it a worthwhile project.</p>
<p><em>Special thanks to Vitalik Buterin and Ameen Soleimani for feedback and review.</em></p>
<h1 id="run-the-majority-client-at-your-own-peril">Ethereum Merge: Run the majority client at your own peril!</h1>
<p><em>2022-03-24 · <a href="https://dankradfeist.de/ethereum/2022/03/24/run-the-majority-client-at-your-own-peril">dankradfeist.de</a></em></p>
<p><em>Special thanks to Vitalik Buterin, Hsiao-Wei Wang and Caspar Schwarz-Schilling for feedback and review.</em></p>
<p><strong>TL;DR</strong>: For reasons of both safety and liveness, Ethereum has chosen a multi-client architecture. In order to encourage stakers to diversify their setups, penalties are higher for correlated failures. A staker running a minority client will thus typically only lose moderate amounts should their client have a bug, but running a majority client can incur a total loss. Responsible stakers should therefore look at the client landscape and choose a less popular client.</p>
<h2 id="why-do-we-need-multiple-clients">Why do we need multiple clients?</h2>
<p>There are arguments why a single client architecture would be preferable. Developing multiple clients incurs a substantial overhead, which is the reason why we haven’t seen any other blockchain network seriously pursue the multi-client option.</p>
<p>So why does Ethereum aim to be multi-client? Clients are very complex pieces of code and likely contain bugs. The worst of these are so-called “consensus bugs”, bugs in the core state transition logic of the blockchain. One often-quoted example of this is the so-called “infinite money supply” bug, in which a buggy client accepts a transaction printing arbitrary amounts of Ether. If someone finds such a bug and isn’t stopped before they get to the exit doors (i.e. making use of the funds by sending them through a mixer or to an exchange), it would massively crash the value of Ether.</p>
<p>If everyone runs the same client, stopping this requires manual intervention, because the chain, all smart contracts and exchanges will keep running as usual. Even a few minutes could be enough to execute a successful attack and sufficiently disperse the funds to make it impossible to roll back <em>only</em> the attacker’s transactions. Depending on the amount of ETH printed, the community would likely coordinate on rolling back the chain to before the exploit (after having identified and fixed the bug).</p>
<p>Now let’s have a look at what happens when we have multiple clients. There are two possible cases:</p>
<ol>
<li>
<p><strong>The client with the bug represents less than 50% of the stake.</strong> The client will produce a block with the transaction exploiting the bug, printing ETH. Let’s call this chain <strong>A</strong>.</p>
<p>However, the majority of stake running a non-faulty client will ignore this block, because it is <strong>invalid</strong> (to them the printing ETH operation is simply invalid). They will build an alternative chain <strong>B</strong> that does not contain the invalid block.</p>
<p>Since the correct clients are in the majority, chain <strong>B</strong> will accumulate more attestations. Hence, even the buggy client will vote for chain <strong>B</strong>; as a result chain <strong>B</strong> will accumulate 100% of the votes and chain <strong>A</strong> will die. The chain will continue as if the bug never happened.</p>
</li>
<li>
<p><strong>The majority of stake uses the buggy client.</strong> In this case, chain <strong>A</strong> will accumulate the majority of votes. But since <strong>B</strong> has less than 50% of all attestations, the offending client will never see a reason to switch from chain <strong>A</strong> to chain <strong>B</strong>. We will thus see a chain split.</p>
</li>
</ol>
<p><img src="/assets/chainsplit.png" alt="" /></p>
<p>Case 1 is the ideal case. It would most likely lead to a single orphaned block which most users wouldn’t even notice. Devs can debug the client, fix the bug, and everything is great. Case 2 is clearly less than ideal, but still a better outcome than if there’s only a single client – most people would very quickly detect that there is a chain split (you can do this automatically by running several clients), exchanges would quickly suspend deposits, DeFi users could tread carefully while the split is resolved. Basically, compared to the single client architecture, this still gives us a big flashing red warning light that allows us to protect against the worst outcomes.</p>
<p>Case 2 will be much worse if the buggy client is run by more than 2/3 of the stake, in which case it would be finalizing the invalid chain. More on that later.</p>
<p>Some people think a chain split is so catastrophic that in itself it is an argument for a single-client architecture. But note that the chain split only happened because of a bug in the client. With a single client, if you wanted to fix this and return the chain back to status quo ante, you would have to roll back to the block before the bug happened – that’s just as bad as the chain split! So as bad as a chain split sounds, in the case where there is a critical bug in a client, it’s actually a feature, not a bug. At least you can see that something is seriously wrong.</p>
<h2 id="incentivising-client-diversity-anti-correlation-penalties">Incentivising client diversity: anti-correlation penalties</h2>
<p>It is clearly good for the network if the stake is split across multiple clients, with the best case being each client owning less than 1/3 of the total stake. This will make it resilient against a bug in any individual client. But why would stakers care? If there aren’t any incentives by the network, it’s unlikely that they will take on the cost of switching to a minority client.</p>
<p>Unfortunately we can’t make rewards directly dependent on what client a validator runs. There is no objective way to measure this that can’t be spoofed.</p>
<p>However, you can’t hide when your client has a bug. And this is where anti-correlation penalties come in: The idea is that if your validator does something bad, then the penalty is higher if more validators make a mistake <strong>around the same time</strong>. In other words, you get punished for correlated failures.</p>
<p>In Ethereum, you can currently get slashed for two behaviours:</p>
<ol>
<li>Signing two blocks at the same height</li>
<li>Creating a pair of slashable attestations (<a href="https://blog.ethereum.org/2020/01/13/validated-staking-on-eth2-1-incentives/">surround or double votes</a>)</li>
</ol>
<p>When you get slashed, you don’t usually lose all your funds. At the time of this writing (Altair fork), the default penalty is actually quite small: You would only lose 0.5 ETH, or about 1.5% of your staked Ether (ultimately this will be increased to 1 ETH or 3%).</p>
<p>However, there is a catch: There is an additional penalty that is dependent on all other slashings that occur during the 4096 epochs (18 days) before and after your validator was slashed. You are further penalized by an amount that is proportional to the total amount slashed during this period.</p>
<p>This can be a much larger penalty than the initial penalty. Currently (Altair fork) it is set so that if more than half of the full staking balance got slashed during this period, then you will lose all your funds. Ultimately this will be set so that you will lose all of your stake if 1/3 of other validators got slashed. 1/3 was chosen because this is the minimum amount of the stake that has to equivocate in order to create a consensus failure.</p>
<p><img src="/assets/proportional-penalties.png" alt="" /></p>
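<p>As a rough sketch (a simplification of the consensus-spec logic; the function name and the ETH-denominated units are illustrative assumptions, not the spec’s Gwei arithmetic), the proportional penalty can be written as:</p>

```python
# Simplified sketch of the proportional slashing penalty. The real spec
# works in Gwei increments, but the proportionality is the same.
def proportional_penalty(balance, total_slashed, total_balance, multiplier=3):
    # multiplier = 2 roughly matches Altair (1/2 of stake slashed in the
    # 36-day window -> total loss); 3 is the eventual value (1/3 -> total loss).
    return balance * min(total_slashed * multiplier, total_balance) // total_balance

# An isolated 32 ETH slashing among ~16M ETH of stake costs almost nothing extra:
print(proportional_penalty(32, 32, 16_000_000))               # 0 ETH
# If 1/3 of all stake is slashed in the window, the extra penalty is near-total:
print(proportional_penalty(32, 16_000_000 // 3, 16_000_000))  # 31 of 32 ETH
```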
<h3 id="the-other-anti-correlation-penalty-the-quadratic-inactivity-leak">The other anti-correlation penalty: The quadratic inactivity leak</h3>
<p>Another way a validator can fail is by being offline. Again there is a penalty for it, but its mechanism is very different. We do not call it slashing, and it’s usually small: Under normal operation, a validator that is offline is penalized by the same amount that they would be gaining if they were validating perfectly. At the time of this writing, this is 4.8% per year. It is probably not worth breaking a sweat if your validator is offline for a few hours or days, for example due to a temporary internet outage.</p>
<p>It becomes very different when more than 1/3 of all validators are offline. Then the beacon chain cannot finalize, which threatens a fundamental property of the consensus protocol, namely liveness.</p>
<p>To restore liveness in a scenario like this the so-called “quadratic inactivity leak” kicks in. The total penalty amount rises quadratically with time if a validator continues being offline while the chain is not finalizing. Initially it is very low; after ~4.5 days, the offline validators will lose 1% of their stake. However, it increases to 5% after ~10 days and to 20% after ~21 days (these are Altair values, they will be doubled in the future).</p>
<p><img src="/assets/quadratic-leak.png" alt="" /></p>
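<p>These numbers can be reproduced with a rough quadratic model (an approximation, not the spec’s exact inactivity-score mechanics): the cumulative loss after <em>n</em> non-finalizing epochs is about <em>n²/2Q</em>, with <em>Q</em> the Altair inactivity penalty quotient.</p>

```python
# Rough model of the quadratic inactivity leak (an approximation of the
# Altair mechanism, not the exact per-epoch accounting).
EPOCHS_PER_DAY = 225   # 32 slots * 12 s per slot -> 6.4 minutes per epoch
Q_ALTAIR = 3 * 2**24   # INACTIVITY_PENALTY_QUOTIENT_ALTAIR

def leaked_fraction(days):
    """Approximate fraction of stake lost after `days` offline without finality."""
    n = days * EPOCHS_PER_DAY
    return n * n / (2 * Q_ALTAIR)

for d in (4.5, 10, 21):
    print(f"{d:>5} days offline -> ~{leaked_fraction(d):.1%} of stake leaked")
```

This reproduces the ~1% / ~5% / ~20% figures in the text (the 21-day value comes out slightly above 20% in this simplified model).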
<p>This mechanism is designed so that in the case of a catastrophic event that annihilates a large number of validator operations, the chain will eventually be able to finalize again. As the offline validators lose larger and larger parts of their stake, they will make up a smaller and smaller share of the total, and as their stake drops below 1/3, the remaining online validators gain the required 2/3-majority, allowing them to finalize the chain.</p>
<p>However, there is another case where this becomes relevant: In certain cases, validators cannot vote for the valid chain anymore because they accidentally locked themselves into an invalid chain. More on this below.</p>
<h1 id="how-bad-is-it-to-run-the-majority-client">How bad is it to run the majority client?</h1>
<p>In order to understand what the dangers are, let’s take a look at three failure types:</p>
<ol>
<li>Mass slashing event: Due to a bug, majority-client validators sign slashable attestations</li>
<li>Mass offline event: Due to a bug, all majority-client validators go offline</li>
<li>Invalid block event: Due to a bug, majority-client validators all attest to an invalid block</li>
</ol>
<p>There are other kinds of mass failures and slashings that can happen, but I’m restricting myself to those related to client bugs (the ones you should consider when choosing which client to run).</p>
<h2 id="scenario-1-double-signing">Scenario 1: Double signing</h2>
<p>This is probably the scenario validator operators fear most: a bug leading the validator client to sign slashable attestations. One example would be two attestations voting for the same target epoch, but with different payloads. Because it is a client bug, it’s not just one staker that is affected, but all stakers that run this particular client. When the equivocations are detected, the slashings will be a bloodbath: All affected stakers will lose 100% of their staked funds. This is because we are considering a majority client: If the stake of the affected client were only 10%, then “only” about 20% of their stake would be slashed (in Altair; 30% with the final penalty parameters in place).</p>
<p>The damage in this case is clearly extreme, but I also think it is <em>extremely unlikely</em>. The conditions for slashable attestations are simple, and that’s why validator clients (VCs) were built to enforce them. The validator client is a small, well audited piece of software. A bug of this magnitude is unlikely.</p>
<p>We have seen some slashings so far, but as far as I know all of them were due to operator failures – almost all of them resulting from an operator running the same validator in several locations. Since these aren’t correlated, the slashing amounts are small.</p>
<h2 id="scenario-2-mass-offline-event">Scenario 2: Mass offline event</h2>
<p>For this scenario, we assume that the majority client has a bug, which when triggered, leads to a crash of the client. An offending block has been integrated into the chain, and whenever the client encounters that block, it goes offline, leaving it unable to participate any further in consensus. The majority client is now offline, so the inactivity leak kicks in.</p>
<p>Client developers will scramble to get things back together. Realistically within hours, at most in a few days, they will release a bug fix that will remove the crash.</p>
<p>In the meantime, stakers also have the option to simply switch to another client. As long as enough do this to get more than 2/3 of all validators online, the quadratic inactivity leak will stop. It is not unlikely that this will happen before there is a fix for the buggy client.</p>
<p>This scenario is not unlikely (bugs that lead to crashes are one of the most common types), but the total penalty would probably be less than 1% of the stake affected.</p>
<h2 id="scenario-3-invalid-block">Scenario 3: Invalid block</h2>
<p>For this scenario, we consider the case where the majority client has a bug that produces an invalid block, and also accepts it as valid – i.e. when other validators using the same client see the invalid block, they will consider it as valid, and hence attest to it.</p>
<p>Let’s call the chain that includes the invalid block chain <strong>A</strong>. As soon as the invalid block is produced, two things will happen:</p>
<ol>
<li>All correctly functioning clients will ignore the invalid block and instead build on the latest valid head producing a separate chain <strong>B</strong>. All correctly working clients will vote and build on chain <strong>B</strong>.</li>
<li>The faulty client considers both chain <strong>A</strong> and <strong>B</strong> valid. It will thus vote for whichever of the two it currently sees as the heaviest chain.</li>
</ol>
<p><img src="/assets/chainsplit.png" alt="" /></p>
<p>We need to distinguish three cases:</p>
<ol>
<li>
<p><strong>The buggy client has less than 1/2 of total stake.</strong> In this case, all correct clients vote and build on chain <strong>B</strong> eventually making it the heaviest chain. At this point even the buggy client will switch to chain <strong>B</strong>. Other than one or a few orphaned blocks, nothing bad will happen. This is the happy case, and why it is great to only have sub-majority clients.</p>
</li>
<li>
<p><strong>The buggy client has more than 1/2 and less than 2/3 of the stake.</strong> In this case, we will see two chains being built – <strong>A</strong> by the buggy client, and <strong>B</strong> by all other clients. Neither chain has a 2/3-majority and therefore they cannot finalize.
As this happens, developers will scramble to understand why there are two chains. As they figure out that there is an invalid block in chain <strong>A</strong>, they can proceed to fix the buggy client. Once it is fixed, it will recognize chain <strong>A</strong> as invalid. It will thus start building on chain <strong>B</strong>, which will allow it to finalize.
This is very disruptive for users. While the confusion about which chain is valid will hopefully be short (less than an hour), the chain probably won’t finalize for many hours, potentially even a day.
But for stakers, even the ones running the buggy client, the penalties would still be relatively light. They will receive the “inactivity leak” penalty for not participating in chain <strong>B</strong> while they were building the invalid chain <strong>A</strong>. However, since this is likely less than a day, we are talking about a penalty of less than 1% of the stake.</p>
</li>
<li>
<p><strong>The buggy client has more than 2/3 of the stake.</strong> In this case, the buggy client will not just build chain <strong>A</strong> – it will actually have enough stake to “finalize” it. Note that it will be the only client that will think that chain <strong>A</strong> is finalized. One of the conditions of finalization is that the chain is valid, and to all other correctly operating clients, chain <strong>A</strong> will be invalid.
However, due to how the Casper FFG protocol works, when a validator has finalized chain <strong>A</strong>, they can never take part in another chain that is in conflict with <strong>A</strong> without getting slashed, <em>unless that chain is finalized</em> (for anyone interested in the details, see <a href="#A2-Why-can%E2%80%99t-the-buggy-client-switch-to-chain-B-once-it-has-finalized-chain-A">Appendix 2</a>). So once chain <strong>A</strong> has been finalized, the validators running the buggy client are in a terrible bind: They have committed to chain <strong>A</strong>, but chain <strong>A</strong> is invalid. They cannot contribute to <strong>B</strong> because it hasn’t finalized yet. Even the bugfix to their validator software won’t help them – they have already sent the offending votes.
What will happen now is very painful: Chain <strong>B</strong>, which is not finalizing, will go into the quadratic inactivity leak. Over several weeks, the offending validators will leak their stake until enough has been lost so that <strong>B</strong> will finalize again. Let’s say they started off with 70% of the stake – then they would lose 79% of their stake, because this is how much they would need to lose in order to represent less than 1/3 of the total stake.
At this point, chain <strong>B</strong> will finalize again and all stakers can switch to it. The chain will be healthy again, but the disruption will have lasted weeks, and millions of ETH were destroyed in the process.</p>
</li>
</ol>
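<p>The “70% of stake loses 79%” arithmetic in case 3 can be checked directly (a sketch of the end state only; the actual leak dynamics over the weeks are more complex):</p>

```python
# If the buggy cohort starts with fraction s of the stake, chain B finalizes
# again once the cohort has leaked down to 1/3 of the remaining total, i.e.
# down to half of the honest stake: solve s*(1-f) = (1-s)/2 for f.
def required_leak_fraction(s):
    return 1 - (1 - s) / (2 * s)

print(f"{required_leak_fraction(0.70):.0%}")  # 79%
```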
<p>Clearly, case 3 is nothing short of a catastrophe. This is why we are extremely keen not to have any client with more than 2/3 of the stake. Then no invalid block can ever be finalized, and this can never happen.</p>
<h2 id="risk-analysis">Risk analysis</h2>
<p>So how do we evaluate these scenarios? A typical risk analysis strategy is to evaluate the likelihood of an event happening (1 – extremely unlikely, 5 – quite likely) as well as the impact (1 – very low, 5 – catastrophic). The most important risks to focus on are those that score high on both metrics, represented by the product of impact and likelihood.</p>
<table>
<thead>
<tr>
<th>Scenario</th>
<th>Likelihood</th>
<th>Impact</th>
<th>Product (Impact * Likelihood)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Scenario 1</td>
<td>1</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>Scenario 2</td>
<td>4</td>
<td>2</td>
<td>8</td>
</tr>
<tr>
<td>Scenario 3</td>
<td>3</td>
<td>5</td>
<td>15</td>
</tr>
</tbody>
</table>
<p>Looking at this, by far the highest priority is scenario 3. The impact when one client is in a 2/3 supermajority is quite catastrophic, and it is also a relatively likely scenario. To highlight how easily such a bug can happen, a bug of this sort happened recently on the Kiln testnet (see <a href="https://hackmd.io/@prysmaticlabs/HyZqgTA-c">Kiln testnet block proposal failure</a>). In this case, Prysm did detect that the block was faulty after proposing it, and did not attest to it. Had Prysm considered that block as valid, and this had happened on mainnet, then we would be in the catastrophic case described in case 3 of scenario 3 – because Prysm <a href="https://pools.invis.cloud/">currently has a 2/3 majority in mainnet</a>. So if you are currently running Prysm, there is a very real risk that you could lose all your funds and you should consider switching clients.</p>
<p>Scenario 1, which people are probably most worried about, received a relatively low rating. The reason for this is that I consider the likelihood of it happening to be quite low, because I think that the Validator Client software is very well implemented in all clients and it is unlikely to produce slashable attestations or blocks.</p>
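<p>The ranking in the table can be reproduced in a few lines (the 1–5 scores are simply the ratings from the table above, not anything computed):</p>

```python
# Risk = likelihood * impact, sorted highest first.
scenarios = {
    "Scenario 1 (mass double-signing)": (1, 5),
    "Scenario 2 (mass offline event)":  (4, 2),
    "Scenario 3 (invalid block)":       (3, 5),
}
ranked = sorted(scenarios.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: {likelihood} x {impact} = {likelihood * impact}")
```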
<h1 id="what-are-my-options-if-i-currently-run-the-majority-client-and-im-worried-about-switching">What are my options, if I currently run the majority client and I’m worried about switching?</h1>
<p>Switching clients can be a major undertaking. It also comes with some risks. What if the slashing database is not properly migrated to the new setup? There might be a risk of getting slashed, which completely defeats the purpose.</p>
<p>There is another option that I would suggest to anyone who is worried about this. It is also possible to leave your validator setup exactly as it is (no need to take those keys out etc.) and only switch the beacon node. This is extremely low risk because as long as the validator client is working as intended, it will never double sign and thus cannot be slashed. Especially if you have large operations, where changing the validator client (or remote signer) infrastructure would be very expensive and might require audits, this may be a good option. Should the setup perform less well than expected, it can also be easily switched back to the original client or another minority client can be tried.</p>
<p>The nice thing is that you have very little to worry about when switching your beacon node: The worst thing it can do to you is to be temporarily offline. That’s because the beacon node itself can never produce a slashable message on its own. And you can’t end up in scenario 3 if you’re running a minority client, because even if you would vote for an invalid block, that block would not get enough votes to be finalized.</p>
<h1 id="how-about-the-execution-clients">How about the execution clients?</h1>
<p>What I have written above applies to the Consensus clients – Prysm, Lighthouse, Nimbus, Lodestar and Teku, of which at the time of writing, Prysm likely has a 2/3 majority on the network.</p>
<p>All of this applies in the same way to the execution client. Should Go-ethereum, likely to be the majority execution client after the merge, produce an invalid block, it could get finalized and thus cause the catastrophic failure described in scenario 3.</p>
<p>Luckily, we now have three other execution clients ready for production – Nethermind, Besu and Erigon. If you are a staker, I highly recommend running one of these. If you are running a minority client, the risks are very low! But if you run the majority client, you are at serious risk of losing all your funds.</p>
<h1 id="appendix">Appendix</h1>
<h2 id="a1-why-is-there-no-slashing-for-invalid-blocks">A1: Why is there no slashing for invalid blocks?</h2>
<p>In Scenario 3, we have to rely on the quadratic inactivity leak to punish validators for proposing and voting for an invalid block. That’s strange – why don’t we just punish them directly? It would be faster and less painful to watch.</p>
<p>There are actually two reasons why we don’t do this – one is that we currently can’t, but even if we could, we may well not do it:</p>
<ol>
<li>
<p>Currently, it is practically impossible to introduce a penalty (“slashing”) for invalid blocks. The reason for this is that neither the beacon chain nor the execution chain are currently “<a href="/ethereum/2021/02/14/why-stateless.html">stateless</a>” – i.e. in order to check whether a block is valid, you need a context (the “state”) that is 100s of MB (beacon chain) or GB (execution chain) large. This means there is no “concise proof” that a block is invalid. We need such a proof to slash a validator: The block that “slashes” a validator needs to include a proof that the validator has made an offence.
There are ways around this without having a stateless consensus, however it would involve much more complex constructions such as multi-round fraud proofs, such as Arbitrum is currently using for their rollup.</p>
</li>
<li>
<p>The second reason why we might not be that eager to introduce this type of slashing even if we could, is because producing invalid blocks is a much harder thing to protect against than the current slashing conditions. The current conditions are extremely simple and can be validated easily in a few lines of code by validator clients. This is why I consider scenario 1 above so unlikely – slashable messages have so far only been produced by operator failures, and I think that’s likely to remain the case.
Adding slashing for producing invalid blocks (or attesting to them) raises the risks for stakers. Now even those running minority clients could risk serious penalties.</p>
</li>
</ol>
<p>In summary, we are unlikely to see direct penalties for invalid blocks and/or attestations to them for the next few years.</p>
<h2 id="a2-why-cant-the-buggy-client-switch-to-chain-b-once-it-has-finalized-chain-a">A2: Why can’t the buggy client switch to chain B once it has finalized chain A?</h2>
<p>This section is for anyone who wants to understand in more detail why the buggy client can’t just switch back and has to suffer the horrendous inactivity leak. For this we have to look at how Casper FFG finalization works.</p>
<p>Each attestation contains a source and a target checkpoint. A checkpoint is the first block of an epoch. If there is a link from one epoch to another which has a total of >2/3 of all stake voting for it (i.e., there are this many attestations with the first checkpoint as the “source” and the second checkpoint as the “target”), then we call this a “supermajority link”.</p>
<p>An epoch can be “justified” and “finalized”. These are defined as follows:</p>
<ol>
<li>Epoch 0 is justified</li>
<li>An epoch is justified if there is a supermajority link from a justified epoch.</li>
<li>An epoch X is finalized if (1) the epoch X is justified and (2) the next epoch is also justified, with the source of the supermajority link being epoch X</li>
</ol>
<p>Rule 3 is slightly simplified (there are more conditions under which an epoch can be finalized, but they aren’t important for this discussion). Now let’s come to the slashing conditions. There are two rules for slashing attestations. Both compare a pair of attestations V and W:</p>
<ol>
<li>They are slashable if the target of V and W is the same epoch (i.e. the same height), but they don’t vote for the same checkpoint (double vote)</li>
<li>They are slashable if V “jumps over” W. What this means is that (1) the source of V is earlier than the source of W and (2) the target of V is later than the target of W (surround vote)</li>
</ol>
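<p>The two conditions can be sketched in code (a simplification: real attestations also carry checkpoint block roots, which are reduced here to bare epoch numbers):</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    source: int  # epoch of the source checkpoint
    target: int  # epoch of the target checkpoint

def is_slashable(v: Vote, w: Vote) -> bool:
    # Rule 1: same target epoch, different vote (double vote).
    double_vote = v.target == w.target and v != w
    # Rule 2: v "jumps over" w (surround vote).
    surround = v.source < w.source and v.target > w.target
    return double_vote or surround

# A vote that jumps over the link which finalized the invalid epoch
# surrounds it, and the pair is slashable:
assert is_slashable(Vote(source=4, target=8), Vote(source=5, target=6))
# Two consecutive, non-conflicting links are fine:
assert not is_slashable(Vote(source=4, target=5), Vote(source=5, target=6))
```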
<p>The first condition is obvious: It prevents simply voting for two different chains at the same height. But what does the second condition do?</p>
<p>Its function is to slash all validators that take part in finalizing two conflicting chains (which should never happen). To see why, let’s look at our scenario 3 again, in the worst case where the buggy client is in a supermajority (>2/3 of the stake). As it continues voting for the faulty chain, it will finalize the epoch with the invalid block, like this:</p>
<p><img src="/assets/invalid-finalization.png" alt="" /></p>
<p>The rounded boxes in this picture represent epochs, not blocks. The green arrow is the last supermajority link created by all validators. The red arrows are supermajority links that were only supported by the buggy client. Correctly working clients ignore the epoch with the invalid block (red). The first red arrow will justify the invalid epoch, and the second one finalizes it.</p>
<p>Now let’s assume that the bug has been fixed and the validators that finalized the invalid epoch would like to rejoin the correct chain <strong>B</strong>. In order to be able to finalize the chain, a first step is to justify epoch X:</p>
<p><img src="/assets/invalid-finalization-2.png" alt="" /></p>
<p>However, in order to participate in the justification of epoch X (which needs a supermajority link as indicated by the dashed green arrow), they would have to “jump over” the second red arrow – the one that finalized the invalid epoch. Voting for both of these links is a slashable offense.</p>
<p>This continues to be true for any later epoch. The only way it will get fixed is through the quadratic inactivity leak: As chain <strong>B</strong> grows, the locked-out validators will leak their funds until chain <strong>B</strong> can be justified and finalized by the correctly working clients.</p>
<h1 id="exponential-eip-1559-explainer">Exponential EIP-1559 explainer</h1>
<p><em>2022-03-16 · <a href="https://dankradfeist.de/ethereum/2022/03/16/exponential-eip1559">dankradfeist.de</a></em></p>
<p>In this blog post I will try to help understand how the exponential version of EIP-1559 works – the one that was suggested for the <a href="https://notes.ethereum.org/@vbuterin/blob_transactions">Shard Blob EIP</a>.</p>
<p>I’m not going to try to explain how the EIP-1559 mechanism works – good explainers already exist, for example <a href="https://barnabe.substack.com/p/congestion-control-and-EIP-1559">by Barnabé Monnot</a>.</p>
<h2 id="linear-eip-1559-mechanics-original-version">Linear EIP-1559 mechanics (“original version”)</h2>
<p>I will call the current implementation of <a href="https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1559.md">EIP-1559</a> the “linear version” of EIP-1559.</p>
<p>In the linear version, we define the constants</p>
<script type="math/tex; mode=display">T = 15{,}000{,}000 \text{ (Gas target)}\\
A = 8 \text{ (Max base fee change denominator)}</script>
<p>Each block <script type="math/tex">B_i</script> has a base fee of <script type="math/tex">b_i</script> and total gas consumed in the block of <script type="math/tex">g_i</script>. There is an update rule for the Basefee:</p>
<script type="math/tex; mode=display">b_{i+1} = b_i \cdot \left(1+ \frac{1}{A}\frac{g_i - T}{T}\right)</script>
<p>There is also a constraint that the maximum amount of gas per block can’t be more than <script type="math/tex">2 T</script>. However, this limit is not important in the scope of this post so I will ignore it.</p>
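<p>The linear update rule is a one-liner (a floating-point sketch; the actual implementation uses integer arithmetic):</p>

```python
T = 15_000_000  # gas target
A = 8           # max basefee change denominator

def next_basefee(b, gas_used):
    # Linear EIP-1559: the basefee moves by at most +/- 1/A per block,
    # proportionally to how far gas_used is from the target T.
    return b * (1 + (gas_used - T) / (A * T))

print(next_basefee(100.0, 2 * T))  # full block:  +12.5% -> 112.5
print(next_basefee(100.0, 0))      # empty block: -12.5% -> 87.5
print(next_basefee(100.0, T))      # on target:   unchanged -> 100.0
```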
<h2 id="exponential-eip-1559">Exponential EIP-1559</h2>
<p>One way to understand this “linear EIP-1559” better is to compute what happens after many updates. In particular, how does <script type="math/tex">b_n</script> depend on <script type="math/tex">b_0</script>? By substituting the equation into itself many times, we get</p>
<script type="math/tex; mode=display">b_{n} = b_0 \prod_{i=0}^{n-1} \left(1+ \frac{1}{A}\frac{g_i - T}{T}\right)</script>
<p>Now let’s assume that <script type="math/tex">A</script> is a large number, so that all the terms of the form <script type="math/tex">\frac{1}{A}\frac{g_i - T}{T}</script> are small. Let’s call <script type="math/tex">x_i = \frac{g_i - T}{T}</script>.</p>
<p>Under this assumption, we can use an approximation: the exponential function satisfies <script type="math/tex">e^x \approx 1+x</script> for small <script type="math/tex">x</script>. We apply this approximation in reverse, replacing <script type="math/tex">1+\frac{x_i}{A} = e^{\frac{x_i}{A}}</script>, to get</p>
<script type="math/tex; mode=display">b_{n} = b_0 \prod_{i=0}^{n-1} \left(1+ \frac{x_i}{A}\right) \approx b_0 \prod_{i=0}^{n-1} e^\frac{x_i}{A} = b_0 \exp\left(\frac{1}{A}\sum_{i=0}^{n-1}x_i\right)</script>
<p>For the last step, we have used the property that <script type="math/tex">e^x \cdot e^y = e^{x+y}</script>. Note that</p>
<script type="math/tex; mode=display">\frac{1}{A}\sum_{i=0}^{n-1}x_i = \frac{1}{TA}\sum_{i=0}^{n-1}(g_i - T) = \frac{1}{TA}\left(\sum_{i=0}^{n-1}g_i - nT\right)</script>
<p>so we get this new formula for <script type="math/tex">b_n</script></p>
<script type="math/tex; mode=display">b_{n} = b_0 \exp\left(\frac{1}{TA}\left(\sum_{i=0}^{n-1}g_i - nT\right)\right)= b_0 \exp\left(\frac{1}{TA}\left(G_n - T_n\right)\right)</script>
<p>where we used <script type="math/tex">G_n = \sum_{i=0}^{n-1}g_i</script> for the total gas used since block 0 and <script type="math/tex">T_n = nT</script> the cumulative gas target.</p>
<p>This is the exponential form of EIP-1559. The only things we need to keep track of to compute the current basefee <script type="math/tex">b_n</script> are (1) the total gas used, <script type="math/tex">G_n = \sum_{i=0}^{n-1}g_i</script>, and (2) the cumulative gas target, <script type="math/tex">T_n = nT</script> (so it’s enough to just count the number of blocks).</p>
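<p>A small numeric sketch (my own code, not from any client) showing that iterating the linear rule tracks the exponential form, and that the latter depends only on the cumulative excess G_n - T_n. A larger A than mainnet's 8 is used here so that the 1 + x ≈ e^x step is visibly tight:</p>

```python
import math
import random

# Toy check: iterating the linear rule tracks the exponential form, and the
# exponential form depends only on G_n - T_n. A is chosen large so that the
# 1 + x ~ e^x approximation is tight (mainnet uses A = 8).
T, A = 15_000_000, 200

def linear_basefee(b0, gas_per_block):
    b = b0
    for g in gas_per_block:
        b *= 1 + (g - T) / (A * T)
    return b

def exponential_basefee(b0, gas_per_block):
    G = sum(gas_per_block)        # total gas used, G_n
    T_n = T * len(gas_per_block)  # cumulative gas target, T_n = nT
    return b0 * math.exp((G - T_n) / (T * A))

random.seed(0)
blocks = [random.randint(0, 2 * T) for _ in range(200)]
lin = linear_basefee(10.0, blocks)
exp_form = exponential_basefee(10.0, blocks)
assert abs(lin - exp_form) / exp_form < 0.01   # the two forms nearly agree
# The exponential form only sees the cumulative excess, not the order of blocks:
random.shuffle(blocks)
assert exponential_basefee(10.0, blocks) == exp_form
```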
<h2 id="analyzing-the-exponential-form">Analyzing the exponential form</h2>
<p>Here is a question I have heard many times about exponential EIP-1559: isn’t the goal of EIP-1559 that, in the long term, the total gas used <script type="math/tex">G_n</script> equals the gas target <script type="math/tex">T_n</script>?</p>
<p>And if so, if you evaluate the equation for <script type="math/tex">G_n = T_n</script>, the exponential becomes one and thus <script type="math/tex">b_n = b_0</script>. So if the target is achieved, wouldn’t the basefee always be <script type="math/tex">b_0</script> (which is typically very low)?</p>
<p>Here I will explain why this is not the case. Let’s see how this happens in linear EIP-1559 first, using a very simple economic model: assume there is a “fair price” for gas at price <script type="math/tex">p</script>. For simplicity, assume that below <script type="math/tex">p</script>, demand is infinite: all blocks will be filled. Above <script type="math/tex">p</script>, blocks will be empty. And when the base fee is exactly equal to <script type="math/tex">p</script>, there will be exactly enough demand to fill the blocks halfway.</p>
<p>So what will happen in this model in linear EIP-1559?</p>
<ul>
<li>We assume that the base fee starts off low, with <script type="math/tex">% <![CDATA[
b_0 < p %]]></script>.</li>
<li>Then there will be a number <script type="math/tex">n_0</script> of blocks where the basefee is lower than <script type="math/tex">p</script>, all of which will be completely full by our assumption.</li>
<li>When the basefee reaches <script type="math/tex">p</script> at block <script type="math/tex">n_0</script>, all further blocks will be filled exactly to target.</li>
</ul>
<p>To keep things simple, say the max block size is exactly <script type="math/tex">2T</script>. Then how much gas has been used at any block <script type="math/tex">n>n_0</script>? It would be <script type="math/tex">G_n = 2 n_0 T + (n - n_0) T = n_0 T + n T = T_n + n_0 T</script>. So it is actually not exactly the target, despite the EIP-1559 mechanism now being in equilibrium.</p>
<p>What happened? Note that it is still correct to say that EIP-1559 will ensure <script type="math/tex">G_n \approx T_n</script>; for example, if you take the fraction <script type="math/tex">\frac{G_n}{T_n}</script>, it will tend to <script type="math/tex">1</script> so in asymptotic notation we would write <script type="math/tex">G_n \sim T_n</script>.</p>
<p>But there is a constant difference between <script type="math/tex">G_n</script> and <script type="math/tex">T_n</script>, which is the amount of gas that was needed to shift the basefee from <script type="math/tex">b_0</script> to <script type="math/tex">p</script>.</p>
<p>More generally, the difference between <script type="math/tex">G_n</script> and <script type="math/tex">T_n</script> at any time is determined by the current basefee. In the linear version, this is approximately true; in the exponential version, it is exactly true (it follows directly from the definition <script type="math/tex">b_{n} = b_0 \exp\left(\frac{1}{TA}\left(G_n - T_n\right)\right)</script>).</p>
<h2 id="graphical-illustration">Graphical illustration</h2>
<p>First let’s look at our simplified example. The following graph illustrates what is happening: Above you see the relation between total gas used and the basefee (an exponential function). Once in equilibrium, the basefee is at <script type="math/tex">p</script> which means that <script type="math/tex">G_n-T_n = n_0 T</script>. Below we see how this adds up. The green shaded area of gas used is the blocks filled up to the target, and the orange area is the gas used above the target that sums up to <script type="math/tex">n_0 T</script>.
<img src="/assets/eip1559-1.png" alt="" /></p>
<p>Next we look at a generic example, where some blocks are filled with more than the target gas and some with less. In this case, we sum up all the gas consumed above the target (marked in green) and subtract the gas below the target in underfull blocks (marked with red and white diagonal stripes). We can then read off the sum on the <script type="math/tex">x</script> axis of the basefee relation to determine the basefee <script type="math/tex">b_n</script>.
<img src="/assets/eip1559-2.png" alt="" /></p>
<h2 id="appendix-using-differential-equations-to-derive-the-exponential-form">Appendix: Using differential equations to derive the exponential form</h2>
<p>Here is another way of deriving the exponential form, using differential equations:</p>
<p>Let’s start from the update rule of linear EIP-1559</p>
<script type="math/tex; mode=display">b_{i+1} = b_i \cdot \left(1+ \frac{1}{A}\frac{g_i - T}{T}\right)</script>
<p>We can rewrite this as</p>
<script type="math/tex; mode=display">b_{i+1} - b_i = b_i \frac{1}{A}\frac{g_i - T}{T}</script>
<p>This is mathematically a <a href="https://en.wikipedia.org/wiki/Linear_recurrence_with_constant_coefficients">difference equation</a>, which is similar to a differential equation but for finite differences. However, we can “approximate” it as a differential equation by turning <script type="math/tex">i</script> into a continuous variable and writing</p>
<script type="math/tex; mode=display">b'(i) = b(i) \frac{1}{A}\frac{g(i) - T}{T}</script>
<p>where we use that <script type="math/tex">b(i+1)-b(i) \approx b'(i)</script>. You could say that this form arises naturally if you imagine making the blocks smaller and smaller, scaling the target accordingly. As the block size goes to zero, we get the differential equation.</p>
<p>This is a linear ordinary differential equation of first order that can be solved by moving the terms depending on <script type="math/tex">b</script> to one side:</p>
<script type="math/tex; mode=display">\frac{b'(i)}{b(i)} = \frac{1}{A}\frac{g(i) - T}{T}</script>
<p>Integrating on both sides yields:</p>
<script type="math/tex; mode=display">\int_0^n\frac{b'(i)}{b(i)}\,\mathrm{d}i = \ln b(n) - \ln b(0) = \int_0^n \frac{1}{A}\frac{g(i) - T}{T}\,\mathrm{d}i</script>
<p>This is because</p>
<script type="math/tex; mode=display">\frac{\mathrm{d}}{\mathrm{d}i} \ln b(i) = \frac{b'(i)}{b(i)}</script>
<p>So, exponentiating and using the initial condition <script type="math/tex">b(0)=b_0</script>, we get</p>
<script type="math/tex; mode=display">b(n) = b_0\exp\left(\int_0^n \frac{1}{A}\frac{g(i) - T}{T}\,\mathrm{d}i\right)</script>
<p>This looks almost the same as what we derived above – except that we now have an integral instead of a sum, because we are working with the continuous form.</p>
<p>The exponential version thus arises naturally when you consider very small blocks and update the basefee each time. From this perspective, it is the most natural way of implementing EIP-1559.</p>Exponential EIP-1559 explainerInner Product Arguments2021-11-18T17:00:00+00:002021-11-18T17:00:00+00:00https://dankradfeist.de/ethereum/2021/11/18/inner-product-arguments-mandarin<p>Original article: <a href="/ethereum/2021/07/27/inner-product-arguments.html">Inner Product Arguments</a></p>
<p>Translation: Star.LI @ Trapdoor Tech</p>
<h2 id="介绍">Introduction</h2>
<p>You may have heard of “Bulletproofs”: a zero-knowledge proof system that does not require a trusted setup. It is used, for example, by Monero. At the core of this proof system lies the inner product argument<sup id="fnref:2"><a href="#fn:2" class="footnote">1</a></sup>, a neat trick that lets a prover convince a verifier of the correctness of an “inner product”, i.e. the sum of the element-wise products of two vectors:</p>
<script type="math/tex; mode=display">\begin{align*} \vec a \cdot \vec b = a_0 b_0 + a_1 b_1 + a_2 b_2 + \cdots + a_{n-1} b_{n-1}
\end{align*}</script>
<p>where <script type="math/tex">\vec a = (a_0, a_1, \ldots, a_{n-1})</script>, <script type="math/tex">\vec b = (b_0, b_1, \ldots, b_{n-1})</script>.</p>
<p>A particularly interesting special case arises when we set the vector <script type="math/tex">\vec b</script> to the powers of some <script type="math/tex">z</script>, i.e. <script type="math/tex">\vec b = (1, z, z^2, \ldots, z^{n-1})</script>: the inner product then becomes the evaluation of the polynomial</p>
<script type="math/tex; mode=display">\begin{align*} f(X) = \sum_{i=0}^{n-1} a_i X^i
\end{align*}</script>
<p>at the point <script type="math/tex">z</script>.</p>
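<p>As a tiny sanity check (my own sketch, plain modular arithmetic), the inner product of the coefficient vector with the powers of z is exactly the polynomial evaluation:</p>

```python
# Sketch: the inner product of coefficients with (1, z, z^2, ...) equals f(z).
p = 2**61 - 1  # any prime modulus works for this illustration

def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

a = [3, 1, 4, 1, 5]   # coefficients of f(X) = 3 + X + 4X^2 + X^3 + 5X^4
z = 7
b = [pow(z, i, p) for i in range(len(a))]   # b = (1, z, z^2, ...)
f_at_z = sum(c * pow(z, i, p) for i, c in enumerate(a)) % p
assert inner_product(a, b) == f_at_z == 12554   # f(7) = 12554
```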
<p>The inner product argument works with <em>Pedersen commitments</em>. I previously wrote an introduction to <a href="/ethereum/2021/10/13/kate-polynomial-commitments-mandarin.html">KZG commitments</a>; Pedersen commitments are similar in that the commitment is also an elliptic curve point, but they do not need a trusted setup. Here is a comparison of the two polynomial commitment schemes (PCS), KZG versus Pedersen commitments combined with an inner product argument:</p>
<table>
<thead>
<tr>
<th> </th>
<th>Pedersen+IPA</th>
<th>KZG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Security assumption</td>
<td>Discrete logarithm</td>
<td>Bilinear groups</td>
</tr>
<tr>
<td>Trusted setup</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Commitment size</td>
<td>1 group element</td>
<td>1 group element</td>
</tr>
<tr>
<td>Proof size</td>
<td>2 log n group elements</td>
<td>1 group element</td>
</tr>
<tr>
<td>Verification</td>
<td>O(n) group operations</td>
<td>1 pairing</td>
</tr>
</tbody>
</table>
<p>At the end of the day, this commitment scheme is somewhat less efficient than KZG. The proofs are larger (<script type="math/tex">O(\log n)</script>), though logarithms are small, so this is not too bad. More unfortunately, the verifier has to do a linear amount of computation, so verification is not succinct. These limitations make Pedersen commitments impractical for some applications, but in certain settings the drawbacks can be worked around.</p>
<ul>
<li>One example is the <a href="/ethereum/2021/06/18/pcs-multiproofs.html">multiproofs</a> technique I described in an earlier post. The trick there is that many openings can be aggregated into a single one.</li>
<li>Another is Halo2 <sup id="fnref:1"><a href="#fn:1" class="footnote">2</a></sup>, where the linear cost of many openings can be amortized.</li>
</ul>
<p>In both of these examples, the cost of many openings is amortized. If you only ever want to open a single polynomial, this is harder, and you have to bear the full linear cost of that opening.</p>
<p>The big advantage of Pedersen commitments combined with inner product arguments, however, is the weaker set of security assumptions: no pairings, and no trusted setup.</p>
<h2 id="pedersen-承诺">Pedersen commitments</h2>
<p>Before we discuss the inner product argument, let’s look at the structure it relies on: Pedersen commitments. To use them, we need an elliptic curve <script type="math/tex">G</script>. Let’s first recall what we can do with elliptic curve points (I will use additive notation here, which feels more natural):</p>
<ol>
<li>You can add two curve points <script type="math/tex">g_0 \in G</script> and <script type="math/tex">g_1 \in G</script>: <script type="math/tex">h = g_0 + g_1</script></li>
<li>You can multiply an element <script type="math/tex">g \in G</script> by a scalar <script type="math/tex">a \in \mathbb F_p</script>, where <script type="math/tex">p</script> is the order (i.e. the number of elements) of the curve <script type="math/tex">G</script>: <script type="math/tex">h=ag</script></li>
</ol>
<p>There is no way to compute a “product” of two curve elements: the operation “<script type="math/tex">h * h</script>” is undefined, so you cannot compute “<script type="math/tex">h * h = a g * a g = a^2 g</script>”; multiplying by a scalar, in contrast, is easy, e.g. <script type="math/tex">2 h = 2 a g</script>.</p>
<p>Another important property is that there is no efficient algorithm for computing “discrete logarithms”: given <script type="math/tex">h</script> and <script type="math/tex">g</script> with <script type="math/tex">h = a g</script>, if you do not already know <script type="math/tex">a</script>, then <script type="math/tex">a</script> cannot be computed. We call <script type="math/tex">a</script> the discrete logarithm of <script type="math/tex">h</script> with respect to <script type="math/tex">g</script>.</p>
<p>Pedersen commitments exploit this hardness to build a commitment scheme. Say we have two points <script type="math/tex">g_0</script> and <script type="math/tex">g_1</script> whose relative discrete logarithm (i.e. the <script type="math/tex">x \in \mathbb F_p</script> such that <script type="math/tex">g_1 = x g_0</script>) is unknown. Then we can commit to two numbers <script type="math/tex">a_0, a_1 \in \mathbb F_p</script> via:</p>
<script type="math/tex; mode=display">\begin{align*}
C = a_0 g_0 + a_1 g_1
\end{align*}</script>
<p><script type="math/tex">C</script> is an element of the elliptic curve <script type="math/tex">G</script>.</p>
<p>To open the commitment, the prover gives the verifier <script type="math/tex">a_0</script> and <script type="math/tex">a_1</script>; the verifier then computes <script type="math/tex">C</script> and accepts if it matches.</p>
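<p>Here is a toy sketch of the scheme. Real Pedersen commitments live on an elliptic curve; as a stand-in, this sketch uses the order-11 subgroup of the multiplicative group mod 23, which has the same structure but is of course completely insecure:</p>

```python
# Toy Pedersen commitment. Real schemes use an elliptic curve; here the
# order-11 subgroup of Z_23^* stands in (same structure, utterly insecure
# parameters, purely illustrative). "Addition" of group elements is
# multiplication mod q, "scalar multiplication" is exponentiation.
q, p = 23, 11    # modulus and (prime) subgroup order
g0, g1 = 4, 9    # two subgroup elements; pretend their relative dlog is unknown

def commit(a0, a1):
    """C = a0*g0 + a1*g1 in the additive notation of the text."""
    return pow(g0, a0 % p, q) * pow(g1, a1 % p, q) % q

def verify(C, a0, a1):
    return C == commit(a0, a1)

C = commit(3, 5)
assert verify(C, 3, 5) and not verify(C, 3, 4)
# Additive homomorphism: adding commitments commits to the sum of the vectors.
D = commit(2, 7)
assert C * D % q == commit(3 + 2, 5 + 7)
```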
<p>The central property of a commitment scheme is that it is <em>binding</em>: given <script type="math/tex">C=a_0 g_0 + a_1 g_1</script>, can a cheating prover produce <script type="math/tex">b_0, b_1 \in \mathbb F_p</script> that the verifier would also accept, i.e. with <script type="math/tex">C = b_0 g_0 + b_1 g_1</script> but <script type="math/tex">b_0, b_1 \not= a_0, a_1</script>?</p>
<p>If someone could do this, they could also compute a discrete logarithm. Why? We would have <script type="math/tex">a_0 g_0 + a_1 g_1 = b_0 g_0 + b_1 g_1</script>, which rearranges to</p>
<script type="math/tex; mode=display">\begin{align*} (a_0 - b_0) g_0 = (b_1 - a_1) g_1
\end{align*}</script>
<p>Now <script type="math/tex">a_0 − b_0</script> and <script type="math/tex">b_1 − a_1</script> cannot both be zero. Say <script type="math/tex">a_0 − b_0</script> is nonzero; then we get</p>
<script type="math/tex; mode=display">\begin{align*} g_0 = \frac{b_1 - a_1}{a_0 - b_0} g_1 = x g_1
\end{align*}</script>
<p>with <script type="math/tex">x = \frac{b_1 - a_1}{a_0 - b_0}</script>, and thus we have found the value of <script type="math/tex">x</script>. Since we know this problem is hard, no real-world attacker can do this.</p>
<p>This means that it is computationally infeasible for an attacker to find a different opening <script type="math/tex">b_0 , b_1</script> for the commitment <script type="math/tex">C</script>. (Such openings do exist; they just cannot be computed, much like hash collisions.)</p>
<p>We can extend this to commit to a vector, i.e. a list of scalars <script type="math/tex">a_0, a_1, \ldots, a_{n-1} \in \mathbb F_p</script>. We just need a “basis”: an equal number of group elements with unknown relative discrete logarithms. Then we can compute the commitment:</p>
<script type="math/tex; mode=display">\begin{align*} C = a_0 g_0 + a_1 g_1 + a_2 g_2 + \ldots + a_{n-1} g_{n-1}
\end{align*}</script>
<p>This gives us a vector commitment, albeit a clumsy one: to open any single element, we have to open all of them. But it has one important property: the scheme is additively homomorphic. If we have another commitment <script type="math/tex">D = b_0 g_0 + b_1 g_1 + b_2 g_2 + \ldots + b_{n-1} g_{n-1}</script>, then adding the two commitments gives a commitment to the sum of the vectors <script type="math/tex">\vec a</script> and <script type="math/tex">\vec b</script>:</p>
<script type="math/tex; mode=display">\begin{align*} C + D = (a_0 + b_0) g_0 + (a_1 + b_1) g_1 + (a_2 + b_2) g_2 + \ldots + (a_{n-1} + b_{n-1}) g_{n-1}
\end{align*}</script>
<p>This additive homomorphism is what makes this vector commitment useful.</p>
<h2 id="内积证明">Inner product arguments</h2>
<p>The basic strategy of the inner product argument is “divide and conquer”: instead of trying to solve the whole problem in one step, we reduce it to several smaller problems of the same type. Once the subproblems are small enough, they become trivial to solve.</p>
<p>At each step, the size of the problem is halved, which guarantees that after <script type="math/tex">\log n</script> steps the problem size is reduced to 1 and can be finished with a simple proof.</p>
<p>Assume the commitment <script type="math/tex">C</script> we want to prove something about has the following form:</p>
<script type="math/tex; mode=display">\begin{align*} C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q
\end{align*}</script>
<p>where <script type="math/tex">\vec g = (g_0, g_1, \ldots, g_{n-1}), \vec h = (h_0, h_1, \ldots, h_{n-1})</script> and <script type="math/tex">q</script> are our “basis”: they are elements of the group <script type="math/tex">G</script>, and none of their relative discrete logarithms are known to any party. We also introduce a new notation, <script type="math/tex">\vec a \cdot \vec g</script>, the product of a vector of scalars (<script type="math/tex">\vec a</script>) with a vector of group elements (<script type="math/tex">\vec g</script>), which we define as</p>
<script type="math/tex; mode=display">\begin{align*} \vec a \cdot \vec g = a_0 g_0 + a_1 g_1 + \cdots + a_{n-1} g_{n-1}
\end{align*}</script>
<p>In other words, we want to prove that <script type="math/tex">C</script> is a commitment to</p>
<ul>
<li>the vector <script type="math/tex">\vec a</script> with basis <script type="math/tex">\vec g</script></li>
<li>the vector <script type="math/tex">\vec b</script> with basis <script type="math/tex">\vec h</script></li>
<li>the inner product <script type="math/tex">\vec a \cdot \vec b</script> with basis <script type="math/tex">q</script>.</li>
</ul>
<p>On its own this does not seem very useful: in most applications we want the verifier to learn <script type="math/tex">\vec a \cdot \vec b</script>, not merely have the result hidden inside a commitment. This can be fixed with a small trick that I will come to later.</p>
<h2 id="证明">The proof</h2>
<p>We want the prover to convince the verifier that <script type="math/tex">C</script> indeed has the form <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script>. As mentioned, instead of proving this directly, we reduce the problem: we show that if this property holds for a new commitment <script type="math/tex">C′</script>, then it also holds for <script type="math/tex">C</script>.</p>
<p>The prover then plays a little game with the verifier: the prover commits to some information, the verifier issues a challenge, and from these the next commitment <script type="math/tex">C′</script> is derived. Calling it a game does not mean the proof has to be interactive: the Fiat–Shamir heuristic lets us make it non-interactive by replacing each challenge with a collision-resistant hash of the commitments.</p>
<h3 id="证明描述">Statement</h3>
<p>The commitment <script type="math/tex">C</script> has the form <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script> with respect to the basis <script type="math/tex">\vec g, \vec h, q</script>. We will say that such a <script type="math/tex">C</script> has the “inner product property”.</p>
<h3 id="规约步骤">Reduction step</h3>
<p>Let <script type="math/tex">m = \frac{n}{2}</script>. The prover computes</p>
<script type="math/tex; mode=display">\begin{align*} z_L = a_m b_0 + a_{m+1} b_1 + \cdots + a_{n-1} b_{m-1} = \vec a_R \cdot \vec b_L \\ z_R = a_0 b_m + a_{1} b_{m+1} + \cdots + a_{m-1} b_{n-1} = \vec a_L \cdot \vec b_R
\end{align*}</script>
<p>Here we write <script type="math/tex">\vec a_L</script> for the “left half” of the vector <script type="math/tex">\vec a</script> and <script type="math/tex">\vec a_R</script> for its “right half”, and similarly for <script type="math/tex">\vec b</script>.</p>
<p>The prover then computes the commitments</p>
<script type="math/tex; mode=display">\begin{align*} C_L = \vec a_R \cdot \vec g_L + \vec b_L \cdot \vec h_R + z_L q \\ C_R = \vec a_L \cdot \vec g_R + \vec b_R \cdot \vec h_L + z_R q
\end{align*}</script>
<p>and sends them to the verifier. The verifier responds with a challenge <script type="math/tex">x \in \mathbb F_p</script> (using Fiat–Shamir to make this non-interactive means that <script type="math/tex">x</script> is a hash of <script type="math/tex">C_L</script> and <script type="math/tex">C_R</script>). From this, the prover computes the updated vectors</p>
<script type="math/tex; mode=display">\begin{align*} \vec a' = \vec a_L + x \vec a_R \\ \vec b' = \vec b_L + x^{-1} \vec b_R
\end{align*}</script>
<p>which have half the length of the original vectors.</p>
<p>The verifier now computes the new commitment</p>
<script type="math/tex; mode=display">\begin{align*} C' = x C_L + C + x^{-1} C_R
\end{align*}</script>
<p>as well as the updated basis</p>
<script type="math/tex; mode=display">\begin{align*} \vec g' = \vec g_L + x^{-1} \vec g_R \\ \vec h' = \vec h_L + x \vec h_R
\end{align*}</script>
<p>Now, if the new commitment <script type="math/tex">C′</script> has the form <script type="math/tex">C' = \vec a' \cdot \vec g' +\vec b' \cdot \vec h' + \vec a' \cdot \vec b' q</script>, then the original commitment <script type="math/tex">C</script> has the “inner product property” we assumed at the start.</p>
<p>All the vectors have halved in size, which brings us one step closer to success. We now set <script type="math/tex">C:=C'</script>, <script type="math/tex">\vec g := \vec g'</script>, <script type="math/tex">\vec h := \vec h'</script> and repeat the step.</p>
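<p>One round of the reduction can be checked numerically over a prime field, with plain field elements standing in for the curve points (a sketch, not an implementation):</p>

```python
import random

# One reduction round over F_p, with scalars standing in for group elements.
# Verifies the key identity a'.b' = x*z_L + a.b + x^{-1}*z_R.
p = 2**61 - 1
random.seed(1)

def ip(u, v):
    return sum(x * y for x, y in zip(u, v)) % p

n = 8
a = [random.randrange(p) for _ in range(n)]
b = [random.randrange(p) for _ in range(n)]
m = n // 2
aL, aR, bL, bR = a[:m], a[m:], b[:m], b[m:]
z_L, z_R = ip(aR, bL), ip(aL, bR)

x = random.randrange(1, p)
x_inv = pow(x, -1, p)                       # modular inverse (Python 3.8+)
a_new = [(l + x * r) % p for l, r in zip(aL, aR)]
b_new = [(l + x_inv * r) % p for l, r in zip(bL, bR)]
assert ip(a_new, b_new) == (x * z_L + ip(a, b) + x_inv * z_R) % p
```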
<p>Next I will explain the mathematics of why this works; I also recommend looking at the nice <a href="https://twitter.com/VitalikButerin/status/1371844878968176647">visualization</a> Vitalik made to get some intuition.</p>
<h3 id="最终步骤">Final step</h3>
<p>As we keep repeating the reduction step, <script type="math/tex">n</script> halves each time. Eventually we reach <script type="math/tex">n=1</script> and can stop. At this point the prover sends the vectors <script type="math/tex">\vec a</script> and <script type="math/tex">\vec b</script>, which are in fact just two scalars, and the verifier can simply compute</p>
<script type="math/tex; mode=display">\begin{align*} D = a g + b h + a b q
\end{align*}</script>
<p>and accept if it equals <script type="math/tex">C</script>, rejecting otherwise.</p>
<h3 id="正确性correctness-及合理性soundness">Correctness and soundness</h3>
<p>Above I assumed that if <script type="math/tex">C′</script> has the required form, then so does <script type="math/tex">C</script>. Now I want to show why this reasoning holds. We need to verify two things:</p>
<ul>
<li><em>Correctness</em>: a prover who follows the protocol succeeds in convincing the verifier of a true statement;</li>
<li><em>Soundness</em>: a cheating prover cannot get a false proof past the verifier, except with negligible probability.</li>
</ul>
<p>Let’s start with correctness, so assume the prover performs every step as prescribed. Then, with respect to the basis <script type="math/tex">\vec g, \vec h, q</script>, we have <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script>, and we want to show <script type="math/tex">C'= \vec a' \cdot \vec g' +\vec b' \cdot \vec h' + \vec a' \cdot \vec b' q</script>.</p>
<p>The verifier computes <script type="math/tex">C' = x C_L + C + x^{-1} C_R</script>:</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{eqnarray} C' & = & x C_L + C + x^{-1} C_R \\ & = & x ( \vec a_R \cdot \vec g_L + \vec b_L \cdot \vec h_R + z_L q) \\ & & + \vec a_L \cdot \vec g_L + \vec a_R \cdot \vec g_R + \vec b_L \cdot \vec h_L + \vec b_R \cdot \vec h_R + \vec a \cdot \vec b q \\ & & + x^{-1} (\vec a_L \cdot \vec g_R + \vec b_R \cdot \vec h_L + z_R q) \\ & = & (x \vec a_R + \vec a_L)\cdot(\vec g_L + x^{-1} \vec g_R) \\ & & + (\vec b_L + x^{-1} \vec b_R)\cdot(\vec h_L + x \vec h_R) \\ & & + (x z_L + \vec a \cdot \vec b + x^{-1} z_R) q \\ &=& (x \vec a_R + \vec a_L)\cdot \vec g' + (\vec b_L + x^{-1} \vec b_R)\cdot \vec h' + (x z_L + \vec a \cdot \vec b + x^{-1} z_R) q
\end{eqnarray} %]]></script>
<p>For the commitment to have the inner product property, we need <script type="math/tex">(x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) = x z_L + \vec a \cdot \vec b + x^{-1} z_R</script>. This equation holds because</p>
<script type="math/tex; mode=display">% <![CDATA[
\begin{eqnarray} (x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) & = & x \vec a_R \cdot \vec b_L + \vec a_L \cdot \vec b_L + \vec a_R \cdot \vec b_R + x^{-1} \vec a_L \cdot \vec b_R \\ & = & x z_L + \vec a \cdot \vec b + x^{-1} z_R \end{eqnarray} %]]></script>
<p>which proves correctness. For soundness, we need to show that if the prover’s initial commitment <script type="math/tex">C</script> does not have the inner product property, then the reduction step cannot produce a commitment <script type="math/tex">C'</script> that does.</p>
<p>Suppose the prover committed to <script type="math/tex">C=\vec a \cdot \vec g + \vec b \cdot \vec h + r q</script> with <script type="math/tex">r \neq \vec a \cdot \vec b</script>. Walking through the reduction step above, we get</p>
<script type="math/tex; mode=display">\begin{align*} C' = (x \vec a_R + \vec a_L)\cdot \vec g' + (\vec b_L + x^{-1} \vec b_R)\cdot \vec h' + (x z_L + r + x^{-1} z_R) q
\end{align*}</script>
<p>Now suppose the prover succeeds in cheating, so that <script type="math/tex">C′</script> has the inner product property. Then</p>
<script type="math/tex; mode=display">\begin{align*} (x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) = x z_L + r + x^{-1} z_R
\end{align*}</script>
<p>Expanding the left-hand side gives</p>
<script type="math/tex; mode=display">\begin{align*} x \vec a_R \cdot \vec b_L + \vec a \cdot \vec b + x^{-1} \vec a_L \cdot \vec b_R = x z_L + r + x^{-1} z_R
\end{align*}</script>
<p>Note that the prover is free to choose <script type="math/tex">z_L</script> and <script type="math/tex">z_R</script>, so we cannot simply assume they were computed as defined above.</p>
<p>Multiplying both sides by <script type="math/tex">x</script> and moving everything to one side, we get a quadratic equation in <script type="math/tex">x</script>:</p>
<script type="math/tex; mode=display">\begin{align*} x^2 ( \vec a_R \cdot \vec b_L - z_L) + x (\vec a \cdot \vec b - r) + (\vec a_L \cdot \vec b_R - z_R )
\end{align*}</script>
<p>Unless all its coefficients are zero, this equation has at most two solutions <script type="math/tex">x \in \mathbb F_p</script>. But the verifier chooses <script type="math/tex">x</script> only after the prover has committed to their values of <script type="math/tex">r</script>, <script type="math/tex">z_L</script> and <script type="math/tex">z_R</script>, so the probability that the prover cheats successfully is tiny: the field <script type="math/tex">\mathbb F_p</script> is typically of size around <script type="math/tex">2^{256}</script>, so when the prover deviates from the protocol, the chance that the verifier happens to pick an <script type="math/tex">x</script> that makes the equation hold is negligible.</p>
<p>This completes the soundness proof.</p>
<h3 id="仅在最后计算基变化">Computing the basis change only at the end</h3>
<p>In each round, the verifier needs to do two things: compute the challenge <script type="math/tex">x</script>, and compute the updated basis <script type="math/tex">\vec g'</script> and <script type="math/tex">\vec h'</script>. But updating the basis in every round is inefficient; instead, the verifier can simply store the challenges <script type="math/tex">x_1 , x_2 \dots x_k</script> they encounter over the <script type="math/tex">k</script> rounds.</p>
<p>Say that after <script type="math/tex">k</script> rounds, the bases are <script type="math/tex">\vec g_k, \vec h_k</script>. These are single group elements (vectors of length one), since we stop the protocol once the length reaches one. Computing <script type="math/tex">\vec g_k</script> from <script type="math/tex">\vec g_0</script> is a multiscalar multiplication (MSM) of length <script type="math/tex">n</script> on the elliptic curve; the scalar factors on <script type="math/tex">\vec g_0</script> are the coefficients of the polynomial</p>
<script type="math/tex; mode=display">\begin{align*} f_g(X) = \prod_{j=0}^{k-1} \left(1+x^{-1}_{k-j} X^{2^{j}}\right)
\end{align*}</script>
<p>and the scalar factors on <script type="math/tex">\vec h_0</script> are given by the polynomial</p>
<script type="math/tex; mode=display">\begin{align*} f_h(X) = \prod_{j=0}^{k-1} \left(1+x_{k-j} X^{2^{j}}\right)
\end{align*}</script>
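<p>With scalars standing in for the group elements, we can check (toy sketch, my own code) that folding the basis round by round agrees with a single multiscalar multiplication by the coefficients of f_g:</p>

```python
import random

# Sketch, with scalars standing in for the group elements g_i: folding the
# basis with challenges x_1..x_k equals one MSM by the coefficients of
# f_g(X) = prod_{j=0}^{k-1} (1 + x_{k-j}^{-1} X^{2^j}).
p = 2**61 - 1
random.seed(2)
k = 3
g = [random.randrange(p) for _ in range(2**k)]
xs = [random.randrange(1, p) for _ in range(k)]   # challenges, in round order

folded = g[:]
for x in xs:                                      # g' = g_L + x^{-1} g_R
    inv = pow(x, -1, p)
    m = len(folded) // 2
    folded = [(folded[i] + inv * folded[m + i]) % p for i in range(m)]

coeffs = [1]                                      # expand f_g factor by factor
for j in range(k):
    inv = pow(xs[k - 1 - j], -1, p)
    half = 1 << j
    coeffs = [coeffs[i % half] * (inv if i >= half else 1) % p
              for i in range(2 * half)]

assert folded == [sum(c * gi for c, gi in zip(coeffs, g)) % p]
```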
<h2 id="使用内积证明来验证多项式值">Using the inner product argument to prove polynomial evaluations</h2>
<p>For our main application, proving the evaluation of <script type="math/tex">f(x) = \sum_{i=0}^{n-1} a_i x^i</script> at <script type="math/tex">z</script>, we need to extend the protocol slightly.</p>
<ul>
<li>Most importantly, we want to verify the value <script type="math/tex">f(z) = \vec a \cdot \vec b</script>, not merely that the commitment <script type="math/tex">C</script> has the “inner product property”.</li>
<li>The vector <script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script> is known to the verifier, so we can simplify the protocol by removing this part from the commitment.</li>
</ul>
<h3 id="如何构造承诺">How to construct the commitment</h3>
<p>If we want to prove an evaluation of the polynomial <script type="math/tex">f(x) = \sum_{i=0}^{n-1} a_i x^i</script>, we typically start from a commitment <script type="math/tex">F = \vec a \cdot \vec g</script>. The prover can send the claimed evaluation <script type="math/tex">y=f(z)</script> to the verifier.</p>
<p>It then seems the verifier could compute the initial commitment <script type="math/tex">C=\vec a \cdot \vec g + \vec b \cdot \vec h + \vec a \cdot \vec b q = F + \vec b \cdot \vec h + f(z) q</script>, since they know <script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script>, and then run the argument.</p>
<p>But wait. In most settings, <script type="math/tex">F</script> is a commitment produced by the prover, and a malicious prover could cheat here by committing to, say, <script type="math/tex">F = \vec a \cdot \vec g + tq</script>. In that case, because of the offset in their commitment, they would be able to pass verification for <script type="math/tex">f(z) = y - t</script>.</p>
<p>To prevent this, we change the protocol slightly: after receiving the commitment <script type="math/tex">F</script> and the claimed evaluation <script type="math/tex">y</script>, the verifier picks a scalar <script type="math/tex">w</script> and rescales the basis element <script type="math/tex">q:=wq</script>; the argument then proceeds as before. Because the prover cannot predict <script type="math/tex">w</script>, they cannot successfully claim any value other than <script type="math/tex">f(z)</script> (except with negligible probability).</p>
<p>Note that for a general inner product we would also have to prevent the prover from manipulating the vector <script type="math/tex">\vec b</script>; but in the polynomial evaluation application, the <script type="math/tex">\vec b</script> part can be removed entirely, so I will skip the details.</p>
<h3 id="如何去掉第二个向量">How to remove the second vector</h3>
<p>Note that for polynomial evaluation, the verifier knows the vector <script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script>. Given the challenges <script type="math/tex">x_0, x_1, \ldots, x_\ell</script>, they can compute the final <script type="math/tex">b_\ell</script> directly using the trick from the section “Computing the basis change only at the end”.</p>
<p>So we can remove the second vector from all commitments and only compute <script type="math/tex">b_\ell</script>. This means the verifier must be able to compute the final <script type="math/tex">b_\ell</script> from the initial vector <script type="math/tex">\vec b_0 = (1, z, z^2, ..., z^{n-1})</script>. Since <script type="math/tex">\vec b</script> is reduced in the same way as the basis vector <script type="math/tex">\vec g</script>, the linear combination is given by the coefficients of the polynomial <script type="math/tex">f_g</script> defined above; in other words, <script type="math/tex">b_\ell=f_g(z)</script>.</p>
<h3 id="针对点值形式多项式的ipa">IPA for polynomials in evaluation form</h3>
<p>So far, we have used the inner product argument to evaluate a polynomial committed via its coefficients, i.e. the <script type="math/tex">f_i</script> in <script type="math/tex">f(X) = \sum_{i=0}^{n-1} f_i X^i</script>. In many cases, however, we want a polynomial defined by its evaluations on a given domain <script type="math/tex">x_0, x_1, \ldots, x_{n-1}</script>. Since any polynomial of degree at most <script type="math/tex">n−1</script> is uniquely determined by the evaluations <script type="math/tex">f(x_0), f(x_1), \ldots, f(x_{n-1})</script>, the two representations are equivalent. But converting between them is computationally expensive: it costs <script type="math/tex">O(n \log n)</script> operations if the domain supports fast Fourier transforms, and <script type="math/tex">O(n^2)</script> otherwise.</p>
<p>To avoid this overhead, we try to avoid the coefficient form altogether. We can do this by committing to the evaluations of <script type="math/tex">f</script> rather than to its coefficients:</p>
<script type="math/tex; mode=display">\begin{align*}
C = f(x_0) g_0 + f(x_1) g_1 + \cdots + f(x_{n-1}) g_{n-1}
\end{align*}</script>
<p>This means the vector <script type="math/tex">\vec a</script> of our IPA has the form <script type="math/tex">\vec a = (f(x_0), f(x_1), \ldots, f(x_{n-1}))</script>.</p>
<p>The <a href="/ethereum/2021/06/18/pcs-multiproofs.html#evaluating-a-polynomial-in-evaluation-form-on-a-point-outside-the-domain">barycentric formula</a> lets us evaluate this committed polynomial:</p>
<script type="math/tex; mode=display">\begin{align*}
f(z) = A(z)\sum_{i=0}^{n-1} \frac{f(x_i)}{A'(x_i)} \frac{1}{z-x_i}
\end{align*}</script>
<p>If we choose the vector <script type="math/tex">\vec b</script> as</p>
<script type="math/tex; mode=display">\begin{align*} b_i = \frac{A(z)}{A'(x_i)} \frac{1}{z-x_i}
\end{align*}</script>
<p>then we get <script type="math/tex">\vec a \cdot \vec b = f(z)</script>, so an IPA with this vector can be used to prove evaluations of polynomials in evaluation form. Apart from this difference, the argument proceeds exactly as before.</p>
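<p>A small sketch verifying this choice of b over a prime field (my own code; the domain and test polynomial are arbitrary examples):</p>

```python
# Sketch: with b_i = A(z)/A'(x_i) * 1/(z - x_i), the inner product of the
# evaluations (f(x_0), ..., f(x_{n-1})) with b recovers f(z).
p = 2**61 - 1
xs = [1, 2, 3, 4]     # evaluation domain x_0, ..., x_{n-1}
z = 10                # evaluation point outside the domain

def A(t):             # A(X) = prod_i (X - x_i), evaluated at t
    r = 1
    for xi in xs:
        r = r * (t - xi) % p
    return r

def A_prime(xi):      # A'(x_i) = prod_{j != i} (x_i - x_j)
    r = 1
    for xj in xs:
        if xj != xi:
            r = r * (xi - xj) % p
    return r

b = [A(z) * pow(A_prime(xi) * (z - xi) % p, -1, p) % p for xi in xs]

f = lambda t: (5 + 2 * t + 7 * t * t) % p   # any polynomial of degree < n
evals = [f(xi) for xi in xs]
assert sum(e * bi for e, bi in zip(evals, b)) % p == f(z)   # f(10) = 725
```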
<div class="footnotes">
<ol>
<li id="fn:2">
<p>Bowe, Grigg, Hopwood: <a href="https://eprint.iacr.org/2019/1021.pdf">Recursive Proof Composition without a Trusted Setup</a> <a href="https://dankradfeist.de/ethereum/2021/07/27/inner-product-arguments.html#fnref:1">↩</a> <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:1">
<p>Bootle, Cerulli, Chaidos, Groth, Petit: <a href="https://eprint.iacr.org/2016/263.pdf">Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting</a> <a href="https://dankradfeist.de/ethereum/2021/07/27/inner-product-arguments.html#fnref:2">↩</a> <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Original article: Inner Product ArgumentsKZG Polynomial Commitments2021-10-13T00:00:00+00:002021-10-13T00:00:00+00:00https://dankradfeist.de/ethereum/2021/10/13/kate-polynomial-commitments-mandarin<p>Original article: <a href="/ethereum/2020/06/16/kate-polynomial-commitments.html">KZG Polynomial Commitments</a></p>
<p>Translation: Star.LI @ Trapdoor Tech</p>
<h2 id="简介">Introduction</h2>
<p>Today I want to introduce the polynomial commitment scheme published by Kate, Zaverucha and Goldberg <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>. This article does not require deep knowledge of mathematics or cryptography; it is meant only as an introduction.</p>
<p>The scheme is commonly called the Kate (pronounced <a href="https://www.cs.purdue.edu/homes/akate/howtopronounce.html">kah-tay</a>) polynomial commitment scheme. In a polynomial commitment scheme, a prover computes a commitment to a polynomial and can later open the commitment at any point: the scheme lets them prove that the value of the polynomial at a particular position equals a claimed value.</p>
<p>It is called a <em>commitment</em> because once the commitment value (a point on an elliptic curve) has been sent to someone (the <em>verifier</em>), the prover can no longer change the polynomial they are working with. They can only provide valid proofs for that one polynomial; if they try to cheat, they will either be unable to produce a proof, or the proof will be rejected by the verifier.</p>
<h3 id="预备知识">Prerequisites</h3>
<p>If you are not familiar with finite fields, elliptic curves and pairings, I highly recommend reading <a href="https://vitalik.ca/general/2017/01/14/exploring_ecp.html">Vitalik Buterin’s blog post on elliptic curve pairings</a>.</p>
<h3 id="默克尔树对比">Comparison to Merkle trees</h3>
<p>If you already know Merkle trees, here is a comparison to Kate commitments. A Merkle tree is what cryptographers call a <em>vector commitment</em>: using a Merkle tree of depth <script type="math/tex">d</script>, you can compute a commitment to a vector (a list <script type="math/tex">a_0, \ldots, a_{2^d-1}</script> of fixed length). With the well-known <em>Merkle proofs</em>, you can prove, using <script type="math/tex">d</script> hashes, that the element <script type="math/tex">a_i</script> is at position <script type="math/tex">i</script> of this vector.</p>
<p>In fact, we can build a polynomial commitment from a Merkle tree: recall that a polynomial <script type="math/tex">p(X)</script> of degree <script type="math/tex">n</script> is simply a function <script type="math/tex">p(X) = \sum_{i=0}^{n} p_i X^i</script>, where the <script type="math/tex">p_i</script> are its coefficients.</p>
<p>By setting <script type="math/tex">a_i=p_i</script>, we can compute the Merkle root of the list of coefficients, which gives a fairly easy commitment to a polynomial of degree <script type="math/tex">n=2^{d}-1</script>. Proving an evaluation means the prover wants to show the verifier that <script type="math/tex">p(z)=y</script> for some value z. To do this, the prover can send the verifier all the <script type="math/tex">p_i</script>, and the verifier checks whether p(z) equals y.</p>
<p>Of course, this is an extremely naive polynomial commitment, but it helps us appreciate the benefits of a real one. Let’s review its properties:</p>
<ol>
<li>The commitment is a single hash (the Merkle root). A sufficiently secure cryptographic hash needs about 256 bits, i.e. 32 bytes.</li>
<li>To prove an evaluation, the prover has to send all the <script type="math/tex">p_i</script>, so the proof size is linear in the degree of the polynomial. The verifier also performs a linear amount of work (they have to evaluate the polynomial at <script type="math/tex">z</script>, i.e. compute <script type="math/tex">p(z)=\sum_{i=0}^{n} p_i z^i</script>).</li>
<li>The scheme hides nothing of the polynomial: the prover sends the complete polynomial, coefficient by coefficient.</li>
</ol>
<p>Now let’s see how the Kate scheme compares on these points:</p>
<ol>
<li>The commitment is a single element of a pairing-friendly elliptic curve group. For the curve BLS12_381, that is 48 bytes.</li>
<li>The proof size is <em>independent</em> of the size of the polynomial: it is always one group element. Verification is also independent of the polynomial size: it takes two group multiplications and two pairings, whatever the degree.</li>
<li>The scheme mostly hides the polynomial: in fact, infinitely many polynomials have exactly the same Kate commitment. However, the hiding is not perfect: if you can guess the polynomial (for example, because it is very simple, or because it comes from a small set of candidates), you can find the committed polynomial.</li>
</ol>
<p>Furthermore, it is possible to merge any number of evaluation proofs into a single one. These properties make the Kate scheme very attractive for zero-knowledge proof systems such as PLONK and SONIC, but also for more mundane purposes, or simply as a vector commitment, as we will see later on.</p>
<h2 id="椭圆曲线以及配对">椭圆曲线以及配对</h2>
<p>正如之前所提到的预备知识所说，我强烈推荐<a href="https://vitalik.ca/general/2017/01/14/exploring_ecp.html">Vitalik Buterin的博客：椭圆曲线配对</a>。本文包含了本文所需的背景知识：特别是有限域，椭圆曲线和配对相关知识。</p>
<p>假设<script type="math/tex">\mathbb G_1</script>和<script type="math/tex">\mathbb G_2</script>是两条满足<script type="math/tex">e: \mathbb G_1 \times \mathbb G_2 \rightarrow \mathbb G_T</script>的配对，假设p是<script type="math/tex">\mathbb G_1</script>和<script type="math/tex">\mathbb G_2</script>的阶，同时G和H是<script type="math/tex">\mathbb G_1</script>和<script type="math/tex">\mathbb G_2</script>的生成元。接下来，我们定义一个非常有效的速记符号：对于任意<script type="math/tex">x \in \mathbb F_p</script>
<script type="math/tex">\displaystyle
[x]_1 = x G \in \mathbb G_1 \text{ and } [x]_2 = x H \in \mathbb G_2</script></p>
<h3 id="可信设置">可信设置</h3>
<p>假设我们已有一个可信设置，使得对于一个秘密s，其子元素<script type="math/tex">[s^i]_1</script>和<script type="math/tex">[s^i]_2</script>都对于任意<script type="math/tex">i=0, \ldots, n-1</script>的证明者和验证者有效。</p>
<p>有一种方法能够达到这种可信设置：我们用离线计算机生成一个随机数<script type="math/tex">s</script>，计算所有的群元素<script type="math/tex">[s^i]_x</script>，并通过电线传输出去（不包括<script type="math/tex">s</script>）,然后烧掉这部计算机。当然这并不是一个好的解决方案，你必须相信计算机的操纵者没有通过其他渠道泄露这个秘密<script type="math/tex">s</script>。</p>
<p>In practice, such setups are usually generated using secure multi-party computation (MPC): a group of computers jointly creates the group elements without any single computer ever knowing the secret <script type="math/tex">s</script>, so that only by compromising the entire group could anyone learn <script type="math/tex">s</script>.</p>
<p>Note that one thing is impossible here: you cannot simply pick a random group element <script type="math/tex">[s]_1</script> (with <script type="math/tex">s</script> unknown) and compute the remaining elements from it. There is no way to compute <script type="math/tex">[s^2]_1</script> without knowing <script type="math/tex">s</script>.</p>
<p>Now, basic elliptic curve cryptography tells us that the trusted-setup group elements do not reveal <script type="math/tex">s</script>: it is some number in the finite field <script type="math/tex">\mathbb F_p</script>, but the provers cannot figure out its value. What they can do is perform certain computations on the given elements. For example, using elliptic curve multiplication they can easily compute <script type="math/tex">c [s^i]_1 = c s^i G = [cs^i]_1</script>, or, by adding elliptic curve points, <script type="math/tex">c [s^i]_1 + d [s^j]_1 = (c s^i + d s^j) G = [cs^i + d s^j]_1</script>. Indeed, if <script type="math/tex">p(X) = \sum_{i=0}^{n} p_i X^i</script> is a polynomial, a prover can compute
<script type="math/tex">\displaystyle
[p(s)]_1 = [\sum_{i=0}^{n} p_i s^i]_1 = \sum_{i=0}^{n} p_i [s^i]_1</script></p>
<p>This turns out to be very interesting – using the trusted setup, anyone can evaluate a polynomial at a secret point <script type="math/tex">s</script> that nobody knows. The output they get is not a field element but an elliptic curve point <script type="math/tex">[p(s)]_1 = p(s) G</script>, which, as we will see, is useful enough.</p>
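As a sketch, here is that computation in a toy group (a multiplicative group modulo a small prime stands in for <script type="math/tex">\mathbb G_1</script>; all numbers are made up, and in the real scheme nobody knows <code>s</code> – it appears here only so we can sanity-check the result):

```python
# Toy illustration (NOT secure): multiplicative group mod q stands in for G1.
q, g = 101, 2
s = 37  # the setup secret; in reality nobody knows this value

# Trusted setup output: [s^0]_1, [s^1]_1, ..., [s^(n-1)]_1
# (exponents live mod q-1, the order of g in this toy group)
setup = [pow(g, pow(s, i, q - 1), q) for i in range(8)]

def commit(poly, setup, q):
    """Compute [p(s)]_1 = sum_i p_i * [s^i]_1 from the setup powers alone."""
    c = 1
    for p_i, s_i in zip(poly, setup):
        c = c * pow(s_i, p_i, q) % q  # p_i * [s^i]_1 in additive notation
    return c

poly = [5, 0, 3, 1]  # p(X) = 5 + 3X^2 + X^3, lowest coefficient first
C = commit(poly, setup, q)

# Sanity check (only possible because we know s in this toy example):
p_at_s = sum(c * s**i for i, c in enumerate(poly))
assert C == pow(g, p_at_s % (q - 1), q)
```

The point is that <code>commit</code> never touches <code>s</code> directly – it only combines the published setup elements.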
<h2 id="卡特承诺">Kate commitments</h2>
<p>In the Kate commitment scheme, the element <script type="math/tex">C = [p(s)]_1</script> is the commitment to the polynomial <script type="math/tex">p(X)</script>.</p>
<p>You might ask: could a prover, without knowing <script type="math/tex">s</script>, find another polynomial <script type="math/tex">q(X) \neq p(X)</script> with the same commitment, i.e. with <script type="math/tex">[p(s)]_1 = [q(s)]_1</script>? Suppose such a polynomial existed; then we would have <script type="math/tex">[p(s) - q(s)]_1=[0]_1</script>, which implies <script type="math/tex">p(s)-q(s)=0</script>.</p>
<p>Now <script type="math/tex">r(X) = p(X)-q(X)</script> is itself a polynomial, and we know it is non-zero because <script type="math/tex">p(X) \neq q(X)</script>. A well-known theorem says that any non-zero polynomial of degree <script type="math/tex">n</script> has at most <script type="math/tex">n</script> zeroes: whenever <script type="math/tex">r(z)=0</script>, <script type="math/tex">r(X)</script> is divisible by the linear factor <script type="math/tex">X−z</script>; since each zero corresponds to a linear factor, and each division reduces the degree by one, there can be at most <script type="math/tex">n</script> zeroes<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>.</p>
<p>Since the prover does not know <script type="math/tex">s</script>, the only way they could achieve <script type="math/tex">p(s)−q(s)=0</script> is by making <script type="math/tex">p(X)−q(X)=0</script> in as many places as possible. But as we have just seen, they can do that at no more than <script type="math/tex">n</script> points. Their chance of success is therefore tiny: because <script type="math/tex">n</script> is vanishingly small compared to the curve order <script type="math/tex">p</script>, the probability that <script type="math/tex">s</script> happens to be one of the points where <script type="math/tex">p(X)=q(X)</script> is negligible. To get a feeling for the numbers, take the largest trusted setup currently in existence, with <script type="math/tex">n = 2^{28}</script>, and compare it with the curve order <script type="math/tex">p \approx 2^{256}</script>: an attacker who constructs a polynomial <script type="math/tex">q(X)</script> agreeing with <script type="math/tex">p(X)</script> on as many points as possible, i.e. on <script type="math/tex">n=2^{28}</script> points, has a probability of <script type="math/tex">2^{28}/2^{256} = 2^{28-256} \approx 2 \cdot 10^{-69}</script> of obtaining the same commitment (<script type="math/tex">p(s)=q(s)</script>). That probability is so low that, in practice, the attack is impossible.</p>
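The probability estimate is easy to double-check (the setup size and curve order are the ones quoted above):

```python
n = 2**28        # points on which the attacker's q(X) can agree with p(X)
order = 2**256   # approximate curve order p
prob = n / order # chance that the secret s is one of those points
assert 2e-69 < prob < 3e-69  # roughly 2 * 10^-69, as claimed
```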
<h3 id="多项式相乘">Multiplying polynomials</h3>
<p>So far we have seen that we can evaluate polynomials at a secret point <script type="math/tex">s</script>, which lets us commit to an essentially unique polynomial – many polynomials share the same commitment <script type="math/tex">C=[p(s)]_1</script>, but in practice none of the others can ever be computed (this is what cryptographers call <em>computationally binding</em>).</p>
<p>However, we are still missing the ability to “open” the commitment without sending the verifier the whole polynomial. For that, we need pairings. As shown above, we can do linear operations under the secret: for example, we can compute the commitment <script type="math/tex">[p(s)]_1</script> of <script type="math/tex">p(X)</script>, and from the commitments of two polynomials <script type="math/tex">p(X)</script> and <script type="math/tex">q(X)</script> we can compute the commitment of their sum <script type="math/tex">p(X)+q(X)</script>: <script type="math/tex">[p(s)]_1+[q(s)]_1=[p(s)+q(s)]_1</script>.</p>
<p>What we are still missing is multiplication of two polynomials. If we could multiply, the properties of polynomials would open the door to far more interesting tricks. Elliptic curves by themselves do not allow multiplication, but luckily we can do it with a pairing: we have</p>
<p><script type="math/tex">\displaystyle
e([a]_1, [b]_2) = e(G, H)^{(ab)} = [ab]_T</script>
where we have introduced the new notation <script type="math/tex">[x]_T = e(G, H)^x</script>. So while we unfortunately cannot multiply two field elements committed <em>on an elliptic curve</em> and obtain their product as another curve element (that would be fully homomorphic encryption; elliptic curves are only <em>additively homomorphic</em>), we can multiply two committed field elements if one is committed in <script type="math/tex">\mathbb G_1</script> and the other in <script type="math/tex">\mathbb G_2</script> – the output is then a <script type="math/tex">\mathbb G_T</script> element.</p>
<p>Here we come to the heart of Kate proofs. Recall what we said about linear factors: if a polynomial has a zero at <script type="math/tex">z</script>, it is divisible by <script type="math/tex">X−z</script>. The converse is also true – if a polynomial is divisible by <script type="math/tex">X−z</script>, then it must have a zero at <script type="math/tex">z</script>: divisibility by <script type="math/tex">X−z</script> means that <script type="math/tex">p(X)=(X−z)⋅q(X)</script> for some polynomial <script type="math/tex">q(X)</script>, which clearly evaluates to zero at <script type="math/tex">X=z</script>.</p>
<p>Say we want to prove that <script type="math/tex">p(z)=y</script>. We will use the polynomial <script type="math/tex">p(X)−y</script> – it obviously has a zero at <script type="math/tex">z</script>, so we can apply our knowledge about linear factors. Let <script type="math/tex">q(X)</script> be the quotient of <script type="math/tex">p(X)−y</script> divided by the linear factor <script type="math/tex">X−z</script>, i.e.:</p>
<script type="math/tex; mode=display">\displaystyle
q(X) = \frac{p(X)-y}{X-z}</script>
<p>This is equivalent to saying that <script type="math/tex">q(X)(X-z) = p(X)-y</script>.</p>
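The quotient can be computed by synthetic division; a minimal sketch over a toy prime field (all values illustrative) – the remainder is zero exactly when <script type="math/tex">p(z)=y</script>:

```python
P = 101  # a small prime standing in for the field order

def div_linear(poly, z, p=P):
    """Divide a polynomial (lowest coefficient first) by (X - z).
    Returns (quotient, remainder); the remainder equals poly evaluated at z."""
    q = [0] * (len(poly) - 1)
    carry = 0
    for i in range(len(poly) - 1, 0, -1):
        carry = (poly[i] + carry * z) % p
        q[i - 1] = carry
    rem = (poly[0] + carry * z) % p
    return q, rem

p_coeffs = [5, 0, 3, 1]  # p(X) = 5 + 3X^2 + X^3
z = 7
y = sum(c * z**i for i, c in enumerate(p_coeffs)) % P  # y = p(z)

shifted = [(p_coeffs[0] - y) % P] + p_coeffs[1:]       # p(X) - y
q_coeffs, rem = div_linear(shifted, z)
assert rem == 0  # (X - z) divides p(X) - y exactly because p(z) = y

# Cross-check q(X)*(X - z) = p(X) - y by evaluating both sides at a point:
x = 13
lhs = sum(c * x**i for i, c in enumerate(q_coeffs)) * (x - z) % P
rhs = sum(c * x**i for i, c in enumerate(shifted)) % P
assert lhs == rhs
```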
<h3 id="卡特证明">Kate proofs</h3>
<p>We define the Kate proof that <script type="math/tex">p(z)=y</script> as <script type="math/tex">\pi=[q(s)]_1</script>. Recall that the commitment to the polynomial <script type="math/tex">p(X)</script> is <script type="math/tex">C=[p(s)]_1</script>.</p>
<p>The verifier checks this proof using the following equation:</p>
<p><script type="math/tex">\displaystyle
e(\pi,[s-z]_2) = e(C-[y]_1, H)</script>
Note that the verifier can compute <script type="math/tex">[s−z]_2</script>: it is just a combination of the trusted-setup element <script type="math/tex">[s]_2</script> and the point <script type="math/tex">z</script> at which the polynomial is being evaluated. The verifier also knows the claimed evaluation <script type="math/tex">y</script> of <script type="math/tex">p(z)</script>, so they can compute <script type="math/tex">[y]_1</script>. But why does this check convince the verifier that <script type="math/tex">p(z)=y</script> – or, more precisely, that the polynomial committed to by <script type="math/tex">C</script> evaluates to <script type="math/tex">y</script> at <script type="math/tex">z</script>?</p>
<p>Two properties need to be checked here: <em>correctness</em> and <em>soundness</em>. <em>Correctness</em> means that a prover who follows the steps we defined produces a proof that passes verification; this is usually the easy part. <em>Soundness</em> means that a prover cannot produce an “incorrect” proof – for example, they cannot convince the verifier that <script type="math/tex">p(z)=y′</script> for some <script type="math/tex">y′≠y</script>.</p>
<p>Let’s first write out the equation to which the pairing check corresponds:
<script type="math/tex">\displaystyle
[q(s) \cdot (s-z)]_T = [p(s) - y]_T</script>
<em>Correctness</em> is immediate – this is just the equation <script type="math/tex">q(X)(X−z)=p(X)−y</script> evaluated at the random point <script type="math/tex">s</script> that nobody knows.</p>
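A sketch of what the pairing check accomplishes, in a toy setting where we (unrealistically) know <code>s</code>: the verifier's equation holds if and only if <script type="math/tex">q(s)(s−z) = p(s)−y</script> in the field. A real verifier checks this blindly through the pairing; all numbers here are illustrative.

```python
P = 101   # toy field order (illustrative, NOT secure)
s = 37    # setup secret; in the real scheme nobody knows this
z = 7
p_coeffs = [5, 0, 3, 1]   # p(X) = 5 + 3X^2 + X^3

def ev(poly, x):
    """Evaluate a polynomial (lowest coefficient first) at x, mod P."""
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

y = ev(p_coeffs, z)
q_coeffs = [70, 10, 1]  # the quotient (p(X) - y) / (X - z) for these values

# The identity the pairing equation checks, written "in the exponent":
assert ev(q_coeffs, s) * (s - z) % P == (ev(p_coeffs, s) - y) % P
```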
<p>But how do we know the scheme is sound, i.e. that a prover cannot create a fake proof? If the prover wanted to follow our recipe for some incorrect <script type="math/tex">y′</script>, they would have to divide <script type="math/tex">p(X)−y′</script> by <script type="math/tex">X−z</script>. Since <script type="math/tex">p(z)−y′</script> is not zero, this division always leaves a remainder – the polynomial division simply does not work out, so the prover cannot forge a proof this way.</p>
<p>The only remaining option is to work directly in the elliptic curve group: the prover could cheat if, for some commitment <script type="math/tex">C</script>, they could compute the group element</p>
<p><script type="math/tex">\displaystyle
\pi_\text{Fake} = \frac{1}{s-z} (C-[y']_1)</script>
If they could do that, they could prove anything at all. Intuitively, this seems hard: you would have to exponentiate by something that involves <script type="math/tex">s</script>, which is unknown. To prove it rigorously, one needs a cryptographic assumption about pairings, the so-called <script type="math/tex">q</script>-strong SDH assumption <sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup>.</p>
<h3 id="多重证明">Multiproofs</h3>
<p>So far we have seen how to prove the evaluation of a polynomial at a single point. This alone is pretty amazing: by sending a single group element (48 bytes on BLS12_381, for example), you can prove an evaluation, at any point, of a polynomial of any degree – say <script type="math/tex">2^{28}</script>. By comparison, a naive scheme that uses a Merkle tree as a polynomial commitment would need to send <script type="math/tex">2^{28}</script> elements – all the coefficients of the polynomial.</p>
<p>Let’s go a step further and see how to prove evaluations of a polynomial at <em>any number of points</em>, still using only a single group element. For this we need a new concept: interpolation polynomials. Given a list of <script type="math/tex">k</script> points <script type="math/tex">(z_0, y_0), (z_1, y_1), \ldots, (z_{k-1}, y_{k-1})</script>, we can always find a polynomial of degree less than <script type="math/tex">k</script> that passes through all of them. One way to construct it is Lagrange interpolation, which gives an explicit formula for this polynomial <script type="math/tex">I(X)</script>:</p>
<p><script type="math/tex">I(X) = \sum_{i=0}^{k-1} y_i \prod_{j=0 \atop j \neq i}^{k-1} \frac{X-z_j}{z_i-z_j}</script>
Now suppose we know that <script type="math/tex">p(X)</script> passes through all of these points; then <script type="math/tex">z_0, z_1, \ldots, z_{k-1}</script> are all zeroes of the polynomial <script type="math/tex">p(X)-I(X)</script>. That polynomial is therefore divisible by all the linear factors <script type="math/tex">(X-z_0), (X-z_1), \ldots (X-z_{k-1})</script> at once; we combine them into the so-called zero polynomial:</p>
<p><script type="math/tex">\displaystyle
Z(X) = (X-z_0) \cdot (X-z_1) \cdots (X-z_{k-1})</script>
and we can compute the quotient</p>
<p><script type="math/tex">\displaystyle
q(X) = \frac{p(X) - I(X)}{Z(X)}</script>
Note that <script type="math/tex">p(X)−I(X)</script> is divisible by <script type="math/tex">Z(X)</script> because it is divisible by every one of its linear factors.</p>
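A sketch of this quotient construction over a toy field (all values illustrative): interpolate <script type="math/tex">I(X)</script> through the points, multiply out the zero polynomial <script type="math/tex">Z(X)</script>, and check that <script type="math/tex">Z(X)</script> divides <script type="math/tex">p(X)-I(X)</script> with zero remainder.

```python
P = 101  # toy field order (illustrative, NOT secure)

def ev(poly, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P

def polymul(a, b):
    """Multiply two polynomials (lowest coefficient first), mod P."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def polydiv(num, den):
    """Polynomial long division mod P; returns (quotient, remainder)."""
    num = num[:]
    quot = [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], P - 2, P)  # inverse of the leading coefficient
    for i in range(len(quot) - 1, -1, -1):
        quot[i] = num[i + len(den) - 1] * inv_lead % P
        for j, dj in enumerate(den):
            num[i + j] = (num[i + j] - quot[i] * dj) % P
    return quot, num[:len(den) - 1]

p_coeffs = [5, 0, 3, 1]             # p(X) = 5 + 3X^2 + X^3
zs = [2, 5]                         # evaluation points
ys = [ev(p_coeffs, z) for z in zs]

# Zero polynomial Z(X) = (X - z0)(X - z1)...
Z = [1]
for z in zs:
    Z = polymul(Z, [(-z) % P, 1])

# Interpolation polynomial I(X) through the points (z_i, y_i), via Lagrange
I = [0] * len(zs)
for i, (zi, yi) in enumerate(zip(zs, ys)):
    term = [yi]
    for j, zj in enumerate(zs):
        if j != i:
            inv = pow((zi - zj) % P, P - 2, P)
            term = polymul(term, [(-zj) * inv % P, inv])
    I = [(a + b) % P for a, b in zip(I, term)]

diff = [(a - b) % P for a, b in zip(p_coeffs, I + [0] * (len(p_coeffs) - len(I)))]
q_coeffs, rem = polydiv(diff, Z)
assert all(r == 0 for r in rem)  # Z(X) divides p(X) - I(X) exactly
```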
<p>We can now define the Kate proof for the evaluations <script type="math/tex">(z_0, y_0), (z_1, y_1), \ldots, (z_{k-1}, y_{k-1})</script>: <script type="math/tex">\pi=[q(s)]_1</script> – still just a single group element.</p>
<p>To check this proof, the verifier also computes the interpolation polynomial <script type="math/tex">I(X)</script> and the zero polynomial <script type="math/tex">Z(X)</script>. From these they can compute <script type="math/tex">[Z(s)]_2</script> and <script type="math/tex">[I(s)]_1</script>, and then check the pairing equation:</p>
<p><script type="math/tex">\displaystyle
e(\pi,[Z(s)]_2) = e(C-[I(s)]_1, H)</script>
Writing this equation in the <script type="math/tex">[\cdot]_T</script> notation, we can check that it holds just as easily as for the single-point Kate proof:</p>
<p><script type="math/tex">\displaystyle
[q(s)\cdot Z(s)]_T = [p(s)-I(s)]_T</script>
This is seriously cool: with just a single group element, you can prove any number of evaluations – even millions! That amounts to proving an enormous amount of computation with only 48 bytes.</p>
<h2 id="将卡特作为矢量承诺来使用">Using Kate as a vector commitment</h2>
<p>While the Kate scheme is designed as a polynomial commitment, it also makes a great vector commitment. Recall that a vector commitment commits to a vector <script type="math/tex">a_0, \ldots, a_{n-1}</script> and lets you prove, for any <script type="math/tex">i</script>, that the element at position <script type="math/tex">i</script> is <script type="math/tex">a_i</script>. We can reproduce this with the Kate scheme: let <script type="math/tex">p(X)</script> be the polynomial with <script type="math/tex">p(i)=a_i</script> for all <script type="math/tex">i</script>. We know such a polynomial exists, and we can compute it using Lagrange interpolation:</p>
<p><script type="math/tex">\displaystyle
p(X) = \sum_{i=0}^{n-1} a_i \prod_{j=0 \atop j \neq i}^{n-1} \frac{X-j}{i-j}</script>
Using this polynomial, we can prove any number of elements of the vector with a single group element! Note how much more efficient this is, in terms of proof size, than a Merkle tree: a Merkle proof needs <script type="math/tex">\log n</script> hashes to prove even a single element!</p>
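A small sketch of the interpolation step over a toy field (illustrative values): the vector becomes the values of a polynomial at positions <script type="math/tex">0, 1, \ldots, n-1</script>.

```python
P = 101  # toy field order (illustrative)

def interpolate_at(values, x):
    """Evaluate at x the Lagrange interpolation of the points (i, values[i])."""
    n = len(values)
    total = 0
    for i, a_i in enumerate(values):
        num, den = 1, 1
        for j in range(n):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + a_i * num * pow(den, P - 2, P)) % P
    return total

vector = [17, 4, 99, 23]
# p(i) = a_i for every position i of the vector:
assert all(interpolate_at(vector, i) == vector[i] for i in range(len(vector)))
```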
<h2 id="延伸阅读">Further reading</h2>
<p>We are actively exploring Kate commitments in our work towards a stateless version of Ethereum. I highly recommend searching the ethresearch forum for the keyword <a href="https://ethresear.ch/search?q=kate">Kate</a> for topics you are interested in.</p>
<p>Another great writeup is Vitalik’s <a href="https://vitalik.ca/general/2019/09/22/plonk.html">introduction to PLONK</a>, which makes heavy use of polynomial commitments, with the Kate scheme as the main candidate implementation.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>https://www.iacr.org/archive/asiacrypt2010/6477178/6477178.pdf <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>This result is often misquoted as the fundamental theorem of algebra. The fundamental theorem of algebra is, in a sense, its converse, and only holds over algebraically closed fields: over the complex numbers, every polynomial of degree n has exactly n linear factors. Sadly the simpler result used here has no catchy name, even though it is arguably more fundamental than the fundamental theorem of algebra. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>https://www.cs.cmu.edu/~goyal/ibe.pdf <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>
<p>Original article: KZG Polynomial Commitments</p>
<h1 id="proofs-of-custody">Proofs of Custody</h1>
<p><em>2021-09-30, https://dankradfeist.de/ethereum/2021/09/30/proofs-of-custody</em></p>
<p><em>Thanks to Vitalik Buterin, Chih-Cheng Liang and Alex Stokes for helpful comments</em></p>
<p>A proof of custody is a construction that helps against the “lazy validator” problem. A lazy validator is a validator that instead of doing the work they are supposed to do – for example, ensuring that some data is available (relevant for data sharding) or that some execution was performed correctly (for execution chains) – they pretend that they’ve done it and sign the result, for example an attestations that claims the data is available anyway.</p>
<p>The proof of custody construction is a cryptoeconomic primitive that changes the game theory so that lazy validating simply isn’t an interesting strategy anymore.</p>
<h2 id="lazy-validators--the-game-theory">Lazy validators – the game theory</h2>
<p>Let’s assume there is a well-running Ethereum 2.0 chain (insert your favourite alternative PoS blockchain if you prefer). We don’t usually expect bad things – data being withheld, invalid blocks being produced – to happen. In fact, you are likely never to see them happen: as long as the system is run by a majority of honest validators, an attack of this kind is pretty much guaranteed to fail, so there is no point in even attempting it.</p>
<p>Now assume you run a validator. This comes with different kinds of costs – obviously the staking capital, but also hardware costs, electricity and internet bandwidth, which you might pay for directly (your provider charges you per GB) or indirectly (when your validator is running, your Netflix lags). The lower you can make these costs, the more net profit you make from running your validator.</p>
<p>One of the tasks you do as a validator in sharded Eth2, is to assure the availability of shard data. Each attestation committee is assigned one blob of data to check, which is around 512 kB to 1 MB. The task of each validator is to download it and store it for around 90 days.</p>
<p>But what happens if you simply sign all attestations for shard blobs, without actually downloading the data? You would still get your full rewards, but your costs have suddenly decreased. We are assuming the network is in a good state, so your laziness isn’t going to harm the network immediately. Let’s say your profit from running a validator was $1 per attestation, and the cost of downloading the blobs was $0.10 per attestation. Your profit has now increased to $1.10.</p>
<table>
<thead>
<tr>
<th> </th>
<th>Profit per signed attestation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Honest</td>
<td>$1.00</td>
</tr>
<tr>
<td>Lazy</td>
<td>$1.10</td>
</tr>
</tbody>
</table>
<p>This problem is called the verifier’s dilemma and was introduced in <a href="https://eprint.iacr.org/2015/702.pdf">Demystifying Incentives in the Consensus Computer</a> by Luu et al.</p>
<h3 id="but-i-would-never-do-this-who-would-cheat-like-that">But I would never do this! Who would cheat like that?</h3>
<p>It often seems obvious to us that in games like this, surely you would not succumb to bribery and stay with the honest behaviour. But it’s often more subtle than that.</p>
<p>Let’s assume that after having run a validator for years, a new client comes out that claims to be 10% more cost effective. People run it and see that it works, and it seems to be safe. The way it actually does this is by not downloading the shard blocks.</p>
<p>This could even happen by accident. Someone cut a corner in the development process, everything looks normal – it just doesn’t join the right shard subnet, and nobody noticed, because it does not cause any faults in normal operation.</p>
<p>Some people will probably run this client.</p>
<p>Something else that could happen is that a service could step in to do the downloading for you. For $0.01 per shard blob, they will download the data, store it for 90 days, and send you a message that the data is available and you can sign the attestation. How bad is this?</p>
<p>It’s also quite bad. Because as many people start using this service, it becomes a single point of failure. Or even worse, it could be part of an attack. If it can make more than 50% of validators vote for the availability of a shard blob, without ever publishing the blob, that would be a withholding attack.</p>
<p>As it is often the case, dishonesty can come in many disguises, so our best bet is to work on the equilibrium to make the honest strategy rational.</p>
<h2 id="a-proof-of-custody-and-an-update-to-the-game-theory">A proof of custody and an update to the game theory</h2>
<p>The proof of custody works like this: Imagine we can put a “bomb” in a shard blob: If you sign this blob, you get a large penalty (you get slashed), of $3,000. You definitely don’t want to sign this blob.</p>
<p>Does that make you want to download it? That is certainly one way to avoid signing the bomb. But if anyone can detect the bomb, then someone can simply run a service that warns you, before you sign an attestation, if it’s a bomb. So the bomb needs to be specific to an individual validator, such that no one else can compute whether a shard blob is a bomb.</p>
<p>OK, now we have the essential ingredients for the proof of custody. We need</p>
<ol>
<li>An ephemeral secret, that is recomputed every custody epoch (ca. 90 days), individual to each validator, and then revealed when it has expired (so that other validators have a chance to check the proof of custody)</li>
<li>A function that takes the whole shard blob data, as well as the ephemeral key, and outputs 0 (not a bomb), or, with very small probability, 1 (this blob is a bomb)</li>
</ol>
<p>It is essential that the ephemeral secret isn’t made available to anyone else, so there are three slashing conditions:</p>
<ol>
<li>A validator can get slashed if anyone knows its current ephemeral secret</li>
<li>The ephemeral secret has to be published after the custody period, and failing to do so also leads to slashing</li>
<li>Signing a bomb leads to slashing</li>
</ol>
<p>How can we create this function? A simple construction works like this. Compute a Merkle tree of leaves <code class="highlighter-rouge">(data0, secret, data1, secret, data2, secret, ...)</code> as illustrated here:</p>
<div class="mermaid">graph TB
A[Root] -->B[Hash]
A --> B1[Hash]
B --> C[Hash]
B --> C1[Hash]
C --> D[data0]
C --> E[secret]
C1 --> D1[data1]
C1 --> E1[secret]
B1 --> C2[Hash]
B1 --> C3[Hash]
C2 --> D2[data2]
C2 --> E2[secret]
C3 --> D3[data3]
C3 --> E3[secret]
</div>
<p>Then take the logical <code class="highlighter-rouge">AND</code> of the first 10 bits of the root. This gives you a single bit that is 1 an expected 1 in 1,024 times.</p>
<p>This function cannot be computed without knowing both the secret and the data.</p>
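A minimal sketch of the construction described above (the 10-bit mask and the interleaved leaves follow the text; the hashing details and parameters are simplifying assumptions):

```python
import hashlib

def merkle_root(leaves):
    """Plain binary Merkle tree over the leaves (count assumed a power of two)."""
    level = [hashlib.sha256(l).digest() for l in leaves]
    while len(level) > 1:
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

def is_bomb(data_chunks, secret: bytes) -> bool:
    """Interleave (data0, secret, data1, secret, ...) and AND the root's
    first 10 bits: the AND is 1 only if all ten bits are 1, i.e. for an
    expected 1 in 1,024 (blob, secret) pairs."""
    leaves = []
    for chunk in data_chunks:
        leaves += [chunk, secret]
    root = merkle_root(leaves)
    first_10_bits = int.from_bytes(root[:2], "big") >> 6
    return first_10_bits == 0b1111111111

chunks = [bytes([i]) * 32 for i in range(4)]
print(is_bomb(chunks, b"my-ephemeral-secret"))  # True about 1 in 1,024 times
```

Without the secret, an outside observer cannot evaluate <code>is_bomb</code>; without the data, neither can the validator – which is exactly the point.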
<p>(Because we do want to enable secret shared validators, a lot of work has gone into optimizing this function so that it can be efficiently computed in an MPC, which a Merkle tree cannot. For this we are suggesting a construction based on a Universal Hash Function and the Legendre symbol: https://ethresear.ch/t/using-the-legendre-symbol-as-a-prf-for-the-proof-of-custody/5169)</p>
<h3 id="new-game-theory">New game theory</h3>
<p>All right, so with the proof of custody, any shard blob has a 1/1,024 chance of being a bomb, and you don’t know which one it is without downloading it.</p>
<p>The lazy validator does just fine when the blob is not a bomb. When it is a bomb, however, we see the big difference: the honest validator simply skips this attestation, which is a very minor loss that merely sets that attestation’s profit to zero. The lazy validator signs it and gets slashed, making a huge loss. The payoff matrix now looks like this:</p>
<table>
<thead>
<tr>
<th> </th>
<th>Profit for non-bomb attestation</th>
<th>Profit for bomb attestation</th>
<th>Average for 1,024 attestations</th>
</tr>
</thead>
<tbody>
<tr>
<td>Honest</td>
<td>$1.00</td>
<td>$0.00</td>
<td>$1,023.00</td>
</tr>
<tr>
<td>Lazy</td>
<td>$1.10</td>
<td>$-3,000.00</td>
<td>$-1,873.60</td>
</tr>
</tbody>
</table>
<p>In the third column, we see that the expected profit for the lazy validator is now negative. Since the whole reason for being lazy was increased profits from lower costs, this means that the lazy validator is not an interesting strategy anymore.</p>
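The averages in the table can be reproduced as follows (one assumption on my part to match the quoted figure: the lazy average counts the $1.10 payoff on the bomb attestation as well, before the $3,000 slash is applied):

```python
N = 1024  # attestations, of which one is expected to be a bomb

honest = 1023 * 1.00 + 0.00  # the honest validator skips the bomb attestation
lazy = N * 1.10 - 3000       # the lazy validator signs everything, slashed once

assert honest == 1023.00
assert round(lazy, 2) == -1873.60  # matches the payoff table
```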
<h2 id="proof-of-custody-for-execution">Proof of custody for execution</h2>
<p>Another task of validators will be verifying the correct execution of blocks. This means verifying that the new stateroot that is part of a block is the correct one that results from applying all the transactions. The proof of custody idea can also be applied to this: The validator will have to compute the proof of custody in the same way as described above, however the data is the <em>execution trace</em>. The execution trace is some output generated by the step by step execution of the block. It does not have to be complete in any sense; what we want from it is just two properties:</p>
<ol>
<li>It should be difficult to guess the execution trace without actually executing the block.</li>
<li>The total size of the execution trace should be large enough that simply distributing it in addition to normal blocks is unattractive.</li>
</ol>
<p>There are some easy options of doing this; for example simply outputting every single instruction byte that the EVM executes would probably result in an execution trace of a few MB per execution block. Another option would be to use the top of the stack.</p>
<h3 id="with-fraud-proofs-do-we-still-need-the-proof-of-custody-for-execution">With fraud proofs, do we still need the proof of custody for execution?</h3>
<p>When we upgrade the execution chain to statelessness, which means that blocks can be verified without having the current state, fraud proofs become easy. (Without statelessness, they are hard: Fraud proofs always have to be included on a chain <em>different</em> from the one where the fraud happened, and thus the actual pre-state would not be available when they have to be verified.)</p>
<p>This means that it will be possible to slash a validator who has produced an invalid execution block. Furthermore we can also penalize any validator that has attested to this block. Would that mean that the proof of custody is no longer necessary?</p>
<p>It does certainly shift the balance. But even with this penalty present, lazy validation can still be a rational strategy. It would probably be a bad idea for a validator to simply sign every block without verifying execution, as an attacker only needs to sacrifice a single validator of their own to get you slashed.</p>
<p>However, you can employ the following strategy: On each new block, you wait for some small percentage of other validators to sign it before you sign it yourself. Those who sign it first are unlikely to be lazy validators, as they would be employing the same strategy. This would get you quite good protection in most situations, but at a systemic level it would still leave the chain vulnerable in extreme cases.</p>
<p>The case with fraud proofs is thus improved, but a proof of custody remains superior for ensuring that lazy validation can’t be a rational strategy.</p>
<h2 id="how-is-it-different-from-data-availability-checks">How is it different from data availability checks?</h2>
<p>I wrote a primer on data availability checks <a href="https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html">here</a>. It looks like the proof of custody for shard blobs tries to solve a very similar problem: Ensuring that data that is committed to in shard blob headers is actually available on the network.</p>
<p>So we may wonder: Do we need both a proof of custody and data availability checks?</p>
<p>There is an important difference between the two constructions, though:</p>
<ul>
<li>Data availability checks ensure the availability of the data <em>independent of the honest majority assumption</em>. Even a powerful attacker controlling the entirety of the stake can’t trick full nodes into accepting withheld data as available</li>
<li>In contrast, a proof of custody does not help if the majority of the stake is performing an attack. The majority can compute the proof of custody without ever releasing the data to anyone else.</li>
</ul>
<p>So in a theoretical sense, data availability checks are strictly superior to proof of custody for shard data: They hold unconditionally, whereas the latter only serve to keep rational validators honest, making an attack less likely.</p>
<p>Why do we still need a proof of custody for shard blobs? It might not necessarily be needed. There are however some practical problems with data availability checks that make it desirable to have a “first line of defence” against missing data:</p>
<p>The reason for this is that data availability checks work by excluding unavailable blocks from the fork choice rule. However, this cannot be permanent: data availability checks only ensure that <em>eventually</em>, everyone will see the same result, but not immediately.</p>
<p>The reason is that publishing a partially available block might result in some nodes seeing it as available (they receive all their samples) and other nodes seeing it as unavailable (some of their samples are missing). Data availability checks ensure that in this situation the data can always be reconstructed. However, this requires some node to first collect enough samples to reconstruct the data and then re-seed the samples so everyone can see them; this process can take a few slots.</p>
<p>In order to avoid a minority attacker (with less than 1/3 of the stake) to cause such a disruption, we only want to apply data availability checks when the chain is finalized and not immediately. In the meantime, the proof of custody can ensure that an honest majority will only ever build an available chain, where the shard data is already seeded in committees; since the committees are ready to re-seed all samples even if the original blob producer doesn’t, an attacker can’t easily force a partially available block.</p>
<p>In this construction, the proof of custody and data availability checks have two orthogonal functions:</p>
<ol>
<li>The proof of custody for shard data ensures that an honest majority of validators will only ever build a chain in which all shard data is available and well seeded across committees. A minority attacker cannot easily cause disruption to this.</li>
<li>Data availability checks will guarantee that even if the majority of stake is attacking, they will not be able to get the remaining full nodes to consider a chain with withheld data as finalized.</li>
</ol>
<h1 id="store-of-value-from-limited-supply">Just because it has a fixed supply doesn’t make it a good store of value</h1>
<p><em>2021-09-27, https://dankradfeist.de/ethereum/2021/09/27/store-of-value-from-limited-supply</em></p>
<h2 id="what-we-should-really-build-is-productive-assets-and-stablecoins">What we should really build is productive assets and stablecoins</h2>
<p><em>Special thanks to David Andolfatto, Vitalik Buterin, Chih-Cheng Liang, Barnabé Monnot and Danny Ryan for comments that helped me improve this essay</em></p>
<p>I think the “store of value” narrative and the misunderstanding of what “fiat” currency really is are a huge problem undermining the whole of the cryptocurrency world. Only when we come to an honest understanding of this will we really be able to build something better.</p>
<p>Here are some core theses of what I believe and which I will try to illustrate in the full article:</p>
<ol>
<li>The “store of value” narrative doesn’t hold water. There is no such thing as a guaranteed way of transmitting value into the future, and just having an asset with a fixed supply doesn’t fix that.</li>
<li>If you want your best bet on sending the most value possible into the future, what you really need is productive assets (for the long term) and stablecoins (if you need your money in the near future).</li>
</ol>
<h2 id="why-store-of-value-does-not-exist">Why “store of value” does not exist</h2>
<p>Here is a common form of the cryptocurrency narrative: “Look at fiat currency. 1 US Dollar from 1950 had about 10 times more purchasing power than one US dollar now. It’s a scam. If you store your value in US dollars, then you are constantly losing due to inflation. This is because the central bank/government can just print more US Dollars. You should instead store value in an asset with predictable supply, such as gold or Bitcoin, which does not have this problem.”</p>
<p>The true part of this statement is that if you stored your money in USD, then you would have lost a large part of your purchasing power over the decades. That is not in question. The question is, is there another way, implied by the term “store of value”, that does not have this property? Store of value proponents claim that there is if you instead used an asset with a predictable supply. And of course, historical data backs this up to some extent: If you had used gold instead of storing your value in USD, then you would have fared better: You could have bought an ounce for $35 in 1950, and it would now be worth around $1765 (price as of June 20 2021 from <a href="https://www.bullionbypost.eu/gold-price/alltime/ounces/USD/">here</a>). Given that the Dollar is worth 10x less now due to inflation, that’s $176.50 in 1950-Dollars or a 5x increase in value.</p>
<p>But we could have done much better than this: If we put the $35 in an S&P 500 tracker in 1950, then we would now have a staggering <a href="https://www.officialdata.org/us/stocks/s-p-500/1950?amount=35&endYear=2021">$74,418.65</a>, which is a 212x increase <strong>after correcting for the 10x loss in purchasing power</strong> of the US Dollar (so 7,441.87 1950-Dollars). So clearly, this investment is a much better “store of value” than investing in gold.</p>
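The multiples quoted in the last two paragraphs follow directly from the numbers (a quick check; multiples truncated as in the text):

```python
dollar_deflator = 10        # a 1950 dollar is worth ~10x a 2021 dollar
initial = 35                # $35 in 1950: one ounce of gold, or the S&P stake
gold_2021 = 1765            # price of that ounce in 2021 dollars
sp500_2021 = 74418.65       # value of the $35 S&P 500 investment in 2021

gold_multiple = gold_2021 / initial / dollar_deflator
sp500_multiple = sp500_2021 / initial / dollar_deflator

assert int(gold_multiple) == 5     # "a 5x increase in value"
assert int(sp500_multiple) == 212  # "a 212x increase"
```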
<p>Now Bitcoin has fared <em>much</em> better than both gold and the S&P 500 over the last 10 years. But this is a very short timespan, in which Bitcoin went from an absolutely tiny niche to an asset that most people in the world have heard of and a significant minority has invested in. There is no reason to believe this can be repeated (I don’t think it can). The historical data for gold says that, over long periods of time, stores of value based purely on “limited supply” do much worse than productive assets.<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup></p>
<p>So why do people believe gold, or Bitcoin, would make a better store of value than just investing in productive assets like companies, real estate, etc.? There are two reasons that I can see:</p>
<ol>
<li>Stock markets clearly have a lot of volatility. So maybe they believe productive assets are a good long-term store of value, but not for the short term.</li>
<li>The people who believe in “limited supply” stores of value have an apocalyptic mindset. So they believe that in the case of a major social collapse, their stores of value will somehow fare better than more productive assets.</li>
</ol>
<p>Argument number 1 does not convince me at all. That would depend on their preferred store of value having lower volatility than productive assets, which simply <a href="https://seekingalpha.com/article/4296091-gold-vs-stocks">does not bear out in reality</a>. Both gold and Bitcoin are much more volatile than holding an S&P 500 tracker fund. If you want low volatility, then you should still go for the productive assets.</p>
<p>Number 2 means believing that you can simply “send” value into the future even when society collapses. I think that’s a pretty crazy belief – because when society collapses, both the value of the goods you could buy and the demand for the “limited supply asset” will collapse as well.</p>
<p>Of course, people think companies (and therefore the S&P 500) will probably go down, but other assets don’t fare any better:</p>
<ol>
<li>Is property a good “store of value” in a catastrophe? Property is mostly valuable because of where it is in relation to valuable economic and social activity. Central Manhattan property is so valuable because it’s in a city where many want to live. A random plot of land in the middle of nowhere usually has very little value. It’s unlikely to fare that well in a major disaster (and might even work out worse than property with a garden to grow your own vegetables)</li>
<li>Similarly the value attributed to gold is a social convention, albeit one that has lasted for an extremely long time. Society could decide on a new asset to value highly, which is indeed what Bitcoiners argue for. But more importantly, your gold isn’t worth anything if there’s nothing of value to buy.</li>
</ol>
<p>If we accept that value depends on a society that provides valuable goods, we have to accept that there is simply no guaranteed way to send money into the future. You might as well make real investments in productive assets.</p>
<h2 id="what-we-need--productive-assets-and-stablecoins">What we need – productive assets and stablecoins</h2>
<p>Above, I argued why I think “limited supply stores of value” (unproductive assets like gold or Bitcoin, which derive their value simply from being scarce rather than from utility) have no advantage over productive assets like stocks. They have the same or higher volatility, and gold at least (for which we have a decent amount of history) is outperformed by productive assets in the long term. The same will probably happen to Bitcoin once it has absorbed the initial demand and arrived at a stable position like gold (other outcomes, with it largely losing its current value, are certainly also possible). Nor do they necessarily fare better in catastrophes; if that is what you’re afraid of, you might want to buy goods that are useful in a catastrophe instead.</p>
<p>This means productive assets should be the better long-term stores of value, as they are better on all dimensions.</p>
<p>But clearly the volatility that comes with them is undesirable for many applications that fiat currency is used for now: I don’t think many people would appreciate their salary fluctuating by 50% month on month; in fact the vast majority of people would struggle to pay for all their expenses if their salaries suddenly fell by 50%. Many people simply need or want much more stability than that.</p>
<p>Similarly, if you keep money around to buy a house in the near future, or run a company that keeps cash reserves to make sure they can pay their employees and suppliers, you need stability.</p>
<p>Even if we assumed that everyone suddenly started using Bitcoin, it would simply not fix this problem. Since its supply can’t be dynamically adjusted, its value would continue to be very volatile due to economic fluctuations.</p>
<p>Luckily, there are mechanisms around to create stablecoins using only volatile assets for these situations. My favourite system is the idea behind MakerDAO and DAI, which I describe in an article <a href="/ethereum/2021/09/27/stablecoins-supply-demand.html">here</a>.</p>
<h3 id="so-if-the-current-system-is-so-great-why-do-we-even-need-cryptocurrencies">So if the current system is so great why do we even need cryptocurrencies?</h3>
<p>I think we need to become more nuanced thinkers in the cryptocurrency space, and start seeing the real properties of the systems we are trying to rebuild if we want to be successful. I think fiat currencies as we know them have been tremendously successful, as long as we see them for what they are: a hedge against short-term volatility rather than a way to maximize value long term.</p>
<p>I believe that crypto can vastly improve the current financial system, but hopefully not mainly by providing an asset with a limited supply (which won’t solve most of our most important problems). Instead we should make sure our assets are productive to maximize long term value, and create stablecoins for applications where volatility has to be avoided. This system improves on our current financial system because:</p>
<ol>
<li>It is much more transparent – anyone can verify balance sheets and exposures, not just specialized audit firms. This is important because currently the detailed exposures of banks are not public, which means depositors simply don’t know enough about banks to make an informed decision about which ones they can trust</li>
<li>We can make it fairer – giving everyone access at the same conditions. For example, why should banks have access to central bank accounts whereas normal people and companies don’t?</li>
<li>Governance can be improved, bringing everyone to the table when big decisions have to be made (like Quantitative Easing after the Global Financial Crisis)</li>
<li>Getting rid of the baggage (for example physical currency) and thus allowing more flexibility of the system; for example there is no technical need for inflation when all balances are electronic (though in practice, it might be required for psychological reasons or “price stickiness”)</li>
<li>And most importantly, creating a permissionless and censorship resistant system that anyone can participate in at all levels</li>
</ol>
<p>–</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Vitalik pointed out that this will overstate the case against Bitcoin somewhat, because gold supply has increased much more (ca. 3x) since 1950 than Bitcoin will over a similar period. I do not think this will make up for the massive difference in returns between gold and the S&P 500, though. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>What we should really build is productive assets and stablecoinsOn supply and demand for stablecoins2021-09-27T09:00:00+00:002021-09-27T09:00:00+00:00https://dankradfeist.de/ethereum/2021/09/27/stablecoins-supply-demand<p><em>Special thanks to David Andolfatto, Vitalik Buterin, Chih-Cheng Liang, Barnabé Monnot and Danny Ryan for comments that helped me improve this essay</em></p>
<p>The value of a freely tradable asset is determined by supply and demand. This obviously applies to stocks and cryptocurrencies. But it also applies to any “stablecoin” we are trying to create. It even applies to traditional fiat currencies like the US Dollar or the Euro.</p>
<p>When I talk about stablecoins here, I am referring to decentralized, collateralized stablecoins like MakerDAO’s DAI – not to USDT or USDC, where the supply/demand problem is obvious. So how does MakerDAO balance supply and demand for stablecoins?</p>
<p>And how does this help us learn how central banks do this for fiat currencies?</p>
<h3 id="how-to-create-a-stablecoin">How to create a stablecoin</h3>
<p>Let’s look at how we can create a stablecoin when the only building blocks we have are assets subject to undesirably large volatility. Luckily, we have a great example of how to do this in collateralized stablecoins, the prime example of which is <a href="https://makerdao.com/">MakerDAO</a>, the project behind the DAI stablecoin.</p>
<p>The idea behind this project is to create a token, called DAI, that tracks the value of one USD as closely as possible. Note that instead of USD, we can track any other asset as well – RAI, as an example, tracks a time-averaged version of the Ether price. I suggest that, long term, the Ethereum community should strive to create an oracle that tracks the prices of consumer goods in Ether, so that we can create a stablecoin that has nothing to do with any currently existing fiat currency and is thus truly global and independent. But as a starting point, using USD, a denomination most of the world intuitively understands as relatively stable, was probably a very good idea.</p>
<p>How did MakerDAO manage to create this stablecoin without any USD cash reserves in bank accounts, using only on-chain assets, which are all highly volatile? The core idea is the so-called Collateralized Debt Position, or CDP. It’s a margin position where someone can lock up a volatile asset – for example Ether – and in return create, or “borrow”, a number of DAI. The CDP essentially splits the value of the locked-up Ether into two tranches:</p>
<p><img src="/assets/cdp.png" alt="CDP" /></p>
<ol>
<li>The first tranche is the “debt tranche” – this tranche is fixed in its USD value and belongs to whoever owns the actual DAI stablecoins</li>
<li>The second tranche is the equity tranche – it belongs to the owner of the CDP and is the value that is left once the first tranche is satisfied</li>
</ol>
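<p>To make the tranche split concrete, here is a minimal Python sketch (the function name and all numbers are my own, purely illustrative, and are not MakerDAO’s actual parameters):</p>

```python
# Toy model of a CDP: the debt tranche is fixed in USD and belongs to the
# DAI holders; the equity tranche is whatever collateral value is left and
# belongs to the CDP owner. Illustrative only, not MakerDAO's real mechanics.

def cdp_tranches(eth_locked: float, eth_price_usd: float, dai_debt: float):
    """Split the locked collateral's value into (debt, equity) tranches."""
    collateral_value = eth_locked * eth_price_usd
    debt_tranche = dai_debt                        # fixed USD value
    equity_tranche = collateral_value - dai_debt   # absorbs all the volatility
    return debt_tranche, equity_tranche

# 1 ETH locked against 1000 DAI of debt, at three different ETH prices:
for price in (4000.0, 2500.0, 1500.0):
    print(price, cdp_tranches(1.0, price, 1000.0))
```

<p>Note how only the equity component changes as the price moves.</p>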
<p>Notice I called them “debt” and “equity” here, because that’s what we call them when companies do the same thing: when companies need capital, they can raise “debt” – typically in the form of bank loans and bonds – which is very predictable and gets preference (i.e. is paid back first out of the remaining assets) when the company runs out of money. That’s why bonds (which are tradable debt) are quite stable in price: as long as the company doesn’t go bust, they will always be paid back. Equity is the value that’s left over once these debt positions are satisfied, and is traded in the form of stocks – which are much more volatile, because their value depends on the profitability of the company, not just its solvency.</p>
<p>The elegance of this system is that the equity position can absorb the volatility, so that the debt holder (which is whoever holds the DAI thus created) has a predictable value. As an illustration here, see what happens when the value of the 1 ETH that has been locked up in the above CDP fluctuates:
<img src="/assets/cdp_fluctuation.png" alt="CDP fluctuation" />
The equity holder gets a position that’s now highly volatile (and in return, if the value of ETH goes up, will get much enhanced returns). The “debt” part of the CDP stays nice and constant and is always worth 1000 USD, as long as the ETH price does not crash too rapidly.</p>
<p>This last part may look scary: if the red “equity” part of the line ever reaches the $1,000 line, the DAI debt could suddenly not be satisfied, and the value of one DAI would fall below one USD. However, MakerDAO actually liquidates CDP positions once they get too close to zero equity. This works by auctioning off the collateral to the highest bidder in DAI.</p>
<p>This means that in practice, MakerDAO can deal with extreme falls if they do not happen too rapidly; this has been tested repeatedly, for example in March 2020 when DAI held its peg despite a precipitous fall in crypto asset values.</p>
<p>(This largely describes the old version of DAI, single-collateral DAI (which only accepted ETH as collateral). The current instantiation, multi-collateral DAI, differs in that it also accepts other forms of collateral (which is great), some of which are centralized stablecoins (such as USDC) which is not so good in my opinion.)</p>
<h3 id="why-we-need-to-add-interest-rates-to-this">Why we need to add interest rates to this</h3>
<p>MakerDAO has a simple mechanism to make sure the long-term expected value of DAI should be one USD: In the case of a large deviation, the governance system can trigger global settlement, which will immediately give all DAI holders their current equivalent in ETH by tapping all the CDPs that secure it. However, this event can be far in the future and thus doesn’t guarantee that the instantaneous price is exactly one DAI.</p>
<p>Let us understand the goal that MakerDAO has with DAI: They want 1 DAI to always be worth 1 USD.</p>
<p>One might think: surely it would be OK if it were sometimes worth more than 1 USD? As a matter of fact, this is also bad: if it costs more than 1 USD to get 1 DAI, then MakerDAO has failed. If I can only get a DAI for 1.10 USD, then it doesn’t act as a stablecoin for me – it can suddenly fall by 10%, and I will lose that value when it goes back to its intended peg of 1 USD. It’s thus essential that the peg is always kept in both directions.</p>
<p>But like any freely traded asset, the value of DAI is determined by supply and demand.</p>
<p>What does it mean for a price to be determined by supply and demand? Let’s say we’re talking about a commodity like wheat with many independent buyers and sellers. The buyers of wheat follow a certain “demand curve”: The higher the price of wheat, the lower the quantity demanded; this is intuitively easy to see: if wheat becomes really expensive I will buy rice instead of flour. If wheat becomes crazy cheap then I will substitute other foods by using more wheat or even buy a few extra bags just in case I need it later. The behaviour of many consumers in aggregate makes this a smooth curve.</p>
<p>The supply curve looks at the other side, the producers of wheat who want to sell it into the market. The suppliers are farmers who grow wheat. They make a similar decision based on the current market price. If the price is low, they won’t grow wheat, or they’ll put some of it in storage to sell later at a higher price. If the price is high, they can replace other crops with wheat or even grow it on fields that aren’t currently worthwhile because the yield is lower or it’s harder to harvest.</p>
<p>Conceptually the two curves can be drawn into a graph like this:</p>
<p><img src="/assets/supply_demand.png" alt="Supply and demand" /></p>
<p>Economists traditionally put price on the <script type="math/tex">y</script>-axis (vertical) in this graph, even though, as the independent variable, it would usually be on the <script type="math/tex">x</script>-axis (horizontal).</p>
<p>There is a price at which both curves meet. In equilibrium, this is the expected price for the commodity if there isn’t any interference with the market. This is because if the price is lower, then not all demand can be satisfied, so producers will notice they can be more profitable by increasing their prices, thus raising the overall price. On the other hand, if the current price is higher, then there will be too much supply fighting for the few consumers wanting to buy wheat, and thus the producers who lower their prices will be the ones making a profit (or a lower loss) as the consumers will be turning to them. The only stable point is where the two curves meet.</p>
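<p>The equilibrium logic can be checked numerically. In this sketch both curves are made-up linear functions (all numbers hypothetical); the market-clearing price is simply where the two quantities agree:</p>

```python
# Made-up linear supply and demand curves for wheat; the equilibrium price
# is where quantity demanded equals quantity supplied.

def demand(price):   # buyers want less as the price rises
    return 100.0 - 20.0 * price

def supply(price):   # farmers offer more as the price rises
    return 10.0 + 25.0 * price

# Scan a price grid for the point where the two curves meet.
equilibrium = min((cents / 100 for cents in range(1, 500)),
                  key=lambda p: abs(demand(p) - supply(p)))
print(equilibrium, demand(equilibrium))  # price 2.0, quantity 60.0
```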
<p>The same applies to DAI – which can be traded freely on exchanges.</p>
<p>Supply of DAI is given by those people who are happy to take a CDP position, which basically means leveraging their volatile assets, in order to create more DAI, as well as anyone already holding DAI and wanting to sell. Demand comes from those who want the stability of keeping their value in DAI.</p>
<p>These two curves don’t necessarily meet at a price of one USD per DAI.</p>
<p>As an example, if the market for Ether is very bullish and many people think it will go up, then it probably means that there is little demand for the stability that holding DAI provides and a high demand for leveraged Ether positions. People who are very bullish on Ether would be tempted to leverage their positions to profit even more when the price increases. In this kind of environment, so many people want to take out CDPs and create DAI that there are not enough people interested in actually using all the DAI. The value of DAI would fall below the peg, which is undesirable.</p>
<p>MakerDAO can correct this by adding a positive interest rate (“savings rate”) for holding DAI, rewarding the holders and charging those who take the margin position. This makes it more attractive to hold DAI. You may think ETH is a great investment, but it’s volatile, so maybe DAI with a 5% savings interest would seem attractive. If it’s not 5%, then maybe 10% is.
At some value for this interest rate, the demand for DAI will increase enough (and the supply in form of CDP decrease enough) such that the value of DAI will return to the intended peg.</p>
<p>But the reverse is also possible – in an environment where many people prefer the stability (maybe in a “bear market” where holding Ether isn’t as attractive), a negative interest rate makes holding DAI less attractive and thus reduces the demand. On the other hand, taking out a CDP becomes more attractive when you actually get paid for it. You may be scared of taking out a $1,000 loan against your ETH, but what if you got paid 10%, or $100 per year, for it?</p>
<p>So we now effectively have another dimension along which to change supply and demand for DAI – the savings interest rate. A lower (even negative) rate will decrease demand and increase supply, leading to a lower DAI price. A higher rate does the opposite and increases the price of DAI. In order to move the price to 1 USD, we just have to adjust the interest rate until the prices agree.</p>
<p>Here is a graphic that illustrates how this works:</p>
<p><img src="/assets/supply_demand_shift.png" alt="Supply and demand change with interest rate" /></p>
<p>On the left, we have supply and demand curves at an interest rate of 1%. The curves meet at a price of 0.95 USD, which is the current fair market price of DAI and thus too low. In this situation MakerDAO would need to raise the interest rate. By raising the interest to 2% (on the right), the CDPs become less attractive (shifting supply) and holding DAI becomes more attractive, thus making the curves meet at the desired price 1.00 USD.</p>
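<p>The same shift can be expressed numerically. In the hypothetical model below (all curve shapes and constants are invented for illustration), raising the savings rate shifts both curves until the market-clearing price hits the peg; a simple bisection finds the rate that does it:</p>

```python
# Hypothetical linear model: demand for DAI rises with the savings rate,
# CDP supply falls with it. clearing_price() solves demand == supply for
# the price; we then bisect for the rate that puts that price at 1.00 USD.

def clearing_price(rate):
    base_demand, base_supply = 80.0, 100.0   # invented constants
    slope = 50.0                             # price sensitivity of both curves
    sensitivity = 1000.0                     # rate sensitivity of both curves
    # demand: base_demand + sensitivity*rate - slope*price
    # supply: base_supply - sensitivity*rate + slope*price
    return (base_demand - base_supply + 2 * sensitivity * rate) / (2 * slope)

lo, hi = -0.10, 0.10                         # search between -10% and +10%
for _ in range(60):                          # clearing_price is increasing in rate
    mid = (lo + hi) / 2
    if clearing_price(mid) < 1.0:
        lo = mid
    else:
        hi = mid
peg_rate = (lo + hi) / 2
print(round(peg_rate, 4))  # 0.06: in this toy model, a 6% rate restores the peg
```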
<p>In the light of this, I am very happy that MakerDAO has after a long time decided to implement the ability to support <a href="https://mips.makerdao.com/mips/details/MIP20">negative interest rates</a>. They are essential when a lot of stability is demanded. In fact, not having this in the past has required the very unfortunate decision to use centralized stablecoins such as USDC to back DAI, otherwise the demand could not have been satisfied and it would have shot above the peg. Hopefully, long term, this will be reversed.</p>
<p>To summarize, the interest rate is a mechanism that balances the demand and supply of the stablecoin. An ideal system should simply pick a rate that equates supply and demand – this interest rate would represent the fair market price for keeping value stable. Depending on the overall economic situation, this interest rate can be either positive or negative.</p>
<h3 id="an-analogy-to-fiat-currencies">An analogy to fiat currencies</h3>
<p>“Fiat” currency is actually a huge misnomer for our state currencies. “Fiat” implies that someone just creates a large amount of (what we in the cryptocurrency ecosystem would call) tokens and – by “fiat” (Latin for “let it be done”) – tells everyone that this is now money.</p>
<p>However this is not really how fiat currency works. Fiat currencies are actually to some extent “collateralized stablecoins” as described above, with some extra complications. As many commenters have noted in the past, we should be calling them “credit currencies” instead.</p>
<p>To see this, we need to understand that traditional money consists of two different components (there are more but these two will give the idea how it works):</p>
<ul>
<li>Central bank money, which consists of reserve accounts (that banks have with the central bank) as well as all the physical money (bills, coins) in circulation; this is often denoted M0 (and can properly be called “fiat” currency)</li>
<li>Bank deposits, which is basically the money you have in your bank account, and similar liquid deposits. This is called M1.</li>
</ul>
<p>But what actually is M1 money? It’s nothing else but “debt” that your bank owes you. This debt is often created by someone taking out a loan from the bank: E.g. when you take a mortgage, two accounts are created: One that says “bank owes you money” and the other one “you owe the bank money” and they cancel each other out. Your bank’s net position hasn’t changed, although it has become riskier (more leveraged) through the process. And new deposits have been created, thus enlarging the M1 quantity.</p>
<p>But that mortgage is backed – collateralized – by both your income and the property it’s taken out for. In effect, each loan a bank gives out is very similar to our collateralized debt position above. When you take out your mortgage for 200,000 USD, your CDP is:</p>
<ul>
<li>You are long 1 house</li>
<li>You are short 200,000 USD</li>
</ul>
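<p>The double-entry view of this loan can be sketched in a few lines (a drastic simplification of real bank accounting, of course):</p>

```python
# When a bank issues a mortgage, two balancing entries are created: a loan
# (you owe the bank) and a deposit (the bank owes you). New M1 money appears,
# while the bank's net position is unchanged. Heavily simplified sketch.

bank = {"loans": 0, "deposits": 0}

def issue_mortgage(bank, amount):
    bank["loans"] += amount      # "you owe the bank money"
    bank["deposits"] += amount   # "the bank owes you money"

issue_mortgage(bank, 200_000)
new_m1 = bank["deposits"]                       # newly created deposit money
net_position = bank["loans"] - bank["deposits"]  # unchanged: zero
print(new_m1, net_position)  # 200000 0
```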
<p>Now it looks much more like a CDP. While central banks and states have other tools to change supply and control inflation, this debt mechanism is a powerful constraint that can dynamically adjust the quantity while keeping the value of the currency more or less unchanged.</p>
<h3 id="so-are-negative-rates-and-inflation-not-a-scam">So are negative rates and inflation not a scam?</h3>
<p>As we have seen previously, DAI sometimes needs negative interest rates to maintain the peg. In the wake of the financial crisis of 2008, people were surprised that interest rates on bank accounts, and indeed even central bank interests, can be negative. But is this really that surprising?</p>
<p>Central banks have more than a single lever to adjust supply and demand for their currencies, but interest rates are still an important one. Negative interest rates send a signal to the market that rebalances the equilibrium towards lower demand for stable currency and higher supply by means of people taking debt in order to invest it in ventures.</p>
<p>Furthermore, inflation is basically a negative interest rate on physical cash (bills and coins), which is necessary because we don’t have any way of applying a negative rate to cash directly. If all balances were electronic, we could equivalently just apply the negative interest rates directly to the balances and not have any inflation. (This ignores price stickiness, which is another problem that probably also favors some form of inflation.)</p>
<h2 id="conclusion">Conclusion</h2>
<p>MakerDAO has demonstrated that even if you only have a volatile asset, like ETH, you can build a stable currency on top. For simplicity, a peg to the US Dollar was chosen, but it doesn’t have to be a currency. Any measure of value could be used, as long as we have a way to find a reasonably objective oracle for it.</p>
<p>I don’t believe that assets that are only defined by their limited supply – such as gold or Bitcoin – are very good “stores of value”. Historically speaking, the S&P 500 has vastly outperformed gold <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup> at lower volatility. I don’t think this will be different for Bitcoin and other “limited supply” assets. If what you want to do is maximize value over long timescales, productive assets (which Ethereum will be after EIP1559 and the merge) are a much better bet.</p>
<p>If instead you want stability over the short term, you need an explicit mechanism that guarantees it; stablecoins are one, and fiat has similar mechanisms. But someone will have to take the other side, and you will probably have to pay for it through lower or even negative returns. That’s the price for stability.</p>
<p>You can also do something in between, like <a href="https://reflexer.finance/">Reflexer Labs RAI</a>. What I don’t see is how gold or Bitcoin, simply by having a fixed supply, provide something superior. They don’t. They will be strictly inferior, providing lower returns at higher volatility than productive assets and the stable synthetics we can build using them. I wrote an essay about this topic: <a href="/ethereum/2021/09/27/store-of-value-from-limited-supply.html">Just because it has a fixed supply doesn’t make it a good store of value</a></p>
<p>–</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Starting in 1950, investing $35 in gold (one ounce) would have yielded $1765 (price as of June 20 2021 from <a href="https://www.bullionbypost.eu/gold-price/alltime/ounces/USD/">here</a>) vs <a href="https://www.officialdata.org/us/stocks/s-p-500/1950?amount=35&endYear=2021">$74,418.65</a> for investing in an S&P 500 tracker. Both yield positive returns even after accounting for the ca. 90% inflation of the USD, but the S&P 500 is much better at 212x real returns vs only 5x for gold. Also gold <a href="https://seekingalpha.com/article/4296091-gold-vs-stocks">is more volatile than stocks</a>. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Special thanks to David Andolfatto, Vitalik Buterin, Chih-Cheng Liang, Barnabé Monnot and Danny Ryan for comments that helped me improve this essayInner Product Arguments2021-07-27T23:00:00+00:002021-07-27T23:00:00+00:00https://dankradfeist.de/ethereum/2021/07/27/inner-product-arguments<p>Chinese version: <a href="/ethereum/2021/11/18/inner-product-arguments-mandarin.html">内积证明</a></p>
<h1 id="introduction">Introduction</h1>
<p>You might have heard of Bulletproofs: it’s a type of zero-knowledge proof that is used, for example, by Monero, and that does not require a trusted setup. The core of this proof system is the Inner Product Argument <sup id="fnref:2"><a href="#fn:2" class="footnote">1</a></sup>, a trick that allows a prover to convince a verifier of the correctness of an “inner product”. An inner product is the sum of the component-by-component products of two vectors:</p>
<script type="math/tex; mode=display">\vec a \cdot \vec b = a_0 b_0 + a_1 b_1 + a_2 b_2 + \cdots + a_{n-1} b_{n-1}</script>
<p>where <script type="math/tex">\vec a = (a_0, a_1, \ldots, a_{n-1})</script> and <script type="math/tex">\vec b = (b_0, b_1, \ldots, b_{n-1})</script>.</p>
<p>One interesting case is where we set the vector <script type="math/tex">\vec b</script> to be the powers of some number <script type="math/tex">z</script>, i.e. <script type="math/tex">\vec b = (1, z, z^2, \ldots, z^{n-1})</script>. Then the inner product becomes the evaluation of the polynomial</p>
<script type="math/tex; mode=display">f(X) = \sum_{i=0}^{n-1} a_i X^i</script>
<p>at <script type="math/tex">z</script>.</p>
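<p>This special case is easy to verify numerically. Here is a small Python check over a toy prime field (the prime and the numbers are arbitrary, chosen only for illustration):</p>

```python
# With b = (1, z, z^2, ..., z^{n-1}), the inner product a . b equals the
# evaluation of f(X) = sum_i a_i X^i at z. Toy prime field, arbitrary numbers.

p = 101  # small prime, for illustration only

def inner_product(a, b):
    return sum(x * y for x, y in zip(a, b)) % p

def eval_poly(coeffs, z):
    return sum(c * pow(z, i, p) for i, c in enumerate(coeffs)) % p

a = [3, 1, 4, 1, 5]                        # coefficients of f
z = 7
b = [pow(z, i, p) for i in range(len(a))]  # powers of z
print(inner_product(a, b) == eval_poly(a, z))  # True
```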
<p>Inner Product Arguments work on <em>Pedersen Commitments</em>. I have previously written about <a href="/ethereum/2020/06/16/kate-polynomial-commitments.html">KZG commitments</a>, and Pedersen commitments are similar in that the commitment is in an elliptic curve. However a difference is that they do not require a trusted setup. Here is a comparison of the KZG commitment scheme and using Pedersen combined with an Inner Product Argument as a Polynomial Commitment Scheme (PCS):</p>
<table>
<thead>
<tr>
<th> </th>
<th>Pedersen+IPA</th>
<th>KZG</th>
</tr>
</thead>
<tbody>
<tr>
<td>Assumption</td>
<td>Discrete log</td>
<td>Bilinear group</td>
</tr>
<tr>
<td>Trusted setup</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Commitment size</td>
<td>1 Group element</td>
<td>1 Group element</td>
</tr>
<tr>
<td>Proof size</td>
<td>2 log n Group elements</td>
<td>1 Group element</td>
</tr>
<tr>
<td>Verification</td>
<td>O(n) group operations</td>
<td>1 Pairing</td>
</tr>
</tbody>
</table>
<p>Basically, compared to KZG commitments, our commitment scheme is less efficient. Proofs are larger (<script type="math/tex">O(\log n)</script>), which wouldn’t be the end of the world, as logarithmic proofs are still very small. But unfortunately, the verifier has to do a linear amount of work, so the proofs are not succinct. This makes them impractical for some applications. However, in some cases this can be worked around:</p>
<ul>
<li>One example is my writeup on <a href="/ethereum/2021/06/18/pcs-multiproofs.html">multiopenings</a>. In this case, the trick is that you can aggregate many openings into a single one.</li>
<li>Another is the Halo system <sup id="fnref:1"><a href="#fn:1" class="footnote">2</a></sup>, in which the linear verification cost of many openings is amortized into one</li>
</ul>
<p>In both of these examples, the trick is to amortize many openings. If you only want to open a single polynomial, then it’s tough and you have to incur the full cost, though.</p>
<p>However, the big advantage is that Pedersen and Inner Product Arguments come with much fewer assumptions, in particular a pairing is not needed and they don’t require a trusted setup.</p>
<h1 id="pedersen-commitments">Pedersen commitments</h1>
<p>Before we can discuss Inner Product Arguments, we need to discuss the data structure that they operate on: Pedersen commitments. In order to use Pedersen commitments, we need an elliptic curve group <script type="math/tex">G</script>. Let’s quickly remind ourselves what you can do in an elliptic curve (I will use additive notation because I think it is the more natural one):</p>
<ol>
<li>You can add two elliptic curve elements <script type="math/tex">g_0 \in G</script> and <script type="math/tex">g_1 \in G</script>:
<script type="math/tex">h = g_0 + g_1</script></li>
<li>You can multiply an element <script type="math/tex">g \in G</script> with a scalar <script type="math/tex">a \in \mathbb F_p</script>, where <script type="math/tex">p</script> is the curve order of <script type="math/tex">G</script> (i.e. the number of elements):
<script type="math/tex">h = a g</script></li>
</ol>
<p>There is no way to compute the “product” of two curve elements: the operation “<script type="math/tex">h * h</script>” is not defined, so you cannot compute “<script type="math/tex">h * h = a g * a g = a^2 g</script>”. Multiplying by a scalar, on the other hand, is easy: <script type="math/tex">2 h = 2 a g</script>, for example.</p>
<p>Another important property is that there is no efficient algorithm to compute “discrete logarithms”: given <script type="math/tex">h</script> and <script type="math/tex">g</script> such that <script type="math/tex">h=ag</script> for some unknown <script type="math/tex">a</script>, it is computationally infeasible to find <script type="math/tex">a</script>. We call <script type="math/tex">a</script> the discrete logarithm of <script type="math/tex">h</script> with respect to <script type="math/tex">g</script>.</p>
<p>Pedersen commitments make use of this infeasibility to construct a commitment scheme. Let’s say you have two points <script type="math/tex">g_0</script> and <script type="math/tex">g_1</script> and their discrete logarithm with respect to each other (i.e. the <script type="math/tex">x \in \mathbb F_p</script> such that <script type="math/tex">g_1 = x g_0</script>) is unknown, then we can commit to two numbers <script type="math/tex">a_0, a_1 \in \mathbb F_p</script>:</p>
<script type="math/tex; mode=display">C = a_0 g_0 + a_1 g_1</script>
<p><script type="math/tex">C</script> is an element of the elliptic curve <script type="math/tex">G</script>.</p>
<p>To reveal the commitment, the prover gives the verifier the numbers <script type="math/tex">a_0</script> and <script type="math/tex">a_1</script>. The verifier computes <script type="math/tex">C</script> and if it matches will accept.</p>
<p>The central property of a commitment scheme is that it is binding. So given <script type="math/tex">C=a_0 g_0 + a_1 g_1</script>, could a cheating prover come up with <script type="math/tex">b_0, b_1 \in \mathbb F_p</script> such that the verifier will accept them, i.e. such that <script type="math/tex">C = b_0 g_0 + b_1 g_1</script> but with <script type="math/tex">b_0, b_1 \not= a_0, a_1</script>?</p>
<p>If someone can do this, then they could also find the discrete logarithm. Here is why: We know that <script type="math/tex">a_0 g_0 + a_1 g_1 = b_0 g_0 + b_1 g_1</script>, and by regrouping the terms on both sides of the equation we get</p>
<script type="math/tex; mode=display">(a_0 - b_0) g_0 = (b_1 - a_1) g_1</script>
<p>At least one of <script type="math/tex">a_0 - b_0</script> and <script type="math/tex">b_1 - a_1</script> must be nonzero. Let’s say it’s <script type="math/tex">a_0 - b_0</script>; then we get:</p>
<script type="math/tex; mode=display">g_0 = \frac{b_1 - a_1}{a_0 - b_0} g_1 = x g_1</script>
<p>for <script type="math/tex">x = \frac{b_1 - a_1}{a_0 - b_0}</script>. Thus we’ve found <script type="math/tex">x</script>. Since we know this is a hard problem, in practice no attacker can perform this.</p>
<p>This means it’s computationally infeasible for an attacker to find alternative <script type="math/tex">b_0, b_1</script> to reveal for the commitment <script type="math/tex">C</script>. (They definitely do exist, they are just computationally infeasible to find – similar to finding a collision for a hash function).</p>
<p>We can generalize this and commit to a vector, i.e. a list of scalars <script type="math/tex">a_0, a_1, \ldots, a_{n-1} \in \mathbb F_p</script>. We just need a “basis”, i.e. an equal number of group elements that don’t have known discrete logarithms between them. Then we can compute the commitment</p>
<script type="math/tex; mode=display">C = a_0 g_0 + a_1 g_1 + a_2 g_2 + \ldots + a_{n-1} g_{n-1}</script>
<p>This gives us a vector commitment, although with quite a bad complexity: In order to reveal any element, all elements of the vector have to be revealed. But there is one redeeming property: The commitment scheme is additively homomorphic. This means that if we have another commitment <script type="math/tex">D = b_0 g_0 + b_1 g_1 + b_2 g_2 + \ldots + b_{n-1} g_{n-1}</script>, then it’s possible to just add the two commitments to get a new commitment to the sum of the two vectors <script type="math/tex">\vec a</script> and <script type="math/tex">\vec b</script>:</p>
<script type="math/tex; mode=display">C + D = (a_0 + b_0) g_0 + (a_1 + b_1) g_1 + (a_2 + b_2) g_2 + \ldots + (a_{n-1} + b_{n-1}) g_{n-1}</script>
<p>Thanks to this additive homomorphic property, this vector commitment actually turns out to be useful.</p>
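<p>Here is a toy version of the scheme illustrating the homomorphic property. I use a small multiplicative group modulo a prime as a stand-in for the elliptic curve (so the group operation is written multiplicatively and “scalar multiplication” becomes exponentiation); the parameters are far too small to be secure and are chosen purely for illustration:</p>

```python
# Toy Pedersen vector commitment in the multiplicative group mod p:
# commit(vec) = prod basis[i]^vec[i] mod p. Multiplying two commitments
# yields a commitment to the sum of the vectors -- the additive homomorphism.
# Insecure toy parameters; the dlogs between these basis points ARE findable.

p = 4294967291  # largest prime below 2**32; far too small for real use
basis = [3, 5, 7, 11]  # stand-ins for basis elements

def commit(vec):
    c = 1
    for g, a in zip(basis, vec):
        c = c * pow(g, a, p) % p
    return c

a_vec = [1, 2, 3, 4]
b_vec = [5, 6, 7, 8]
C, D = commit(a_vec), commit(b_vec)
# "Adding" the commitments (multiplying, in this notation) commits to a + b:
print(C * D % p == commit([x + y for x, y in zip(a_vec, b_vec)]))  # True
```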
<h1 id="inner-product-argument">Inner Product Argument</h1>
<p>The basic strategy of the Inner Product Argument is “divide and conquer”: Take the problem and instead of completely solving it, turn it into a smaller one of the same type. At some point, it becomes so small that you can simply reveal everything and prove that the instance is correct.</p>
<p>At each step, the problem size halves. This ensures that after <script type="math/tex">\log n</script> steps, the problem is reduced to size one, so it can be proved trivially.</p>
<p>The idea is that we want to prove that a commitment <script type="math/tex">C</script> is of the form</p>
<script type="math/tex; mode=display">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script>
<p>where <script type="math/tex">\vec g = (g_0, g_1, \ldots, g_{n-1})</script> and <script type="math/tex">\vec h = (h_0, h_1, \ldots, h_{n-1})</script> as well as <script type="math/tex">q</script> are our “basis”, i.e. they are group elements in <script type="math/tex">G</script> and none of their discrete logarithms with respect to each other are known. We have also introduced the new notation <script type="math/tex">\vec a \cdot \vec g</script> for a product between a vector of scalars (<script type="math/tex">\vec a</script>) and a vector of group elements (<script type="math/tex">\vec g</script>), defined as</p>
<script type="math/tex; mode=display">\vec a \cdot \vec g = a_0 g_0 + a_1 g_1 + \cdots + a_{n-1} g_{n-1}</script>
<p>So essentially, we are proving that <script type="math/tex">C</script> is a commitment to</p>
<ul>
<li>a vector <script type="math/tex">\vec a</script> with basis <script type="math/tex">\vec g</script></li>
<li>a vector <script type="math/tex">\vec b</script> with basis <script type="math/tex">\vec h</script> and</li>
<li>their inner product <script type="math/tex">\vec a \cdot \vec b</script> with respect to the basis <script type="math/tex">q</script>.</li>
</ul>
<p>This in itself does not seem very useful – in most applications we want the verifier to know <script type="math/tex">\vec a \cdot \vec b</script>, and not just have it hidden in some commitment. But this can be remedied with a small trick which I will come to below.</p>
<h2 id="the-argument">The argument</h2>
<p>We want the prover to convince the verifier that <script type="math/tex">C</script> is of the form <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script>. As I mentioned before, instead of doing this outright, we will only reduce the problem by computing another commitment <script type="math/tex">C'</script> in such a way that if the property holds for <script type="math/tex">C'</script>, then it also holds for <script type="math/tex">C</script>.</p>
<p>In order to do this, the prover and the verifier play a little game. The prover commits to certain properties, after which the verifier sends a challenge, which leads to the next commitment <script type="math/tex">C'</script>. Describing it as a game does not mean the proof has to be interactive though: The Fiat-Shamir construction allows us to turn interactive proofs into non-interactive ones, by replacing the challenge with a collision-resistant hash of the commitments.</p>
<h3 id="statement-to-prove">Statement to prove</h3>
<p>The commitment <script type="math/tex">C</script> has the form <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script> with respect to the basis given by <script type="math/tex">\vec g, \vec h, q</script>. We call the fact that <script type="math/tex">C</script> has this form the “Inner Product Property”.</p>
<h3 id="reduction-step">Reduction step</h3>
<p>Let <script type="math/tex">m = \frac{n}{2}</script></p>
<p>The prover computes</p>
<script type="math/tex; mode=display">z_L = a_m b_0 + a_{m+1} b_1 + \cdots + a_{n-1} b_{m-1} = \vec a_R \cdot \vec b_L \\
z_R = a_0 b_m + a_{1} b_{m+1} + \cdots + a_{m-1} b_{n-1} = \vec a_L \cdot \vec b_R</script>
<p>where we’ve defined <script type="math/tex">\vec a_L</script> as the “left half” of the vector <script type="math/tex">\vec a</script> and <script type="math/tex">\vec a_R</script> the “right half” and analogously for <script type="math/tex">\vec b</script>.</p>
<p>Then the prover computes the following commitments:</p>
<script type="math/tex; mode=display">C_L = \vec a_R \cdot \vec g_L + \vec b_L \cdot \vec h_R + z_L q \\
C_R = \vec a_L \cdot \vec g_R + \vec b_R \cdot \vec h_L + z_R q</script>
<p>and send them to the verifier. Then the verifier sends the challenge <script type="math/tex">x \in \mathbb F_p</script> (when using the Fiat-Shamir construction to make this non-interactive, this means that <script type="math/tex">x</script> would be the hash of <script type="math/tex">C_L</script> and <script type="math/tex">C_R</script>). The prover uses this to compute the updated vectors</p>
<script type="math/tex; mode=display">\vec a' = \vec a_L + x \vec a_R \\
\vec b' = \vec b_L + x^{-1} \vec b_R</script>
<p>which have half the length.</p>
<p>Now the verifier computes the new commitment:</p>
<script type="math/tex; mode=display">C' = x C_L + C + x^{-1} C_R</script>
<p>as well as the updated basis</p>
<script type="math/tex; mode=display">\vec g' = \vec g_L + x^{-1} \vec g_R \\
\vec h' = \vec h_L + x \vec h_R</script>
<p>Now, <em>if</em> the new commitment <script type="math/tex">C'</script> has the property that it is of the form <script type="math/tex">C' = \vec a' \cdot \vec g' + \vec b' \cdot \vec h' + (\vec a' \cdot \vec b') q</script>, then the commitment <script type="math/tex">C</script> fulfills the original claim.</p>
<p>All the vectors have halved in size – so we have achieved something. From here we replace <script type="math/tex">C:=C'</script>, <script type="math/tex">\vec g := \vec g'</script> and <script type="math/tex">\vec h := \vec h'</script> and repeat this step.</p>
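<p>A single reduction step can be sketched in Python over a toy group (quadratic residues modulo a small safe prime, written multiplicatively, so scalar multiplication becomes exponentiation). All names and parameters below are my own and insecure; the point is only to check that the halved instance still satisfies the Inner Product Property.</p>

```python
# One IPA reduction step over an insecure toy group -- sketch only.
import hashlib, random

q = 1019; p = 2 * q + 1   # scalar field F_q; group = QRs mod safe prime p

def H(label):
    h = int(hashlib.sha256(label.encode()).hexdigest(), 16)
    return pow(h % p or 2, 2, p)

def msm(scalars, pts):               # vec a . vec g, multiplicatively
    r = 1
    for s, g in zip(scalars, pts):
        r = r * pow(g, s % q, p) % p
    return r

def ip(u, v):                        # vec a . vec b over F_q
    return sum(x * y for x, y in zip(u, v)) % q

n = 4; m = n // 2
gs = [H(f"g{i}") for i in range(n)]
hs = [H(f"h{i}") for i in range(n)]
qe = H("q")

a = [5, 9, 2, 6]; b = [3, 3, 8, 4]
C = msm(a, gs) * msm(b, hs) % p * pow(qe, ip(a, b), p) % p

# Prover: cross terms z_L, z_R and the commitments C_L, C_R
zL, zR = ip(a[m:], b[:m]), ip(a[:m], b[m:])
CL = msm(a[m:], gs[:m]) * msm(b[:m], hs[m:]) % p * pow(qe, zL, p) % p
CR = msm(a[:m], gs[m:]) * msm(b[m:], hs[:m]) % p * pow(qe, zR, p) % p

# Verifier challenge, then both sides fold their vectors
x = random.randrange(1, q); xinv = pow(x, q - 2, q)
a2 = [(a[i] + x * a[m + i]) % q for i in range(m)]        # a' = a_L + x a_R
b2 = [(b[i] + xinv * b[m + i]) % q for i in range(m)]     # b' = b_L + x^-1 b_R
g2 = [gs[i] * pow(gs[m + i], xinv, p) % p for i in range(m)]
h2 = [hs[i] * pow(hs[m + i], x, p) % p for i in range(m)]
C2 = pow(CL, x, p) * C % p * pow(CR, xinv, p) % p         # x C_L + C + x^-1 C_R

# The halved instance still satisfies the Inner Product Property
assert C2 == msm(a2, g2) * msm(b2, h2) % p * pow(qe, ip(a2, b2), p) % p
```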
<p>I will below go through the maths on why this works, but Vitalik also made a nice <a href="https://twitter.com/VitalikButerin/status/1371844878968176647">visual representation</a> that I recommend to get an intuition.</p>
<h3 id="final-step">Final step</h3>
<p>When we repeat the step above, we will reduce <script type="math/tex">n</script> by a factor of two each time. At some point, we will encounter <script type="math/tex">n=1</script>. At this point we don’t repeat the step anymore. Instead the prover will send <script type="math/tex">\vec a</script> and <script type="math/tex">\vec b</script>, which in fact are now only a single scalar each. Then the verifier can simply compute</p>
<script type="math/tex; mode=display">D = a g + b h + a b q</script>
<p>and accept the statement if this is indeed equal to <script type="math/tex">C</script>, or reject if it is not.</p>
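<p>Putting the reduction step and the final step together, a complete run of the argument might look like the following sketch (same insecure toy group and made-up parameters as above; prover and verifier roles are merged for brevity, with an honest prover assumed).</p>

```python
# Full toy IPA: fold log2(n) times, then check D = a g + b h + (a b) q.
import hashlib, random

q = 1019; p = 2 * q + 1

def H(label):
    h = int(hashlib.sha256(label.encode()).hexdigest(), 16)
    return pow(h % p or 2, 2, p)

def msm(s, pts):
    r = 1
    for a_, g_ in zip(s, pts):
        r = r * pow(g_, a_ % q, p) % p
    return r

def ip(u, v):
    return sum(x * y for x, y in zip(u, v)) % q

def fold_s(v, x):                    # scalar vector: v_L + x * v_R
    m = len(v) // 2
    return [(v[i] + x * v[m + i]) % q for i in range(m)]

def fold_g(v, x):                    # group vector, multiplicatively
    m = len(v) // 2
    return [v[i] * pow(v[m + i], x, p) % p for i in range(m)]

n = 8
gs = [H(f"g{i}") for i in range(n)]
hs = [H(f"h{i}") for i in range(n)]
qe = H("q")
a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
C = msm(a, gs) * msm(b, hs) % p * pow(qe, ip(a, b), p) % p

while len(a) > 1:
    m = len(a) // 2
    zL, zR = ip(a[m:], b[:m]), ip(a[:m], b[m:])
    CL = msm(a[m:], gs[:m]) * msm(b[:m], hs[m:]) % p * pow(qe, zL, p) % p
    CR = msm(a[:m], gs[m:]) * msm(b[m:], hs[:m]) % p * pow(qe, zR, p) % p
    x = random.randrange(1, q); xinv = pow(x, q - 2, q)
    a, b = fold_s(a, x), fold_s(b, xinv)
    gs, hs = fold_g(gs, xinv), fold_g(hs, x)
    C = pow(CL, x, p) * C % p * pow(CR, xinv, p) % p

# Final step: the prover reveals the single scalars; the verifier checks.
D = pow(gs[0], a[0], p) * pow(hs[0], b[0], p) % p * pow(qe, a[0] * b[0], p) % p
assert D == C
```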
<h3 id="correctness-and-soundness">Correctness and soundness</h3>
<p>Above I claimed that if <script type="math/tex">C'</script> has the desired form, then it follows that <script type="math/tex">C</script> also has it. I now want to show why this is the case. In order to do this, we need to look at two things:</p>
<ul>
<li><em>Correctness</em> – i.e. given a prover who follows the protocol, can they always convince the verifier that the statement is correct; and</li>
<li><em>Soundness</em> – i.e. a dishonest prover cannot convince the verifier of an incorrect statement, except with a very small probability.</li>
</ul>
<p>Let’s start with correctness. This assumes that the prover is doing everything according to the protocol. Since the prover is following the protocol, we know that <script type="math/tex">C = \vec a \cdot \vec g + \vec b \cdot \vec h + (\vec a \cdot \vec b) q</script> with respect to the basis given by <script type="math/tex">\vec g, \vec h, q</script>. We need to show that then <script type="math/tex">C' = \vec a' \cdot \vec g' + \vec b' \cdot \vec h' + (\vec a' \cdot \vec b') q</script>.</p>
<p>The verifier computes <script type="math/tex">C' = x C_L + C + x^{-1} C_R</script>.</p>
<script type="math/tex; mode=display">C' = x C_L + C + x^{-1} C_R \\
= x ( \vec a_R \cdot \vec g_L + \vec b_L \cdot \vec h_R + z_L q) \\
+ \vec a_L \cdot \vec g_L + \vec a_R \cdot \vec g_R + \vec b_L \cdot \vec h_L + \vec b_R \cdot \vec h_R + (\vec a \cdot \vec b) q \\
+ x^{-1} (\vec a_L \cdot \vec g_R + \vec b_R \cdot \vec h_L + z_R q)\\
= (x \vec a_R + \vec a_L)\cdot(\vec g_L + x^{-1} \vec g_R) \\
+ (\vec b_L + x^{-1} \vec b_R)\cdot(\vec h_L + x \vec h_R) \\
+ (x z_L + \vec a \cdot \vec b + x^{-1} z_R) q \\
= (x \vec a_R + \vec a_L)\cdot \vec g' + (\vec b_L + x^{-1} \vec b_R)\cdot \vec h' + (x z_L + \vec a \cdot \vec b + x^{-1} z_R) q</script>
<p>So in order for the commitment to have the Inner Product Property, we need to verify that <script type="math/tex">(x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) = x z_L + \vec a \cdot \vec b + x^{-1} z_R</script>. This is true because</p>
<script type="math/tex; mode=display">(x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) \\
= x \vec a_R \cdot \vec b_L + \vec a_L \cdot \vec b_L + \vec a_R \cdot \vec b_R + x^{-1} \vec a_L \cdot \vec b_R \\
= x z_L + \vec a \cdot \vec b + x^{-1} z_R</script>
<p>This concludes the proof of correctness. Now in order to prove soundness, we need the property that a prover can’t start with a commitment <script type="math/tex">C</script> that does not fulfill the Inner Product Property and end up with a <script type="math/tex">C'</script> that does by going through the reduction step.</p>
<p>So let’s assume that the prover committed to <script type="math/tex">C=\vec a \cdot \vec g + \vec b \cdot \vec h + r q</script> for some <script type="math/tex">r \not= \vec a \cdot \vec b</script>. If we go through the same process as before, we find</p>
<script type="math/tex; mode=display">C' = (x \vec a_R + \vec a_L)\cdot \vec g' + (\vec b_L + x^{-1} \vec b_R)\cdot \vec h' + (x z_L + r + x^{-1} z_R) q</script>
<p>So now let’s assume that the prover managed to cheat, and thus <script type="math/tex">C'</script> fulfills the Inner Product Property. That means that</p>
<script type="math/tex; mode=display">(x \vec a_R + \vec a_L) \cdot (\vec b_L + x^{-1} \vec b_R) = x z_L + r + x^{-1} z_R</script>
<p>Expanding the left hand side, we get</p>
<script type="math/tex; mode=display">x \vec a_R \cdot \vec b_L + \vec a \cdot \vec b + x^{-1} \vec a_L \cdot \vec b_R = x z_L + r + x^{-1} z_R</script>
<p>Note that the prover can choose <script type="math/tex">z_L</script> and <script type="math/tex">z_R</script> freely, so we cannot assume that they will be according to the above definitions.</p>
<p>Multiplying by <script type="math/tex">x</script> and moving everything to one side we get a quadratic equation in <script type="math/tex">x</script>:</p>
<script type="math/tex; mode=display">x^2 ( \vec a_R \cdot \vec b_L - z_L) + x (\vec a \cdot \vec b - r) + (\vec a_L \cdot \vec b_R - z_R ) = 0</script>
<p>Unless all the terms are zero, this equation has at most two solutions <script type="math/tex">x \in \mathbb F_p</script>. But the verifier chooses <script type="math/tex">x</script> after the prover has already committed to their values <script type="math/tex">r</script>, <script type="math/tex">z_L</script> and <script type="math/tex">z_R</script>. The probability that the prover can successfully cheat is thus extremely small; we typically choose the field <script type="math/tex">\mathbb F_p</script> to be of size ca. <script type="math/tex">2^{256}</script>, so the probability that the verifier chooses a value for <script type="math/tex">x</script> such that this equation holds, when the values were not chosen according to the protocol, is vanishingly small.</p>
<p>This concludes the soundness proof.</p>
<h2 id="only-compute-basis-changes-at-the-end">Only compute basis changes at the end</h2>
<p>The verifier has to do two things each round: Compute the challenge <script type="math/tex">x</script> and compute the updated bases <script type="math/tex">\vec g'</script> and <script type="math/tex">\vec h'</script>. However, updating the basis <script type="math/tex">g</script> at every round is inefficient. Instead, the verifier can simply keep track of the challenge values <script type="math/tex">x_1</script>, <script type="math/tex">x_2</script>, up to <script type="math/tex">x_{\ell}</script> that they will encounter during the <script type="math/tex">\ell</script> rounds.</p>
<p>Let <script type="math/tex">\vec g_k, \vec h_k</script> denote the basis after round <script type="math/tex">k</script>. The elements <script type="math/tex">g_\ell</script> and <script type="math/tex">h_\ell</script> are single group elements (vectors of length one), because we end the protocol once our vectors have reached length one. Computing <script type="math/tex">g_\ell</script> from <script type="math/tex">\vec g_0</script> is a multiscalar multiplication (MSM) of length <script type="math/tex">n</script>. The scalar factors for <script type="math/tex">\vec g_0</script> are the coefficients of the polynomial</p>
<script type="math/tex; mode=display">f_g(X) = \prod_{i=0}^{\ell-1} \left(1+x^{-1}_{\ell-i} X^{2^{i}}\right)</script>
<p>and the scalar factors for <script type="math/tex">\vec h_0</script> are given by</p>
<script type="math/tex; mode=display">f_h(X) = \prod_{i=0}^{\ell-1} \left(1+x_{\ell-i} X^{2^{i}}\right)</script>
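<p>Because the fold rule <script type="math/tex">\vec g' = \vec g_L + x^{-1} \vec g_R</script> acts the same way on any vector, we can sanity-check the coefficient formula for <script type="math/tex">f_g</script> purely with scalars modulo a small prime. The following sketch (made-up parameters) compares round-by-round folding against a single linear combination with the coefficients of <script type="math/tex">f_g</script>:</p>

```python
# Check: folding round by round equals one MSM with the coefficients of
# f_g(X) = prod_{i=0}^{l-1} (1 + x_{l-i}^{-1} X^{2^i}), done over scalars.
import random

q = 1019

def poly_mul(u, v):                  # multiply polynomials over F_q
    out = [0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] = (out[i + j] + ui * vj) % q
    return out

ell = 3; n = 2 ** ell
xs = [random.randrange(1, q) for _ in range(ell)]   # challenges x_1 .. x_l
xinvs = [pow(x, q - 2, q) for x in xs]

# Iterative folding: g' = g_L + x^{-1} g_R in each round.
g = [random.randrange(q) for _ in range(n)]
folded = g[:]
for x_inv in xinvs:
    m = len(folded) // 2
    folded = [(folded[i] + x_inv * folded[m + i]) % q for i in range(m)]

# Coefficients of f_g: factor (1 + x_{l-i}^{-1} X^{2^i}) for i = 0 .. l-1.
coeffs = [1]
for i in range(ell):
    factor = [0] * (2 ** i + 1)
    factor[0], factor[-1] = 1, xinvs[ell - 1 - i]   # constant term, x_{l-i}^{-1}
    coeffs = poly_mul(coeffs, factor)

msm = sum(c * gi for c, gi in zip(coeffs, g)) % q
assert folded[0] == msm
```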
<h1 id="using-inner-product-arguments-to-evaluate-polynomials">Using Inner Product Arguments to evaluate polynomials</h1>
<p>For our main application – evaluating a polynomial defined by <script type="math/tex">f(x) = \sum_{i=0}^{n-1} a_i x^i</script> at a point <script type="math/tex">z</script> – we want to make some small additions to this protocol.</p>
<ul>
<li>Most importantly, we want to know the result <script type="math/tex">f(z) = \vec a \cdot \vec b</script>, and not just that <script type="math/tex">C</script> has the “Inner Product Property”</li>
<li><script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script> is known to the verifier. We can thus make things a bit easier by removing it from the commitment</li>
</ul>
<h2 id="how-to-construct-the-commitment">How to construct the commitment</h2>
<p>If we want to verify a polynomial evaluation for the polynomial <script type="math/tex">f(x) = \sum_{i=0}^{n-1} a_i x^i</script>, then we are typically working from a commitment <script type="math/tex">F = \vec a \cdot \vec g</script>. The prover would send the verifier the evaluation <script type="math/tex">y=f(z)</script>.</p>
<p>So it seems like the verifier can just compute the initial commitment <script type="math/tex">C=\vec a \cdot \vec g + \vec b \cdot \vec h + \vec a \cdot \vec b q = F + \vec b \cdot \vec h + f(z) q</script>, since they know <script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script>, and start the protocol.</p>
<p>But not so fast. In most cases, <script type="math/tex">F</script> will be a commitment that is generated by the prover. A malicious prover could cheat by, for example, committing to <script type="math/tex">F = \vec a \cdot \vec g + tq</script>. In this case, they would be able to prove that <script type="math/tex">f(z) = y - t</script>, because they have effectively shifted the result.</p>
<p>To prevent this, we need to make a small change to the protocol. After receiving the commitment <script type="math/tex">F</script> and the evaluation <script type="math/tex">y</script>, the verifier generates a scalar <script type="math/tex">w</script> and rescales the basis <script type="math/tex">q:=wq</script>. Afterwards the protocol can proceed as usual. Because the prover can’t predict what <script type="math/tex">w</script> is going to be, they can’t succeed (except with very small probability) at manipulating the result to be something other than <script type="math/tex">f(z)</script>.</p>
<p>Note that we also need to stop the prover from manipulating the vector <script type="math/tex">\vec b</script> if what we want is a generic inner product – but for a polynomial evaluation, we can simply get rid of that part altogether, so I won’t go into the details.</p>
<h2 id="how-to-get-rid-of-the-second-vector">How to get rid of the second vector</h2>
<p>Note that the verifier knows the vector <script type="math/tex">\vec b = (1, z, z^2, ..., z^{n-1})</script> if what we want is to compute a polynomial evaluation. Given the challenges <script type="math/tex">x_1, x_2, \ldots, x_\ell</script>, they can simply compute the final result <script type="math/tex">b_\ell</script> using the same technique as demonstrated in “only compute basis changes at the end”.</p>
<p>We can thus remove the second vector from all commitments and simply compute <script type="math/tex">b_\ell</script>. This means the verifier has to be able to compute the final version <script type="math/tex">b_\ell</script> from the initial vector <script type="math/tex">\vec b_0 = (1, z, z^2, ..., z^{n-1})</script>. Since the folding process for <script type="math/tex">\vec b</script> is the same as that for the basis vector <script type="math/tex">\vec g</script>, the coefficients of the previously defined polynomial <script type="math/tex">f_g</script> will define the linear combination, in other words <script type="math/tex">b_\ell=f_g(z)</script>.</p>
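<p>A quick numeric sketch of this claim (toy field, made-up parameters): fold <script type="math/tex">\vec b_0 = (1, z, z^2, \ldots, z^{n-1})</script> with the same rule as <script type="math/tex">\vec g</script> and compare the single remaining entry against <script type="math/tex">f_g(z)</script>:</p>

```python
# Check b_l = f_g(z): the b vector folds like the g basis (with x^{-1}).
import random

q = 1019
ell = 3; n = 2 ** ell
z = random.randrange(1, q)
xs = [random.randrange(1, q) for _ in range(ell)]   # challenges x_1 .. x_l
xinvs = [pow(x, q - 2, q) for x in xs]

b = [pow(z, i, q) for i in range(n)]    # b_0 = (1, z, z^2, ..., z^{n-1})
for x_inv in xinvs:                     # b' = b_L + x^{-1} b_R each round
    m = len(b) // 2
    b = [(b[i] + x_inv * b[m + i]) % q for i in range(m)]

# Evaluate f_g(z) = prod_{i=0}^{l-1} (1 + x_{l-i}^{-1} z^{2^i}) directly.
fg_z = 1
for i in range(ell):
    fg_z = fg_z * (1 + xinvs[ell - 1 - i] * pow(z, 2 ** i, q)) % q
assert b[0] == fg_z
```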
<h2 id="creating-an-ipa-for-a-polynomial-in-coefficient-form">Creating an IPA for a polynomial in coefficient form</h2>
<p>So far, we have used an Inner Product Argument to evaluate a polynomial that is committed to by its coefficients, which are the <script type="math/tex">f_i</script> for a polynomial defined by <script type="math/tex">f(X) = \sum_{i=0}^{n-1} f_i X^i</script>. However, often we want to work with a polynomial that is defined using its evaluations on a domain <script type="math/tex">x_0, x_1, \ldots, x_{n-1}</script>. Since any polynomial of degree less than <script type="math/tex">n</script> is uniquely defined by the evaluations <script type="math/tex">f(x_0), f(x_1), \ldots, f(x_{n-1})</script>, these two representations are completely equivalent. However, transforming between the two can be computationally expensive: it costs <script type="math/tex">O(n \log n)</script> operations if the domain admits an efficient Fast Fourier Transform, and otherwise it’s <script type="math/tex">O(n^2)</script>.</p>
<p>To avoid this cost, we try to simply never change to coefficient form. This can be done by changing the commitment to <script type="math/tex">f</script> by committing to the evaluations instead of the coefficients:</p>
<script type="math/tex; mode=display">C = f(x_0) g_0 + f(x_1) g_1 + \cdots + f(x_{n-1}) g_{n-1}</script>
<p>This means that our <script type="math/tex">\vec a</script> vector in the IPA is now given by
<script type="math/tex">\vec a = (f(x_0), f(x_1), \ldots, f(x_{n-1}))</script></p>
<p>The <a href="/ethereum/2021/06/18/pcs-multiproofs.html#evaluating-a-polynomial-in-evaluation-form-on-a-point-outside-the-domain">barycentric formula</a> allows us now to compute an IPA to evaluate a polynomial using this new commitment. It says that</p>
<script type="math/tex; mode=display">f(z) = A(z)\sum_{i=0}^{n-1} \frac{f(x_i)}{A'(x_i)} \frac{1}{z-x_i}</script>
<p>If we choose the vector <script type="math/tex">\vec b</script> to be</p>
<script type="math/tex; mode=display">b_i = \frac{A(z)}{A'(x_i)} \frac{1}{z-x_i}</script>
<p>we get that <script type="math/tex">\vec a \cdot \vec b = f(z)</script>, and thus an IPA with this vector can be used to prove the evaluation of a polynomial which is itself in evaluation form. Other than this, the strategy is exactly the same.</p>
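<p>The barycentric weights are straightforward to compute directly. The following Python sketch (toy prime field, made-up example polynomial and domain) checks that the inner product of the evaluation vector with these weights recovers <script type="math/tex">f(z)</script>:</p>

```python
# Barycentric weights b_i = A(z)/A'(x_i) * 1/(z - x_i); check a . b = f(z)
# for a polynomial given in evaluation form, over a toy field F_q.
import random

q = 1019
inv = lambda t: pow(t, q - 2, q)           # field inverse (q prime)

coeffs = [7, 3, 0, 5]                      # f(X) = 7 + 3X + 5X^3, for checking
f = lambda t: sum(c * pow(t, i, q) for i, c in enumerate(coeffs)) % q

domain = [1, 2, 3, 4]                      # x_0, ..., x_{n-1}
a = [f(x) for x in domain]                 # the committed evaluation vector
z = 77                                     # evaluation point outside the domain

A_z = 1                                    # A(z) = prod_i (z - x_i)
for x in domain:
    A_z = A_z * (z - x) % q
Aprime = [1] * len(domain)                 # A'(x_i) = prod_{j != i} (x_i - x_j)
for i, xi in enumerate(domain):
    for j, xj in enumerate(domain):
        if i != j:
            Aprime[i] = Aprime[i] * (xi - xj) % q

b = [A_z * inv(Aprime[i]) % q * inv(z - domain[i]) % q
     for i in range(len(domain))]
assert sum(x * y for x, y in zip(a, b)) % q == f(z)
```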
<div class="footnotes">
<ol>
<li id="fn:2">
<p>Bootle, Cerulli, Chaidos, Groth, Petit: <a href="https://eprint.iacr.org/2016/263.pdf">Efficient Zero-Knowledge Arguments for Arithmetic Circuits in the Discrete Log Setting</a> <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:1">
<p>Bowe, Grigg, Hopwood: <a href="https://eprint.iacr.org/2019/1021.pdf">Recursive Proof Composition without a Trusted Setup</a> <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>