A rollup-centric ethereum roadmap

A zk-zk-rollup can go down to just publishing the nullifiers and UTXOs, and you can get the nullifiers down to 10 bytes since you don’t need collision resistance, so that’s 60 bytes (assuming the 80-bit security margin you appear to be using).
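
For concreteness, here is one way that arithmetic could work out (the 2-input/2-output breakdown below is my assumption, not something stated above): nullifiers only need preimage-style security, so 80 bits of security allows 10-byte nullifiers, while output commitments still need collision resistance and so roughly twice that length.

```python
# Hypothetical breakdown of per-tx on-chain data for a zk-zk-rollup,
# assuming (my assumption) a 2-input/2-output transfer and an 80-bit
# security margin.

SECURITY_BITS = 80

# Nullifiers only need preimage-style security: output length ~ security level.
nullifier_bytes = SECURITY_BITS // 8          # 10 bytes

# Output commitments (UTXOs) need collision resistance: ~2x the security level.
commitment_bytes = 2 * SECURITY_BITS // 8     # 20 bytes

inputs, outputs = 2, 2
per_tx = inputs * nullifier_bytes + outputs * commitment_bytes
print(per_tx)  # 60 bytes
```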

2 Likes

Besides:

  1. There are schemes where the recipient does not need to transact immediately (cutting the data cost again by half).
  2. @snjax, you are talking about Groth16 with an app-specific trusted setup, which is hard to consider a viable, practical and trustworthy solution in 2020. You need a universal PLONK at the very least, but eventually you want to move to fully transparent proof systems (STARKs, RedShift, Halo). In either scenario the compressed proof size is on the order of kilobytes—and must be posted on-chain by optimistic rollups.

Of course, you could aggregate the proofs for many transactions in the block without checking them on-chain, but this construction would not be an optimistic rollup—it would be a ZK rollup with optimistic verification: combining the worst of both worlds.

1 Like

You are comparing an optimistic rollup that uses a proving system developed for zk rollups (in the single-transaction-proof case) with zk-zk-rollups.

As you mentioned, the proof size there will be some kilobytes, whereas for Groth16 in an optimistic rollup it’s 128 bytes per tx.

1 Like

OK. Optimistic rollup is ~5x worse with Groth16, and ~100x worse with anything without an app-specific trusted setup. We’ll let the reader draw their own conclusions.

1 Like

Btw, are there any community-approved methods to store off-chain data for private transactions (like encrypted UTXOs)? If both the user and the data storage forget the data, the assets will be stuck. Also, if the data storage forgets the data and the user was unable to send it directly to the receiver, the assets will be stuck too.

1 Like

Wait, why 100x worse? PLONK proofs are only around 500 bytes, no?

1 Like

PLONK proofs are over 1 kB compressed, and Bulletproofs, STARKs and RedShift proofs are a lot bigger. Hence the factor on the order of a hundred.

1 Like

Halo 2 applied to multiple proofs that are made together, even without use of recursion, will have a marginal proof size on the order of 100-300 bytes (preliminary estimate): https://www.youtube.com/watch?v=pCSjN9__mtc&t=2100

2 Likes

This is really cool!

But my understanding is that verifying it in a single fraud proof is infeasible on Ethereum, even if arithmetic pre-compiles for EC cycles were available (which they aren’t). Am I wrong?

Even if we assume it would somehow magically be feasible, Halo transactions in ORs would still be at least (200*2+60)/40 ~ 10x more expensive than in ZKRs.
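
To spell out the arithmetic behind that estimate (my reading of the terms, which the line above does not fully break down):

```python
# Rough restatement of the (200*2 + 60) / 40 estimate above. Assumed meaning
# of the terms: ~200 bytes of marginal Halo proof data per transaction,
# counted twice, plus ~60 bytes of transaction data posted by the optimistic
# rollup, versus ~40 bytes of pure data per transaction in a ZK rollup where
# the proof cost is amortized over the whole batch.
halo_marginal_proof = 200   # bytes, preliminary estimate from the talk above
or_tx_data = 60             # bytes of call data per tx in the optimistic rollup
zkr_tx_data = 40            # bytes of call data per tx in the ZK rollup

ratio = (2 * halo_marginal_proof + or_tx_data) / zkr_tx_data
print(ratio)  # 11.5, i.e. roughly 10x
```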

1 Like

If verification of a proof takes >10m gas, you can always use truebit-style multi-round techniques to verify it, I suppose. Though intuitively I dislike multi-round techniques because they mean that an attacker willing to get slashed can stall for time (during which only full nodes can ignore the attack, light nodes can’t), whereas in a single-round fraud proof system attacks can be caught and reverted immediately.
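
For illustration, a minimal sketch of what such a multi-round (bisection) dispute could look like, assuming the expensive verification can be split into many small steps; all names here are hypothetical:

```python
# Minimal sketch of a Truebit-style bisection game (hypothetical names).
# The expensive computation (e.g. verifying a large proof) is broken into many
# small steps; asserter and challenger bisect over the claimed intermediate
# states until they pin down a single step, and only that step is re-executed
# on-chain.

from typing import Callable, List

def find_disputed_step(claimed_states: List[bytes],
                       recompute_state: Callable[[int], bytes]) -> int:
    """Return an index i such that both sides agree on the state before step i
    but disagree on the state after it; only step i is re-executed on-chain.

    claimed_states[i]  : asserter's claimed state after step i
    recompute_state(i) : challenger's honestly recomputed state after step i
    Assumes the final claimed state (the overall result) is disputed.
    """
    agree, disagree = -1, len(claimed_states) - 1   # -1 = agreed-upon input
    while disagree - agree > 1:
        mid = (agree + disagree) // 2
        if claimed_states[mid] == recompute_state(mid):
            agree = mid       # agreement up to mid: the fault lies later
        else:
            disagree = mid    # disagreement at mid: the fault is at or before it
    return disagree

# Each round of this game is one on-chain move, which is exactly why an
# attacker willing to be slashed can stall for several rounds before losing.
```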

7 Likes

@vbuterin can you expand on the benefits of the ETH burn and next-block inclusion for rollups? Is the latter a UX-nice-to-have or is there something more there? Thanks!

1 Like

One thing I am curious about is the inevitable state growth of the L2s and its impact on users, whether they are dapp developers, sysadmins, auditors, or others interested in the txn history. We often think of a user as an operator, but we need to put more focus on the “back office”.

Does anyone have any comments for this issue specifically? Perhaps it requires an entirely new thread? @tjayrush

These various kinds of L2s – ORUs, ZK Rollups, Plasmas, even state channels – can coordinate on standards for archive nodes, querying, exporting, etc. This problem domain is already very challenging for mainnet.

IMO, when users can’t easily run a full archive node themselves, it invites centralization and surveillance of data users, and even this non-ideal situation is only stabilized by the constraints of Eth1. Imagine what happens when L2s with composability lead to 10x or 100x more data per day.

4 Likes

@timbeiko next-block inclusion is important for rollups because they rely on data being posted to the main chain for finality and to progress the rollup side chain. Therefore rollup aggregators need to be able to reliably post their transactions in the next block rather than getting stuck with a pending transaction, as that would essentially delay the entire rollup chain from updating its state.

3 Likes

Talk on this by Vitalik is now available: https://youtu.be/r0jtV9mxdI0?t=52

1 Like

Maybe phase 1.5 could consider a more trustless mempool service with quadratic demand modeling. This would better align the interests of client networks vs. whales by orchestrating throughput that is less influenced by MEV, which is now more desirable for L1.

Miner fee-based compensation is similar to the square area of a funding pool, and the quadratic funding mechanism might be useful to better coordinate mempool service. The first step, however, is challenging, since the concept of “one vote”, which in this case is a request for network service, requires some attention.

To address that, the value of transaction fees can be denominated in terms of a fair average (updated periodically) to form a basis for proportional mempool service according to the square root of that value.

For example, if an average transaction fee is deemed $2 for some window of time, a $2 fee is represented with a weight of 1 ($2 divided by $2). If there is a $100 fee opportunity pending, it would scale to 50 ($100 divided by $2). Then the square root of the weights would reveal how resources can serve the pool proportionally. The square root of 50 (the $100 opportunity) is about 7 with respect to fees in the $2 range with weights and roots near 1, so proportionally seven $2 transactions would be processed for each $100 transaction.
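
A small sketch of that weighting, under the same assumptions as the example above (the helper name is mine):

```python
# Sketch of the square-root fee weighting described above.
import math

def service_weights(fees, average_fee):
    """Weight each pending fee by fee / average_fee, then take the square root,
    so service is proportional to sqrt(fee / average_fee)."""
    return [math.sqrt(fee / average_fee) for fee in fees]

avg = 2.0                       # "fair average" fee for the current window, in $
fees = [2.0, 100.0]             # a typical fee and a whale fee
w = service_weights(fees, avg)  # [1.0, ~7.07]

# A $100 fee gets ~7x the service of a $2 fee (not 50x), so roughly seven
# $2 transactions are processed for each $100 transaction.
print(w[1] / w[0])              # ~7.07
```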

1 Like

I have been called a rollup shiller before, but I am bearish on this rollup-centric roadmap and in favor of keeping the original plan of general-compute shards (ETH2 Phase 2), for a couple of reasons:

(1) Validity proofs of general computation are already here; see Cairo, Zinc and Noir by StarkWare, Matter Labs and Aztec respectively. So it is not inconceivable that the runtime of shards in Phase 2 could be provable in zk. Yes, rollbacks of ORUs are localized to the offending contracts and not the network as a whole, but ORUs are a short-term solution anyway and may not even be universally viable because of the capital efficiency vs. security problem. So there seems to be a premature optimization here.

(2) By reducing itself to merely a data-availability and proof-enforcing layer, Ethereum risks losing its hard-earned network effects. These effects “grow” out of webs of interdependencies between L1 contracts … and by removing general compute, those webs start growing as inter-L2 dependencies. But then rollup #1 can keep its dependency on rollup #2 without necessarily being dependent on L1 for security: it just runs a light client for ETH2.0 and outsources its own security to some other chain. This is different from when L2s rely heavily on logic on L1 to function. We should push to thicken the amount of logic on L1, not reduce it.

(3) If Ethereum takes over the world, there are many use cases that require nation-state-grade and enterprise-grade security and availability and may not tolerate the possibility of a withdrawal delay due to an uncooperative ZKRU chain or detected fraud in an ORU. Example: end-of-day inter-bank settlement logic … these people are going to want to do this directly on L1.

(4) Having a proof-powered state- and data-sharded ETH2.0 should improve scalability massively, because the beacon chain is only verifying proofs of proofs (whether fraud or validity) … so I don’t see why scalability goes down (?). Unless I’m missing something here.


TLDR: Rollback risks are not existential. Validity proofs of general compute are here, and prover markets and ASICs will pop up if necessary. Ethereum network effects are at risk if shards are not general-compute.

If we can make EVM execution zk-provable, and the ethereum ecosystem ends up preferring one particular zk-provable system, I’d be totally fine with considering enshrining it as the core execution model at some point [EDIT: see caveats below]. Though I would still favor delaying any such “phase 2” until such zk-proofing is actually possible.

The good news is that the rollup-centric roadmap doesn’t remove our ability to take this path; it’s much easier to add new functionalities to the ethereum base environment in the future than to take them away.

I would argue the opposite is easier: generalize by cleanly decoupling the state transition from consensus, at both the shard and the beacon chain level, and then swap out the state transition function for a zk-provable one when ready.

So plan for generality, settle for specificity if need be.

IIUC, in the rollup-centric world the beacon chain will be hardcoded to enforce proofs for a purpose-built data-availability VM, while in the generalized case the state transition function is itself an argument to the beacon chain, so it can be swapped out anytime … hence the need for the plumbing of generality to be built out from the get-go.
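
A rough sketch of what that decoupling could look like, purely illustrative and with hypothetical names: consensus is written against an abstract state transition interface, and a zk-provable implementation can be swapped in later without touching the consensus layer.

```python
# Sketch of decoupling consensus from execution (all names hypothetical).

from typing import Protocol

class StateTransition(Protocol):
    def apply(self, pre_state_root: bytes, block_body: bytes) -> bytes:
        """Return the post-state root for a block applied on top of pre_state_root."""
        ...

class OptimisticEVMTransition:
    def apply(self, pre_state_root: bytes, block_body: bytes) -> bytes:
        # execute the block, rely on fraud proofs for correctness
        ...

class ZkProvableTransition:
    def apply(self, pre_state_root: bytes, block_body: bytes) -> bytes:
        # verify a validity proof carried in block_body instead of re-executing
        ...

def process_shard_block(stf: StateTransition, pre_state_root: bytes, body: bytes) -> bytes:
    # consensus only cares that *some* STF maps the pre-state to a post-state
    return stf.apply(pre_state_root, body)
```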

IIUC, in the rollup-centric world the beacon chain will be hardcoded to enforce proofs for a purpose-built data-availability VM

How so? The beacon chain would just be exactly the same EVM as it is today. So it could execute fraud proofs (or validity proofs) of any type of rollup that can be implemented in a contract.

I have another objection to the rollup world, which comes from our stateless roadmap: rollups typically come without witnesses. While a rollup with witnesses is possible, it will probably not be implemented, because from the point of view of the rollup and its implementers, witnesses will feel like pure overhead with no benefit, so why add them?

As long as rollup sequencers and validators stay separate, that is probably fine. However, in the long run, this system will feel very inefficient: we want to use the massive capital invested in layer 1 to also secure layer 2. In fact, as I have argued before, I think pure rollups offer terrible UX, and this can only be fixed by adding stake capital to secure rollup states (insuring users against fraud proofs). If we want validators to do this, it will mean that validators will have to accept maintaining the rollup state in addition to their duties. Of course, this will be “optional”, but since it brings in additional returns, all validators not doing it will essentially be priced out (because they won’t have their costs covered by the now-lower returns on “pure” staking).

TL;DR: Rollups will force stateful shards on us through the back door. I think this is bad.

3 Likes