A rollup-centric ethereum roadmap

I would argue the opposite is easier: generalize by cleanly decoupling state transition from consensus, at both the shard and beacon chain levels, and then swap out the state transition function for a zk-provable one when ready.

So plan for generality, settle for specificity if need be.

IIUC, in the rollup-centric world the beacon chain will be hardcoded to enforce proofs of a data-availability-purpose-built VM, while in the generalized case the state transition function is itself an argument to the beacon chain, so it can be swapped out at any time … hence the need for the plumbing of generality to be built out from the get-go.

IIUC in the rollup-centric world the beacon chain will be hardcoded to enforce proofs of a data-availability-purpose-built VM

How so? The beacon chain would just be exactly the same EVM as it is today. So it could execute fraud proofs (or validity proofs) of any type of rollup that can be implemented in a contract.
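To make the point concrete, here is a minimal Python sketch of that idea (all names are hypothetical, and this is contract-shaped pseudocode, not a real L1 contract): a bridge that stores claimed state roots for arbitrary rollup batches and settles a fraud proof by re-executing the disputed transition with whatever state transition function the rollup supplies.

```python
# Hypothetical sketch: an L1-contract-style object that works for ANY rollup,
# because the rollup's state transition function is just a parameter.
# Names (RollupBridge, submit_root, challenge) are illustrative.

class RollupBridge:
    def __init__(self, transition_fn):
        # transition_fn: the rollup's state transition; the L1 only needs
        # to be able to run it once inside a fraud proof.
        self.transition_fn = transition_fn
        self.roots = []    # claimed post-state roots, one per batch
        self.batches = []  # the batch data behind each claimed root

    def submit_root(self, batch, claimed_root):
        # Optimistic path: accept the claim without executing it.
        self.batches.append(batch)
        self.roots.append(claimed_root)

    def challenge(self, index, pre_state):
        # Fraud proof: re-execute batch `index` from the agreed pre-state
        # and compare with the claimed root. Returns True if fraud is found.
        actual_root = self.transition_fn(pre_state, self.batches[index])
        return actual_root != self.roots[index]
```

Any rollup whose transition can be expressed as a contract-callable function fits this shape, which is the forward-compatibility being claimed above.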

I have another objection to the rollup world, which comes from our stateless roadmap: rollups typically come without witnesses. While a rollup with witnesses is possible, it will probably not be implemented, because from the point of view of rollup implementers, witnesses feel like pure overhead with no benefit, so why add them?

As long as rollup sequencers and validators stay separate, that is probably fine. In the long run, however, this system will feel very inefficient: we want to use the massive capital invested in layer 1 to also secure layer 2. In fact, as I have argued before, I think pure rollups offer terrible UX, and this can only be fixed by adding staked capital to secure rollup states (insuring users against fraud proofs). If we want validators to do this, it means validators will have to accept maintaining the rollup state in addition to their other duties. Of course, this will be “optional” – but since it brings in additional returns, validators not doing it will essentially be priced out (they won’t have their costs covered by the now-lower returns on “pure” staking).

TL;DR: Rollups will force stateful shards on us through the backdoor. I think this is bad.

3 Likes

Which of the following worlds do you think is long-term best?

  1. Proposers select which shard they are on and specialize in that shard (in this case it’s okay if the proposers need to be stateful as long as there are zkps for the rest of the network, which it seems like we are assuming we will have)
  2. Proposers are forced to rotate between shards, but they get block contents from a third class of actor (relayers?) that is stateful
  3. Proposers are forced to rotate between shards, proposers are stateless, and it’s users who need to be stateful

I feel like a zkrollup is totally forward-compatible with any of these three strategies, no?

1 Like

A zkrollup does indeed solve my concerns. But I am far from convinced that general computation zk rollups are coming as fast as people wish.

I believe general-purpose computational integrity proofs (CIPs) are currently about 10^9–10^12 times slower than simply doing the computation. So I believe that for the foreseeable future it will be much more efficient to send a committee of 1000 to check the correctness of a computation than to build a super-powerful sequencer that proves correctness to everyone. This will change if that factor comes down a lot (I doubt it will ever be less than 10^6, much less 10^3), or if the proving cost itself becomes negligible, such that other costs like latency (for the user) and bandwidth dominate.
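The trade-off above can be sketched numerically. A back-of-the-envelope comparison, where all figures (committee size, overhead factors) are illustrative assumptions rather than measurements:

```python
# Compare total work: a 1000-node committee re-executing a computation
# vs. one sequencer proving it with a CIP that is `overhead`x slower
# than native execution. All numbers are assumptions for illustration.

def committee_cost(base_cost, committee_size=1000):
    # every committee member re-runs the computation
    return base_cost * committee_size

def prover_cost(base_cost, overhead):
    # one sequencer pays the full proving overhead; verification is ~free
    return base_cost * overhead

base = 1.0  # normalized cost of running the computation once
for overhead in (10**3, 10**6, 10**9, 10**12):
    winner = "prover" if prover_cost(base, overhead) < committee_cost(base) else "committee"
    print(f"overhead {overhead:.0e}: {winner} is cheaper")
```

The break-even is exactly at overhead ≈ committee size, which is why the 10^6-and-up estimates in the post make the committee look so much cheaper.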

I think GP zkrollups for a realistic EVM alternative are further out than some people think.

3 Likes

What about the implications of rollups for composability? There have been a few discussions, and to me it seems that composability between rollups is even harder than composability between shards.

1 Like

Gas fees can be reduced easily by January.
Change the Ethash algorithm to take ASICs off the network, and GPU miners will agree to an ETH issuance reduction equal to the increase in their rewards, after 30 days of hashrate stability.

But I am far from convinced that general computation zk rollups are coming as fast as people wish.

If general-purpose ZK rollups are still far away, then that nullifies reason 1 that originally contributed to kicking off this part of the discussion:

(1) validity proofs of general computation are already here; see Cairo, Zinc and Noir by StarkWare, Matter Labs and Aztec respectively. So it is not inconceivable that the runtimes of shards in Phase 2 are provable in zk.

If we can’t have ZK rollups for general computation, then the optimistic and ZK families will both have durable value for quite some time. That implies there is no single architecture that’s optimal for all cases, so giving users a choice between the rich-but-optimistic environment and the limited-but-instantly-zk-proven environment (by having both kinds of rollups) is the best thing we can do…

1 Like

You mentioned in the ETH Online talk that such enshrining would only be necessary if the winning rollup abuses its position and behaves in an unfair way towards the community. I totally understand this motivation (and agree with it), but it’s very important to draw the red line very clearly when such things are mentioned. Because, obviously, a “nationalization” like this would be quite a harsh move. Even the threat of it could undermine the idea that Ethereum is a nation where property rights are respected. Imagine, for example, that Uniswap decides to introduce fees, and since it’s a protocol of “systemic importance”, the community deems UNI “too extractive” and nationalizes it…

What would be a fair behavior of a protocol that wins a lot of popularity on Ethereum?

2 Likes

You mentioned in the ETH Online talk that such enshrining would only be necessary if the winning rollup abuses its position and behaves in an unfair way towards the community. I totally understand this motivation (and agree with it), but it’s very important to draw the red line very clearly when such things are mentioned. Because, obviously, a “nationalization” like this would be quite a harsh move. Even the threat of it could undermine the idea that Ethereum is a nation where property rights are respected. Imagine, for example, that Uniswap decides to introduce fees, and since it’s a protocol of “systemic importance”, the community deems UNI “too extractive” and nationalizes it…

Agree there’s a lot of things to be careful about here, and thank you for bringing this up explicitly. It’s better to discuss such things earlier rather than later, when tens of millions of dollars will have been invested.

I’ll walk back on being “totally fine with enshrining it”; I was too quick to make that statement, thinking only through the narrow technical considerations and not the whole picture.

First of all, to be super clear I would definitely oppose “state-intervention forks” (ie. DAO-style forks) that just grab the state root of a rollup and import it into another system, stripping the token out in the process. A state-intervention fork is the only way to truly cleanly move everyone over from an L2 to an L1 “by default”, so it seems to be off the table by existing norms.

That said, there are other things that in theory could be done. One possibility is the ethereum protocol forking existing code to create a native execution capability, and inviting users to voluntarily move into that system. This would not be a violation of immutability or property rights, but it would be a gross violation of open-source politeness norms. And if the ethereum community commits not to do such a direct fork except in case of some kind of malicious exploitation of monopoly power, that could significantly boost L2 projects’ confidence in building on the ecosystem. I am inclined to also support such a commitment.

The truly tough thing though is dealing with all of the less clear possibilities. At the very least, if ethereum makes a native sharded execution capability, that would compete with all the L2s that have been made until that point, and it would have an unfair advantage. And the concept of doing that versus forking an existing protocol is not a binary, it is a spectrum. For example, if ethereum makes a native L2 execution capability using SNARKs or STARKs, that will doubtlessly use at least some open-source research and software packages that were originally built at least in part with L2s in mind.

There’s a limit to how strong a commitment we can make, because there’s also the possibility that we learn something new in 2-4 years that makes an ethereum-native sharded execution layer a really good idea and crucial to the ongoing success of the project. If we extend from 1 execution shard to 8 execution shards, where the new execution shards have some different non-EVM language that’s designed to be ZK-provable, but where the goal is not to compete with ZK rollups, is that a problem? I think the best we can promise is to be fair to existing L2 projects and not do things that intuitively feel like pulling the rug out from under them.

A final possibility is some kind of “coin merger” with L2 projects (with their teams and token holders’ consent), but this risks being too controversial because it interferes with the goal of ETH monetary neutrality.

4 Likes

@vbuterin could you elaborate further why there is this reduction in TPS?


I think there is broad consensus on this. What is being questioned is whether having 64 shards is better or worse long-term. What I argue is that rollups committing to 64 shards >= rollups committing to 1-4 data-availability-first shards, for the reasons mentioned above. I use “>=” to signify that in the worst case where general-compute shards were built but not used, the goals of the rollup-centric path are still fully realized, while the inverse is not true … if it turned out that we need to build general-compute shards (say, due to centralization concerns or rogue rollups), then the costs will be much higher. We see this with technical debt in Eth1.x.

Once phase 1 comes along and rollups move to eth2 sharded chains for their data storage, we go up to a theoretical max of ~100000 TPS.

Is there any detailed document on how to use Phase 1 for Rollup? Since Eth2 Phase 1 does not have “transactions” as in Eth1, I’m assuming that:

  • a Rollup operator must run Eth2 validator(s), and
  • one of the Rollup operator’s validators must be elected as a shard block proposer and create a shard block themselves to put the Rollup transactions on an Eth2 shard.

I have some questions about this.

First, does the Rollup operator need to run many validators (i.e., stake a lot) to reduce the latency of finality in the Rollup? In the worst case, if the Rollup operator runs only one validator and the shard committee size is 2048 (the maximum in the spec), the operator has an opportunity to commit Rollup transactions once every 2048 slots (~6.8 hours) in expectation.
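The 6.8-hour figure works out as follows (12 seconds per slot, as in the eth2 spec; the proposer-selection model is simplified to one uniform draw per slot):

```python
# Expected waiting time for a rollup operator's own validator to be
# elected shard block proposer. Model: one proposer drawn uniformly per
# slot from `committee_size` validators (a simplification).

SECONDS_PER_SLOT = 12

def expected_slots_between_proposals(operator_validators, committee_size=2048):
    # An operator running k validators proposes with probability
    # k / committee_size each slot, i.e. once every committee_size / k
    # slots in expectation.
    return committee_size / operator_validators

slots = expected_slots_between_proposals(1)
hours = slots * SECONDS_PER_SLOT / 3600
print(f"{slots:.0f} slots ≈ {hours:.1f} hours")  # 2048 slots ≈ 6.8 hours
```

Running more validators shrinks the wait proportionally: 32 validators would cut the expected gap to 64 slots (~13 minutes) under the same assumptions.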

Second, if most of the Eth2 validators are not Rollup operators, can we make full use of the data capacity of Eth2? Since running an Eth2 validator and operating a Rollup are different things in terms of responsibilities and economics, I assume most (or at least some portion) of Eth2 validators are not interested in Rollups.

Or, is there any plan to introduce (in-protocol or off-chain) “transactions” for users other than validators to put data on Eth2 shards?

2 Likes

Or, is there any plan to introduce (in-protocol or off-chain) “transactions” for users other than validators to put data on Eth2 shards?

There is a plan to support a fee market for users to request that Eth2 validators include their data in Phase 1. Therefore, the above concern might not be a problem. (Thanks @djrtwo!)

@vbuterin could you elaborate further why there is this reduction in TPS?

A shard as defined today needs to have its TPS capped at ~10-50 just to ensure that state sizes remain reasonable, a node can verify incoming blocks and maintain an up-to-date state, etc.

If we have a ZK-proven VM at the base layer, then potentially we could have much higher TPS.
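For a rough sense of where the larger data-availability numbers come from, here is the arithmetic with illustrative parameters (the shard count, per-block data size, and bytes-per-transaction figures are assumptions for this sketch, not spec constants):

```python
# Rough throughput arithmetic for the data-availability path: rollups
# post compressed transactions to shard data, so capacity is
# (total data bandwidth) / (bytes per compressed tx).
# All parameters below are illustrative assumptions.

SLOT_SECONDS = 12
shards = 64                       # assumed shard count
bytes_per_shard_block = 256 * 1024  # assumed data per shard block per slot
bytes_per_tx = 16                 # assumed compressed rollup transfer size

data_tps = shards * bytes_per_shard_block / SLOT_SECONDS / bytes_per_tx
print(f"data-layer rollup TPS ≈ {data_tps:,.0f}")  # ≈ 87,000 with these assumptions
```

Compare that with a directly executed shard capped at ~10-50 TPS: the gap comes from the fact that data availability scales with bandwidth, while execution is capped by what a single node can re-verify statefully.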

1 Like

@vbuterin is there any effort to ensure that integrating L2 in the form of rollups requires minimal changes in client applications that currently only use the ETH JSON-RPC standard to send transactions or read blockchain state, and web3 to generate signatures?

It could be a huge problem to tell every client that they need to adopt a new data format when they read from L2.

1 Like

This is very important. Most people are used to having a bank to guide them and take responsibility for security issues, including social-engineering ones. And when they are scammed, banks go and revert the transaction, or at least refund them.

It’s very hard for many people to be responsible for everything they do. They have trouble understanding, for example, the difference between a simple transfer and a token swap.

Many wallets and online tools make it hard to view token balances. People mix up a wallet that hasn’t discovered a token (and so doesn’t show its balance) with an actual zero balance.

I can imagine how hard it will be to figure out the difference between the balance on L1 and on each L2 instance.

But my biggest concern with L2 is whether we can choose to make a tx on L1 or L2, or whether we’re forced to use L2. If we need to pay a fee to deposit and then another to withdraw, L2 is a no-go for long-term investors who just want to buy/deposit/loan a token and hold it for some years.

To do that on L1 requires only 1 tx. If L2 requires 2 L1 txs (deposit, then withdraw), then we’re paying double gas.
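The double-gas point can be made concrete with illustrative numbers (the deposit/withdraw gas costs below are hypothetical; real costs vary by rollup):

```python
# Cost of "buy and hold for years" on L1 vs. via an L2 round trip.
# All gas figures except the 21,000 base transfer are hypothetical.

GAS_PRICE_GWEI = 50
l1_transfer_gas = 21_000                      # standard ETH transfer
l2_deposit_gas, l2_withdraw_gas = 50_000, 100_000  # hypothetical L2 bridge costs

hold_on_l1 = l1_transfer_gas * GAS_PRICE_GWEI
hold_via_l2 = (l2_deposit_gas + l2_withdraw_gas) * GAS_PRICE_GWEI
print(f"L2 round trip costs {hold_via_l2 / hold_on_l1:.1f}x one L1 transfer")
```

The amortization cuts the other way for active users: someone making many transactions between deposit and withdrawal spreads those two L1 fees over all of them, which is why L2 suits frequent traders better than buy-and-hold investors.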

1 Like

Hey all,

We are going to host the Ethereum L2 Future virtual event sessions, starting tomorrow, Friday the 13th, at 3pm UTC

RSVP to the first one starting tomorrow.

More info regarding the event sessions: L2 Future Session 1: "Starting with L2s" - Intro, review of solutions, mapping out needs

I think this will be a little bit hard in an async network.

I actually have some concerns.

Data across rollups is somewhat isolated and won’t be accessible on chain until users exit (and user exits are intuitively infrequent). This usually won’t be a big issue for normal transfers or for smart contracts making use of L2 (for example, https://zksync.curve.fi/).

However, this may lead to

  1. inconvenience in making use of protocols’ composability and building DeFi legos.
  2. less instant verifiability of oracles, if some data sources come from rollups.
  3. semi-centralized rollup-based DEXs – liquidity tends to gather together.

Hey @vbuterin, this post and the whole other section are a gold mine, even though I’m a beginner…
I can sense the wisdom in those words… Thank you… I’ll come back later, after I’ve upgraded my knowledge of the Ethereum blockchain.