A simple L2 security and finalization roadmap

Special thanks to various EF, Optimism, Taiko, Flashbots, Surge and other researchers who helped form my thoughts on this.

Today, the state of L2 security and L2 finality guarantees is improving: we now have three rollups at Stage 1, we are on the cusp of rollups getting more blob space with Pectra and then even more with Fusaka, and we have more and more high-quality ZK-EVM options that would allow much shorter finality times. Where can we go from here?

1. More blobs

This has already been discussed in other places; a target of 6 blobs for Pectra, and 72 for Fusaka in Q4 (or, alternatively, 12-24 in Fusaka in Q3, if that is followed up with rapid further increases), feels adequate for L2s’ needs.
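
For a rough sense of what those targets buy, here is a back-of-envelope sketch. The 128 KiB blob size and 12-second slot are from EIP-4844; the ~150 compressed bytes per rollup transaction is purely an assumption for illustration:

```python
# Back-of-envelope DA throughput for the blob targets mentioned above.
# Assumption (not from the post): ~150 compressed bytes per rollup tx.

BLOB_BYTES = 128 * 1024      # data per blob (EIP-4844)
SLOT_SECONDS = 12            # Ethereum slot time
BYTES_PER_TX = 150           # assumed average compressed rollup tx size

for label, target_blobs in [("Pectra target", 6), ("Fusaka target", 72)]:
    bytes_per_sec = target_blobs * BLOB_BYTES / SLOT_SECONDS
    tps = bytes_per_sec / BYTES_PER_TX
    print(f"{label}: {target_blobs} blobs/slot "
          f"≈ {bytes_per_sec / 1024:.0f} KiB/s ≈ {tps:,.0f} TPS")
```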

2. Pragmatic fast finality via 2-of-3 OP + ZK + TEE

I argue that the best short-term proof system architecture for EVM rollups to get to stage 2 is a 2-of-3 between optimistic, ZK and TEE-based provers. Specifically:

  • If a state root is approved by both a ZK prover and a TEE prover, then it is finalized immediately.
  • If a state root is approved by either the ZK prover or the TEE prover but not both, then it is finalized after 7 days only if the optimistic proof game also unambiguously favors the state root.
  • There is (optionally) a security council, which has the right to update the TEE prover logic with zero delay, and the ZK or optimistic prover logic after 30 days of delay.
  • Potentially, we can give the security council upgrade rights in other specific contexts, eg. if a proof system provably disagrees with itself, we could allow the security council to upgrade it instantly.
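
As a minimal sketch of the decision rule in the bullets above (the data structure and function are illustrative only; a real system would track verdicts and timestamps on-chain):

```python
# Minimal sketch of the 2-of-3 finalization rule described above.
# Verdicts are assumed to be booleans supplied by the respective provers.

from dataclasses import dataclass

SECONDS_PER_DAY = 86400
OP_WINDOW = 7 * SECONDS_PER_DAY   # optimistic challenge window

@dataclass
class Verdicts:
    zk: bool            # ZK prover approved the state root
    tee: bool           # TEE prover approved the state root
    op: bool            # optimistic game unambiguously favors it
    age_seconds: int    # time since the state root was proposed

def is_finalized(v: Verdicts) -> bool:
    # ZK + TEE agree: finalize immediately.
    if v.zk and v.tee:
        return True
    # Only one of ZK / TEE approves: wait out the optimistic challenge
    # window and require the OP game to agree as well.
    if (v.zk or v.tee) and v.age_seconds >= OP_WINDOW:
        return v.op
    return False

# Example: ZK approves, TEE does not; finalizes only after 7 days,
# and only if the fraud-proof game also favors the root.
v = Verdicts(zk=True, tee=False, op=True, age_seconds=8 * SECONDS_PER_DAY)
print(is_finalized(v))  # True
```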

This specific architecture is designed to simultaneously satisfy three goals:

  1. Provide instant finality in the normal case
  2. Satisfy the core stage 2 criteria, particularly (i) the requirement that if the “trustless” proof systems work, then nothing “semi-trusted” (either TEE or security council) is able to override them, (ii) the 30-day upgrade delay
  3. Avoid short-term over-reliance on ZK. Today, ZK proof systems still have a high enough rate of bugs, and enough shared code, that it is very plausible that either (i) there is a bug in shared code that affects multiple proof systems, or (ii) an attacker finds a bug in one proof system and holds on to it for long enough that they discover a bug in the other.

In fact, the above architecture is arguably the only way to do this. Specifically, if (for simplicity) you want a 2-of-3 proof system architecture, and ZK, OP, TEE and SC (security council) are the four “proof system” options, then:

  • (1) implies zk + tee >= 2 (OP and SC are both too slow)
  • (2) implies tee + sc < 2 (non-trustless things cannot finalize on their own)
  • (3) implies zk < 2

zk = 1, tee = 1, op = 1 is the only solution to this system of constraints.
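
A brute-force enumeration makes the uniqueness claim easy to check. The encoding below is mine (each prover type gets an integer number of slots in a 3-slot, threshold-2 system), but the three constraints are exactly the ones above:

```python
from itertools import product

# Enumerate slot assignments over (ZK, OP, TEE, SC) in a 2-of-3 system
# (3 slots total, threshold 2), keeping only those that satisfy:
#   (1) zk + tee >= 2   -- the instant path cannot involve OP or SC
#   (2) tee + sc  < 2   -- non-trustless provers cannot finalize alone
#   (3) zk        < 2   -- no over-reliance on ZK
solutions = [
    dict(zk=zk, op=op, tee=tee, sc=sc)
    for zk, op, tee, sc in product(range(4), repeat=4)
    if zk + op + tee + sc == 3
    and zk + tee >= 2
    and tee + sc < 2
    and zk < 2
]
print(solutions)  # [{'zk': 1, 'op': 1, 'tee': 1, 'sc': 0}]
```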

The risk that a ZK system and an OP system will both have bugs (that are found by the same party) is much lower than the same risk for two ZK systems, because ZK and OP are so fundamentally different. In fact, it’s acceptable for the OP system to be ZK-OP (ie. a one-round fraud proof via a different ZK-EVM), because the risk that one ZK-EVM has a soundness failure while the other ZK-EVM has a completeness (ie. liveness) failure is much lower than the risk of two soundness failures.

This gets us a pragmatically higher level of fast finality and security while reaching the key stage 2 milestone of full trustlessness in the case where the proof systems (OP and ZK) work correctly. It will reduce round-trip times for market makers to 1 hour or even much lower, allowing fees for intent-based cross-L2 bridging to be very low.
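
To illustrate why round-trip time dominates the market maker’s cost, here is a rough calculation; the 15% annualized capital cost is an assumed number, not from the post:

```python
# Rough illustration: the inventory cost a market maker must recover per
# cross-L2 transfer scales linearly with the round-trip time.
# The 15% annualized capital cost is an assumption for illustration.

ANNUAL_CAPITAL_COST = 0.15
HOURS_PER_YEAR = 365 * 24

for label, round_trip_hours in [("7-day optimistic exit", 7 * 24),
                                ("1-hour round trip", 1)]:
    cost_fraction = ANNUAL_CAPITAL_COST * round_trip_hours / HOURS_PER_YEAR
    print(f"{label}: capital cost ≈ {cost_fraction * 1e4:.2f} bps of transfer size")
```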

3. Work on aggregation layers

Realistically, we are already on a trajectory to get ZK-EVMs generating proofs in one slot, because this is necessary for L1 use of ZK-EVMs. In fact, a very strong version of this, where we get single-slot proofs even in the worst case, is necessary for L1 use. This also creates pressure for the rapid discovery and removal of the main class of completeness bugs in ZK-EVMs: situations where a block has too many instances of some particular type of ZK-unfriendly computation. The verified ZK-EVM effort will also work to reduce soundness bugs, hopefully allowing us to phase out TEEs and go fully trustless in a few years.

Where we are currently relatively behind is on Ethereum-ecosystem-wide standardized proof aggregation layers. There should be a neutral, ecosystem-wide mechanism by which a prover in any application that uses zero-knowledge proofs (L2s, privacy protocols and zkemail-like wallet recoveries are the most natural initial use cases) can submit their proof, and have one aggregator combine the proofs into a single aggregate proof. This allows N applications to pay the ~500,000 gas cost of on-chain proof verification once, instead of N times.
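
The amortization math is simple. The ~500,000 gas figure is from the paragraph above; the per-application cost of checking inclusion in the aggregate (eg. a Merkle branch) is an assumed placeholder:

```python
# Amortization of on-chain verification cost with a shared aggregation layer.
VERIFY_GAS = 500_000     # gas to verify one proof on L1 (figure from the text)
INCLUSION_GAS = 30_000   # assumed per-app cost to check inclusion in the aggregate

for n_apps in (1, 10, 100, 1000):
    no_aggregation = VERIFY_GAS                        # each app verifies separately
    with_aggregation = VERIFY_GAS / n_apps + INCLUSION_GAS
    print(f"N={n_apps:>4}: {no_aggregation:>9,.0f} gas/app without aggregation, "
          f"{with_aggregation:>9,.0f} gas/app with aggregation")
```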

Can we please properly motivate this publicly?

The public motivations I have read so far follow a “we’ll increase the blob limit because we can” argument.
I’ve seen cases where people say that we have to keep blobs cheap because at their current price point they don’t have a moat. This was only said to me privately.

A lot of “increase the blob limit” motivations were also written in different periods (vibe-wise). I feel like many of them were written in anticipation of a major bull market in which more DA capacity would have been necessary. But this has, so far, not turned out to be an accurate prediction.

Can we please create transparency around increasing the blob limit and officially post the data and our thinking here on why we’re increasing the blob count? E.g. Base has motivated the blob count increase by arguing that their users’ transaction costs are too high, which is just really unreasonable, as their tx costs are consistently below 0.01 USD. Besides, Ethereum captures so little value from Base that they can probably even subsidize their users’ gas costs while still being profitable, can’t they?

Blobs do have a moat from what I know, but maybe they will have more of a moat further down the line? What is the thinking here? Can you share the ideas with us? Or would that give away too much strategic advantage? I’m almost feeling like some information is strategically retained by decision makers.

Consider that I’m not an ETH researcher with deep insights into the mechanisms, and I also don’t talk to one every day. It took me many days to get up to speed on why some people here want to increase the blob count. In my naive logic, more blob supply means even less fee accrual, which makes me worried for the Ether that I hold and its price.

Please trust your intuition @vbuterin. After now roughly a decade in the blockchain space, I have learned that people think very differently, and many (even technically capable people) just don’t have intuition for bigger-picture things.
But in the complex and chaotic world of social interaction / economics / game theory, there are often no hard facts to prove that intuition. Other projects are free to explore different paths… and people are free to sell their ETH if they think Ethereum is heading in the wrong direction and following a misguided path.
But still, let me try to put that intuition into words (even though, after all this time, I’m not very optimistic that people who don’t get it will start to because of it):

Internet-native software - and especially open-source blockchains born and bred there - lives in a highly competitive environment. Network effects provide some stickiness, but in the long run only those that offer the maximum utility to their users (and thereby maybe at some point to the whole world) will survive and thrive. This is because everything can be copied. It’s all public. Code is just information and can travel from A to B at the speed of light. So to maintain a leading position you cannot rely on your first-mover advantage for long. Others will catch up and improve what you didn’t improve, or offer users what you don’t offer. So Ethereum must use its momentum and resources to stay ahead and keep offering the maximum utility among all available alternatives.
This maximum utility - in the context of blockchains - translates to the absolute minimum possible transaction fees while maintaining the additional crucial blockchain properties that offer utility and set blockchains apart from mere servers. These other utility properties are immutability and censorship resistance.
With blockchains, it is as if every software developer now has a magic tool in their toolbox to just upload code to a chain and thereby make it practically immortal - available for everyone, everywhere. This is incredibly powerful, and Ethereum is right to make sure these properties are maintained. Still, to maximize the usefulness of this, transaction fees must go down further. Lots of applications that are easy to imagine are still too expensive to run on a blockchain, even at current fee levels. Now one can ask: which applications are these? There’s no evidence of them. Well, I do see evidence, but you have to look closely. People questioning the need or usefulness of any decentralized mass-scale application have had an easy position (until now) because we do not see them yet… but of course we don’t see them yet: a platform for them that has low enough fees while still offering immutability and censorship resistance is not available yet. There are certain ideas that only now slowly make sense to start working on, as L2s are on the verge of getting cheaper and fully trustless.

Regarding the point of L2s being parasitic… that will sooner or later go away once L2s emerge that simply use ETH (e.g. via restaking) to run their validators and sequencers.

So…

  1. L2s that are ETH-friendly (or even run with ETH as their sole token) will have a competitive advantage, because the base of ETH holders is bigger (and more enthusiastic and idealistic about using the tech) than that of any other token (especially still non-existent ones). It is not easy to widely distribute a token among believers (note that this will change if Ethereum doesn’t compete strongly enough).
  2. Even if 1. weren’t true… there’s simply no alternative to maximizing utility by non-stop working on decreasing transaction fees while maintaining immutability and censorship resistance, because blockchains live in a highly competitive environment. Even if L2s took 95% of the cake, the 5% that a highly competitive Ethereum maintains will be more significant than an outdated and comparatively expensive Ethereum that has lost its network effect a few decades from now.

Below are the fees that rollups currently pay to the ETH mainnet

These are already tiny amounts

And here is a typical Base transaction - it shows that Base pays a tiny proportion of the total user fee to ETH mainnet

And here is the current dominance of Base

It is easy to predict the evolution if blob space grows 10 times: L1 fees, which are already minuscule, will essentially drop to zero, and there will be a single surviving rollup, which is Base

It is already estimated that Base has shaved 50B USD off ETH’s market cap.

Moreover, Base has no plans to decentralize, and there is no economic model for anyone to even provide Base fraud proofs, since running a fraud-detection system in parallel to Base would cost a lot of computational power.

The motivation for 10x’ing blobs is simple. We know for a fact that there are lots of L2s, both currently live and in the works, that are ready to bring thousands of TPS online. This TPS can either come online in a format which is trustless (which requires both sufficient blob count and adoption of a practical stage 2 roadmap like what I wrote above), or it can come online in a format which is basically a separate L1 barely connected to Ethereum.

It’s far better for us to have the former.

If the concern is that L2s are not paying enough gas, then we should fix that by setting a minimum blob gas price. That way L1 gets paid, and at the same time L2s get certainty that a sufficient level of capacity exists to handle their needs, even if their applications see an unexpectedly high level of success.
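
For concreteness, a blob base fee floor could be as simple as clamping the output of the existing EIP-4844 exponential fee rule. The sketch below mirrors the spec’s `fake_exponential`; the floor value is purely illustrative, not a proposal:

```python
# Sketch of a blob base fee floor on top of the EIP-4844 update rule.

MIN_BASE_FEE_PER_BLOB_GAS = 1            # EIP-4844 minimum (1 wei)
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477  # EIP-4844 value (adjusted in later forks)
MIN_BLOB_BASE_FEE_WEI = 10**7            # illustrative floor, not a real proposal

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer approximation of factor * e**(numerator / denominator),
    # as specified in EIP-4844.
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee_with_floor(excess_blob_gas: int) -> int:
    fee = fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                           excess_blob_gas,
                           BLOB_BASE_FEE_UPDATE_FRACTION)
    return max(fee, MIN_BLOB_BASE_FEE_WEI)

print(blob_base_fee_with_floor(0))  # the floor binds when demand is low
```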

Additionally, if we decide to greatly increase the Ethereum L1 gas limit itself, at some point we will need to rely on ZK proofs and on blobs to store execution data, because nodes would not be able to fully download the entire execution data directly. So blob count increases are a necessary waypoint on any realistic Ethereum scaling plan that goes above ~1k TPS.