The upper limit of 14 TB/year I quoted was for raw state only, assuming you run completely pruned with no overhead.
In reality it is quite unlikely that 100% of gas is spent on storage, but on the flip side there is also significant overhead beyond the raw state (old state if you don't prune instantly, the supporting data structures, indexes, etc.).
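As a rough sanity check (the exact inputs here are my assumptions, not part of the original estimate): take today's 36M gas limit scaled 100x, 12-second blocks, and 22,100 gas per new 32-byte storage slot (20,000 for the SSTORE plus 2,100 for the cold access):

$$
\frac{36\times10^{6}\times 100\ \text{gas/block}}{22{,}100\ \text{gas/slot}}\times 32\ \text{B/slot}\times\frac{365\times 86{,}400\ \text{s/yr}}{12\ \text{s/block}}\approx 13.7\ \text{TB/yr},
$$

which lands right around the 14 TB/year figure.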
I completely agree with others — this is potentially a fantastic step forward!
A few remarks:
I understand why you’re proposing a fixed formula — it pushes progress and helps avoid the need for consensus every time the block limit is raised. That said, if you go this route, there needs to be a pre-assessment from client teams to ensure the schedule is feasible.
Raising the gas limit 100x targets an end goal of around 1000 TPS (mainnet handles on the order of 10-15 TPS today, so a 100x increase lands right in that neighborhood).
For EVM implementations in C++ or Rust, this is achievable with relatively minor modifications.
The main bottleneck will be updating state, particularly for the few archive nodes that store the entire history. The issue is that current state updates are fully single-threaded and sequential, so throwing a powerful multicore machine at the problem doesn't help. At 1000 TPS, full-history nodes may not be able to keep up with nodes that only maintain the current state.
The simplest approach would be to shard the Merkle trie so it can be stored across multiple key-value database shards (e.g., LevelDB). The easiest method is to shard by the first bytes of the key.
You could then have, say, 256 shards, each potentially stored on separate SSDs or even separate virtual machines. With this kind of sharding, performance can scale nearly linearly with the number of shards.
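Here is a minimal Go sketch of that routing, with in-memory maps standing in for the per-shard LevelDB instances (all names here, `ShardedState`, `CommitBatch`, and so on, are mine and purely illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// KV stands in for a per-shard key-value store. In practice each shard
// would be its own LevelDB instance, ideally on a separate SSD or VM.
type KV interface {
	Put(key, value []byte)
}

// memKV is an in-memory placeholder shard for this sketch.
type memKV struct {
	mu sync.Mutex
	m  map[string][]byte
}

func (s *memKV) Put(key, value []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.m[string(key)] = value
}

// ShardedState routes every trie key to one of 256 shards by its first byte.
type ShardedState struct {
	shards [256]KV
}

func NewShardedState() *ShardedState {
	st := &ShardedState{}
	for i := range st.shards {
		st.shards[i] = &memKV{m: make(map[string][]byte)}
	}
	return st
}

// CommitBatch groups writes by shard and applies each group in its own
// goroutine; independent shards never contend, which is where the
// near-linear scaling comes from.
func (st *ShardedState) CommitBatch(writes map[string][]byte) {
	groups := make(map[byte][][2][]byte)
	for k, v := range writes {
		groups[k[0]] = append(groups[k[0]], [2][]byte{[]byte(k), v})
	}
	var wg sync.WaitGroup
	for shard, kvs := range groups {
		wg.Add(1)
		go func(s KV, kvs [][2][]byte) {
			defer wg.Done()
			for _, kv := range kvs {
				s.Put(kv[0], kv[1])
			}
		}(st.shards[shard], kvs)
	}
	wg.Wait()
}

func main() {
	st := NewShardedState()
	st.CommitBatch(map[string][]byte{
		"\x00alice": []byte("1"), // routed to shard 0x00
		"\xffbob":   []byte("2"), // routed to shard 0xff, written concurrently
	})
	fmt.Println("batch committed across shards")
}
```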
However, this isn’t currently possible with the existing Merkle trie, as it can’t be represented as an aggregation of shard-level Merkle roots. This is a fundamental limitation — but one that could be addressed with a relatively simple change to the Ethereum spec.
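To make the required change concrete, here is one shape it could take (my illustration, not a concrete proposal from the post): fix the top level of the trie at 256 children, one per first byte, so that each shard computes its subtree root independently and the global root is a pure aggregation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// TopRoot sketches the spec change: each shard maintains the Merkle root
// of its own subtree, and the global state root simply commits to all
// 256 shard roots. sha256 is a stand-in; Ethereum itself uses Keccak-256.
func TopRoot(shardRoots *[256][32]byte) [32]byte {
	h := sha256.New()
	for _, r := range shardRoots {
		h.Write(r[:])
	}
	var root [32]byte
	copy(root[:], h.Sum(nil))
	return root
}

func main() {
	var roots [256][32]byte // each entry filled in parallel by its shard
	fmt.Printf("global root: %x\n", TopRoot(&roots))
}
```

With this structure, recomputing the global root after a block costs only 256 hashes on top of the per-shard work, and all of the per-shard work can run in parallel.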
Then there’s the issue of speeding up the consensus layer. I believe reaching 1000 TPS will require a formalized consensus specification. The current spec has many gaps, leaving clients to make ad hoc decisions. The Ethereum Foundation should launch an incentivized testnet to uncover security vulnerabilities and consensus instabilities. When PoS was launched, independent security researchers had no real opportunity to experiment — let alone receive bounties.
Like this attack
I remember submitting it to the client team and the Ethereum Foundation, but I never received a single response—not even access to a testnet to try it out. I’m now considering building a testnet myself. In my opinion, the attack is entirely feasible and could potentially bring down the entire system.
If Ethereum L1 becomes 100x–1000x faster and can handle massive TPS, then what’s the long-term role or need for L2s? Will L2s still be useful, or will they be abandoned once L1 can handle everything?
Rollups are either dying or turning into corporations like Base anyway.
If Ethereum hits 1000 TPS, we'll finally have the bandwidth for real composable dapps. People who need decentralization won't use or care about rollups; people who want to trade memecoins will use Base. Maybe Binance will launch a rollup like Base. Other rollups will die because they serve no purpose.
ETH pushed users away for years with high gas fees and no real consumer focus. Gamers, retail—gone. Winning them back now is much harder than keeping them in the first place.
Here's the real bottleneck: finality. To make ETH consumer-grade, finality needs to drop to a few seconds. Otherwise gamers and consumers won't come back to Ethereum.
That's a much tougher challenge than just cranking up TPS, given how outdated Ethereum's consensus layer is compared to other blockchains.
I’m a big proponent of scaling Layer 1, but I see a few issues with simply raising the gas limit. We need to match capacity with actual demand—just increasing the gas limit won’t boost TPS in the short term. Instead, our first priority should be to shorten block times, giving users a faster, more convenient way to transact and creating a better environment for asset trading.
For instance, we could:
1. Reduce block time to 11 seconds to start.
2. Then cut it by one second each month until we hit the optimal minimum that maintains chain stability.
3. Once we’ve reached that stability cap, we can begin increasing the gas limit—or consider a hybrid approach combining both strategies.
This staged plan helps ensure reliability while gradually improving throughput.
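A trivial sketch of that schedule (the 6-second floor is a placeholder of mine; the real floor would come out of the stability testing this plan calls for):

```go
package main

import "fmt"

func main() {
	const startSeconds = 11 // step 1: initial reduction target
	const floorSeconds = 6  // hypothetical stability floor, illustrative only
	for month, bt := 0, startSeconds; ; month, bt = month+1, bt-1 {
		if bt < floorSeconds {
			fmt.Println("floor reached: start raising the gas limit instead")
			return
		}
		fmt.Printf("month %d: %d s block time\n", month, bt)
	}
}
```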