As a home staker, I totally and fully support this!
We are late, but better to start now than never.
The rollup-centric roadmap was a clear mistake from the start; glad to see them changing course on this.
It's the only thing that isn't making me lose hope.
You can't describe it better, especially from a home staker's point of view.
Forcing us to process blobs that help kill our investment is not good loyalty practice.
Hey Dankrad,
I completely agree with others: this is potentially a fantastic step forward!
A few remarks:
I understand why you're proposing a fixed formula: it pushes progress and helps avoid the need for consensus every time the block limit is raised. That said, if you go this route, there needs to be a pre-assessment from client teams to ensure the schedule is feasible.
Raising the gas limit 100x roughly targets an end goal of around 1000 TPS.
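For intuition, a fixed exponential schedule reaching roughly 100x can be sketched as below. The starting limit, four-year horizon, and per-epoch stepping are illustrative assumptions, not numbers from the proposal:

```python
# Illustrative sketch of a fixed exponential gas-limit schedule.
# Assumptions (not from the proposal): start at 36M gas, reach 100x
# after 4 years, stepping once per epoch (32 slots of 12 seconds).

EPOCHS_PER_YEAR = 365 * 24 * 60 * 60 // (32 * 12)  # ~82,125 epochs
START_LIMIT = 36_000_000
GROWTH = 100 ** (1 / (4 * EPOCHS_PER_YEAR))        # per-epoch multiplier

def gas_limit_at(epoch: int) -> int:
    """Gas limit after `epoch` epochs under the fixed schedule."""
    return int(START_LIMIT * GROWTH ** epoch)
```

The point of a closed-form schedule like this is exactly what the comment says: client teams can evaluate the whole curve up front instead of renegotiating each bump.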
For EVM implementations in C++ or Rust, this is achievable with relatively minor modifications.
The main bottleneck will be updating the full historical state, particularly for the few nodes that store the entire state. The issue is that current state updates are fully single-threaded and sequential, so adding a powerful multicore machine doesn't help. At 1000 TPS, full-history nodes may not be able to keep up with nodes that only maintain the current state.
The simplest approach would be to shard the Merkle trie so it can be stored across multiple key-value database shards (e.g., LevelDB). The easiest method is to shard by the first bytes of the key.
You could then have, say, 256 shards, each potentially stored on separate SSDs or even separate virtual machines. With this kind of sharding, performance can scale nearly linearly with the number of shards.
However, this isn't currently possible with the existing Merkle trie, as it can't be represented as an aggregation of shard-level Merkle roots. This is a fundamental limitation, but one that could be addressed with a relatively simple change to the Ethereum spec.
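The spec change being suggested amounts to defining the state root as an aggregation of per-shard roots. A purely illustrative sketch (this is not the actual Merkle-Patricia construction, and SHA-256 stands in for whatever hash the spec would use):

```python
# Illustrative: combine 256 per-shard roots into one top-level root via
# a binary Merkle tree, so each shard can be updated independently.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def top_root(shard_roots: list[bytes]) -> bytes:
    """Binary Merkle root over a power-of-two list of shard roots."""
    level = shard_roots
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

With a construction like this, each of the 256 shards maintains its own root in parallel, and only the small top tree over shard roots is recomputed serially.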
Then there's the issue of speeding up the consensus layer. I believe reaching 1000 TPS will require a formalized consensus specification. The current spec has many gaps, leaving clients to make ad hoc decisions. The Ethereum Foundation should launch an incentivized testnet to uncover security vulnerabilities and consensus instabilities. When PoS was launched, independent security researchers had no real opportunity to experiment, let alone receive bounties.
Like this attack
I remember submitting it to the client team and the Ethereum Foundation, but I never received a single response, not even access to a testnet to try it out. I'm now considering building a testnet myself. In my opinion, the attack is entirely feasible and could potentially bring down the entire system.
If Ethereum L1 becomes 100x to 1000x faster and can handle massive TPS, then what's the long-term role or need for L2s? Will L2s still be useful, or will they be abandoned once L1 can handle everything?
Rollups are anyway either dying or turning into corporations like Base.
If Ethereum hits 1000 TPS, we'll finally have the bandwidth for real composable dapps. People who need decentralization definitely won't use or care about rollups. People who need to trade memecoins will use Base. Maybe Binance will have a rollup like Base. Other rollups will die since they have no purpose.
ETH pushed users away for years with high gas fees and no real consumer focus. Gamers, retail: gone. Winning them back now is much harder than keeping them would have been.
Here's the real bottleneck: finality. To make ETH consumer-grade, finality needs to drop to a few seconds. Otherwise gamers and consumers won't come back to ETH.
That's a much tougher challenge than just cranking up TPS, given how outdated Ethereum's consensus layer is compared to other blockchains.
I'm a big proponent of scaling Layer 1, but I see a few issues with simply raising the gas limit. We need to match capacity with actual demand; just increasing the gas limit won't boost TPS in the short term. Instead, our first priority should be to shorten block times, giving users a faster, more convenient way to transact and creating a better environment for asset trading.
For instance, we could:
1. Reduce block time to 11 seconds to start.
2. Then cut it by one second each month until we hit the optimal minimum that maintains chain stability.
3. Once we've reached that stability cap, we can begin increasing the gas limit, or consider a hybrid approach combining both strategies.
This staged plan helps ensure reliability while gradually improving throughput.
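The staged schedule above can be written down explicitly. A sketch, where the 6-second floor is an assumed stability minimum for illustration, not part of the proposal:

```python
# Sketch of the staged block-time schedule: start at 11 s, cut 1 s per
# month until an assumed stability floor (here 6 s, purely illustrative).

START_SECONDS = 11
FLOOR_SECONDS = 6  # assumed stability minimum, to be found empirically

def block_time_after(months: int) -> int:
    """Target block time in seconds, `months` months into the schedule."""
    return max(FLOOR_SECONDS, START_SECONDS - months)
```

The advantage of writing it this way is that the only open parameter is the floor, which is exactly the value the gradual rollout is meant to discover.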
L2s will be left to offer different use cases, e.g. anonymity or real-time transactions.
Verifiability (it is very easy to check that the current continuation of the blockchain is according to the rules)
Censorship resistance (we can guarantee that any paying transaction will get included)
Left out a critical pillar, which is the blocker to this: Users MUST be able to read state that matters to them without relying on third parties.
L2s, like all Ethereum dapps, stop being dapps if users (people who run non-block-producing nodes) cannot read state because it doesn't fit on their SSD.
Great, this looks great!
One question I have regarding this proposal is how the exponential schedule interacts with state growth mitigation efforts currently in progress (e.g. statelessness, Portal Network, and potential state expiry mechanisms).
If gas limits increase predictably regardless of actual demand, there is a possibility that storage-heavy workloads become economically viable much earlier than expected. In that scenario the network might converge closer to worst-case state growth assumptions rather than average-case.
Would it make sense to tie the schedule not only to epochs but also to measurable client performance metrics (e.g. block processing time, state access latency, or propagation delays)?
That could preserve the predictability signal to application builders while still keeping the increase adaptive to real-world node performance.
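One way to express that idea: keep the exponential schedule as an upper bound, but only step the limit up when observed node metrics are healthy. The thresholds and the 2% step below are made-up placeholders, not values from any spec:

```python
# Sketch: gas-limit increases gated on measured client performance,
# capped by the published exponential schedule. All thresholds are
# illustrative assumptions.

MAX_PROCESSING_MS = 500    # assumed healthy block-processing time
MAX_PROPAGATION_MS = 1000  # assumed healthy propagation delay
STEP = 1.02                # per-epoch multiplier when metrics are healthy

def next_gas_limit(current: int, scheduled_cap: int,
                   processing_ms: float, propagation_ms: float) -> int:
    """Raise the limit only if metrics are healthy, never past the schedule."""
    if processing_ms <= MAX_PROCESSING_MS and propagation_ms <= MAX_PROPAGATION_MS:
        return min(int(current * STEP), scheduled_cap)
    return current  # hold steady while nodes are struggling
```

Builders still get a hard ceiling they can plan against (the schedule), while the realized limit tracks what nodes can actually sustain.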