@kladkogex you’re going to love this wonderful post by @dankrad
By the way I DM’ed you. I’m looking for collaborators to work on an AAVE alternative.
Anyone interested please message me.
Thanks.
Yes! And I like your handle. Indeed we should focus on layer 1. Without layer 1 we have nothing.
Today home stakers (soon home builders) can run on 2TB but many people recommend starting with 4TB.
Does this mean home stakers could be affected fairly soon?
Or will they need to prune their nodes constantly to avoid running out of storage? (worse UX for node operators). This will also raise the minimum bandwidth required.
I agree that L1 should be the economic center of Ethereum, but this proposal feels like it leaves some aspects off the table and looks at only one side of the moon.
If the goal is to get back to mainnet and drop the rollup-centric roadmap, then why keep pushing the intents framework / interoperability efforts? Let’s not start a new battle front inside our barracks.
Solving L1/L2 liquidity fragmentation and its UI/UX problems will widen the moat for the Ethereum ecosystem.
I agree with @dankrad on this. L2s have misbehaved and have been subsidized too much. They’re cannibalizing layer 1. It’s time for layer 1 to take over. Without layer 1 there’s nothing. We need to seize the moment. Layer 1 must dominate once more. Storage costs are going down anyway.
@dankrad is correct. Fragmenting liquidity across so many L2’s is madness and a recipe for disaster!
The upper limit of 14TB/year I quoted was actual state, assuming you are running completely pruned with no overhead.
In reality it is quite unlikely that 100% of gas is spent on storage, but on the flip side there is also a significant amount of overhead beyond the raw state (like old state if you don’t prune instantly, the supporting data structures, indexes, etc.)
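For intuition, here is a back-of-envelope sketch of how a worst-case figure in that ballpark arises. The parameters are assumptions for illustration (a 100x increase over a ~36M gas limit, 12-second blocks, 20,000 gas and 32 raw bytes per new storage slot), not numbers taken from the post:

```python
# Worst-case raw state growth, assuming every unit of gas is spent
# creating new storage slots. All parameters below are illustrative
# assumptions, not figures confirmed by the original post.

SECONDS_PER_YEAR = 365 * 24 * 3600
BLOCK_TIME = 12                      # seconds per block
GAS_LIMIT = 36_000_000 * 100         # assumed 100x today's ~36M limit
GAS_PER_NEW_SLOT = 20_000            # assumed SSTORE cost for a fresh slot
BYTES_PER_SLOT = 32                  # raw value size, excluding trie overhead

blocks_per_year = SECONDS_PER_YEAR // BLOCK_TIME   # 2,628,000 blocks
slots_per_block = GAS_LIMIT // GAS_PER_NEW_SLOT    # 180,000 new slots
bytes_per_year = blocks_per_year * slots_per_block * BYTES_PER_SLOT

print(f"~{bytes_per_year / 1e12:.1f} TB/year of raw state")  # ~15.1 TB/year
```

Note this counts only raw 32-byte values; trie nodes, indexes, and unpruned old state add further overhead on top, which is exactly the caveat above.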
As home staker, I totally and fully support this!
We are late, but better to start now than never.
Hear! Hear!
I support this too!
The rollup-centric roadmap was a clear mistake from the start; glad to see them changing course on this.
It’s the only thing keeping me from losing hope.
Precisely! It’s been a nightmare.
You couldn’t describe it better, especially from a home staker’s point of view.
Forcing us to process blobs that help kill our investment is not good loyalty practice.
Hey Dankrad,
I completely agree with others — this is potentially a fantastic step forward!
A few remarks:
I understand why you’re proposing a fixed formula — it pushes progress and helps avoid the need for consensus every time the block limit is raised. That said, if you go this route, there needs to be a pre-assessment from client teams to ensure the schedule is feasible.
Raising the gas limit 100x roughly targets an end goal of around 1000 TPS.
For EVM implementations in C++ or Rust, this is achievable with relatively minor modifications.
The main bottleneck will be updating the full historical state, particularly for the few nodes that store the entire state. The issue is that current state updates are fully single-threaded and sequential, so adding a powerful multicore machine doesn’t help. At 1000 TPS, full-history nodes may not be able to keep up with nodes that only maintain the current state.
The simplest approach would be to shard the Merkle trie so it can be stored across multiple key-value database shards (e.g., LevelDB). The easiest method is to shard by the first bytes of the key.
You could then have, say, 256 shards, each potentially stored on separate SSDs or even separate virtual machines. With this kind of sharding, performance can scale nearly linearly with the number of shards.
However, this isn’t currently possible with the existing Merkle trie, as it can’t be represented as an aggregation of shard-level Merkle roots. This is a fundamental limitation — but one that could be addressed with a relatively simple change to the Ethereum spec.
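The routing part of the idea can be sketched in a few lines. This is a minimal illustration of first-byte sharding over a flat key-value store; plain Python dicts stand in for the per-shard LevelDB instances, and all names are hypothetical rather than taken from any Ethereum client:

```python
# Minimal sketch of prefix-based sharding for a flat key-value state store.
# A real deployment would back each shard with its own LevelDB instance
# (potentially on its own SSD or VM); dicts stand in for them here.

NUM_SHARDS = 256  # one shard per possible value of the key's first byte

class ShardedStateStore:
    def __init__(self):
        # One backing store per first-byte prefix.
        self.shards = [dict() for _ in range(NUM_SHARDS)]

    def _shard_for(self, key: bytes) -> dict:
        # Route by the first byte of the key. Keccak-hashed keys are
        # uniformly distributed, so shards stay roughly balanced.
        return self.shards[key[0]]

    def put(self, key: bytes, value: bytes) -> None:
        self._shard_for(key)[key] = value

    def get(self, key: bytes):
        return self._shard_for(key).get(key)

store = ShardedStateStore()
store.put(bytes.fromhex("00ab"), b"slot-value")    # lands in shard 0x00
store.put(bytes.fromhex("ffcd"), b"other-value")   # lands in shard 0xff
```

Writes to different shards are independent and could proceed in parallel; the missing piece, as the comment notes, is a trie commitment scheme that aggregates the 256 per-shard roots into a single state root.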
Then there’s the issue of speeding up the consensus layer. I believe reaching 1000 TPS will require a formalized consensus specification. The current spec has many gaps, leaving clients to make ad hoc decisions. The Ethereum Foundation should launch an incentivized testnet to uncover security vulnerabilities and consensus instabilities. When PoS was launched, independent security researchers had no real opportunity to experiment — let alone receive bounties.
Like this attack
I remember submitting it to the client team and the Ethereum Foundation, but I never received a single response—not even access to a testnet to try it out. I’m now considering building a testnet myself. In my opinion, the attack is entirely feasible and could potentially bring down the entire system.