EIP-7938: Exponential Gas Limit Increase via Default Client Voting Behavior

@kladkogex you’re going to love this wonderful post by @dankrad

  1. We need to compete, because if we don’t, we will lose market share.
  2. Gas increases are needed because right now the burn rate is low and the tokenomics are turning terrible. I’ve run validator nodes before; such small rewards are not worth it if the price of ETH is dropping like a rock. As it stands, L2s are not paying their fair share of fees, hence the massive drop in burn rate. So anyone suggesting we “drop the discussion on fees” is misguided at best, malicious at worst.
  3. Ethereum’s development timeline has been lagging and frustratingly slow. Users expect better of it.
  4. DeFi gravitates to ETH because it’s the safest chain. I don’t mind paying more in fees, and neither do many like me; we don’t expect to be able to do million-dollar transactions for mere cents. And yes, I have personally done over $500 million of volume by hand, as I used to be a Sybil airdrop farmer and have put in my 10k hours in this space as a power user. I also developed and used simple bots to help me with the task. I saw the ridiculously low fees on L2s and realized it wasn’t sustainable, but voices like mine haven’t been welcome in the hallowed halls of the ivory tower that is the EF. That is hopefully changing now. Right @dankrad @vbuterin ?

Yes! And I like your handle. Indeed we should focus on layer 1. Without layer 1 we have nothing.

Today home stakers (soon home builders) can run on 2TB, but many people recommend starting with 4TB.

Does this mean home stakers could be affected pretty soon?

Or will they need to prune their nodes constantly in order not to run out of storage? (That would worsen UX for node operators.) It will also raise the minimum bandwidth required.

I agree with L1 being the economic center of Ethereum, but this proposal feels like it leaves some aspects off the table and looks at only one side of the moon.

If the goal is to get back to mainnet and drop the rollup-centric roadmap, then why keep pushing the intents framework / interoperability efforts? Let’s not start a new battle front inside our barracks.

Solving the L1/L2 liquidity fragmentation UX will strengthen the moat of the Ethereum ecosystem.

I agree with @dankrad on this. L2s have misbehaved and have been subsidized too much. They’re cannibalizing layer 1. It’s time layer 1 takes over. Without layer 1 there’s nothing. We need to seize the moment. Layer 1 must dominate once more. Storage costs are going down anyhow.

@dankrad is correct. Fragmenting liquidity across so many L2s is madness and a recipe for disaster!

The upper limit of 14TB/year I quoted was for actual state, assuming you are running fully pruned with no overhead.

In reality it is quite unlikely that 100% of gas is spent on storage, but on the flip side there is also a significant amount of overhead beyond the raw state (like old state if you don’t prune instantly, the supporting data structures, indexes, etc.)
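For anyone who wants to reproduce that bound, here is a rough back-of-envelope. Every number in it is my own assumption (a ~100x gas limit of 3.6B, 12-second blocks, 20,000 gas and 32 raw bytes per fresh storage slot), not a figure from this thread, but it lands in the same ballpark as the 14TB/year upper limit:

```python
# Worst-case annual state growth if every unit of gas created new storage.
# All constants are assumptions for illustration, not figures from the post.

GAS_LIMIT = 36_000_000 * 100   # assumed ~100x end state of today's 36M limit
BLOCK_TIME = 12                # seconds per block
SSTORE_NEW_SLOT = 20_000       # gas to write a fresh storage slot
BYTES_PER_SLOT = 32            # raw value bytes persisted per slot
SECONDS_PER_YEAR = 365 * 24 * 3600

slots_per_block = GAS_LIMIT // SSTORE_NEW_SLOT
blocks_per_year = SECONDS_PER_YEAR // BLOCK_TIME
print(f"{slots_per_block * BYTES_PER_SLOT * blocks_per_year / 1e12:.1f} TB/year")  # ~15.1
```

Real nodes would add trie and index overhead on top of those raw bytes, which is exactly the overhead mentioned above.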

As a home staker, I fully support this!

We are late, but better to start now than never.

Hear, hear!

I support this too!

The rollup-centric roadmap was a clear mistake from the start; glad to see them changing course on this.
It’s the only thing that isn’t making me lose hope.

Precisely! It’s been a nightmare.

You can’t describe it better, especially from a home staker’s point of view.

Forcing us to process blobs that help kill our investment is not a good loyalty practice.

Hey Dankrad,

I completely agree with others — this is potentially a fantastic step forward!

A few remarks:

I understand why you’re proposing a fixed formula — it pushes progress and helps avoid the need for consensus every time the block limit is raised. That said, if you go this route, there needs to be a pre-assessment from client teams to ensure the schedule is feasible.

Raising the gas limit 100x roughly targets an end goal of around 1000 TPS.
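As a quick sanity check on that figure (the per-transaction gas numbers below are my own assumptions, not from the proposal), ~1000 TPS corresponds to an average transaction of roughly 300k gas; simple transfers would allow far more:

```python
# 100x of a 36M gas limit with 12-second blocks = 300M gas per second.
gas_per_second = 36_000_000 * 100 / 12

# Assumed average gas costs: plain transfer, mid-size call, heavy DeFi tx.
for avg_gas_per_tx in (21_000, 100_000, 300_000):
    print(f"{avg_gas_per_tx:>7} gas/tx -> {gas_per_second / avg_gas_per_tx:,.0f} TPS")
```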

For EVM implementations in C++ or Rust, this is achievable with relatively minor modifications.

The main bottleneck will be updating the full historical state, particularly for the few nodes that store the entire state. The issue is that current state updates are fully single-threaded and sequential, so adding a powerful multicore machine doesn’t help. At 1000 TPS, full-history nodes may not be able to keep up with nodes that only maintain the current state.

The simplest approach would be to shard the Merkle trie so it can be stored across multiple key-value database shards (e.g., LevelDB). The easiest method is to shard by the first bytes of the key.
You could then have, say, 256 shards, each potentially stored on separate SSDs or even separate virtual machines. With this kind of sharding, performance can scale nearly linearly with the number of shards.
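A minimal sketch of that routing idea, with plain dicts standing in for what would be 256 separate LevelDB instances (all names here are mine):

```python
NUM_SHARDS = 256  # one shard per possible first byte of the key

class ShardedStateStore:
    """Routes each key to a shard by its first byte. State trie keys are
    keccak256 hashes, so the first byte is already uniformly distributed.
    Each dict stands in for a LevelDB instance on its own SSD or machine."""

    def __init__(self):
        self.shards = [dict() for _ in range(NUM_SHARDS)]

    def _shard(self, key: bytes) -> dict:
        return self.shards[key[0]]  # shard by the first byte of the key

    def put(self, key: bytes, value: bytes) -> None:
        self._shard(key)[key] = value

    def get(self, key: bytes) -> bytes | None:
        return self._shard(key).get(key)
```

Since writes to different shards never contend, update throughput can scale roughly linearly with the number of shards.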

However, this isn’t currently possible with the existing Merkle trie, as it can’t be represented as an aggregation of shard-level Merkle roots. This is a fundamental limitation — but one that could be addressed with a relatively simple change to the Ethereum spec.
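Purely to illustrate one shape such a spec change could take (this is a hypothetical construction, not anything in the current spec): each shard maintains its own root, and the global state root becomes a small Merkle tree over the 256 shard roots, so the shards can recompute their roots in parallel:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Plain binary Merkle tree; stands in for whichever commitment a real
    # spec change would pick.
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate an odd trailing node
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def global_state_root(shard_roots: list[bytes]) -> bytes:
    # 256 shard roots -> an 8-level tree. Each shard updates its own root in
    # parallel; only this tiny top tree is recomputed sequentially.
    assert len(shard_roots) == 256
    return merkle_root(shard_roots)
```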

Then there’s the issue of speeding up the consensus layer. I believe reaching 1000 TPS will require a formalized consensus specification. The current spec has many gaps, leaving clients to make ad hoc decisions. The Ethereum Foundation should launch an incentivized testnet to uncover security vulnerabilities and consensus instabilities. When PoS was launched, independent security researchers had no real opportunity to experiment — let alone receive bounties.

Like this attack

I remember submitting it to the client team and the Ethereum Foundation, but I never received a single response—not even access to a testnet to try it out. I’m now considering building a testnet myself. In my opinion, the attack is entirely feasible and could potentially bring down the entire system.