Ethereum 1 dot X: a half-baked roadmap for mainnet improvements


But, a 2x-5x boost in throughput would make the current problems with mainnet 2x-5x worse. The biggest problem is growth in disk space, and if we’re going to boost the mainnet throughput then the disk space problem must be solved first.

I am not very familiar with the constraints that go into setting the gas price of each opcode, but if the state size growth is the main problem, couldn’t we try to limit the state growth to roughly what it is currently while increasing throughput? i.e.,

  1. Raise block gas limit by 2x
  2. Increase the cost of SSTORE when a value is set to non-zero from zero to 40,000 gas
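The arithmetic behind this pairing can be sketched quickly. Assuming an illustrative 8M block gas limit and the current 20,000-gas cost for a zero-to-nonzero SSTORE, doubling both numbers leaves the maximum number of new storage slots per block unchanged:

```python
# Illustrative figures: doubling the block gas limit while doubling the
# zero -> non-zero SSTORE cost keeps the maximum number of fresh storage
# slots writable per block constant, so state growth stays bounded.

GAS_LIMIT = 8_000_000    # assumed current block gas limit
SSTORE_SET = 20_000      # current cost of setting a slot from zero

slots_before = GAS_LIMIT // SSTORE_SET

# Proposed: 2x gas limit, SSTORE raised to 40,000
slots_after = (2 * GAS_LIMIT) // 40_000

print(slots_before, slots_after)  # 400 400
```

Throughput for compute-bound transactions doubles, while worst-case state growth per block is the same as today.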

I was also wondering if setting non-zero values to zero could be not only subsidized but rewarded, as a way to incentivize clearing unused storage. I saw there has been a proposal along these lines by @axic. Has there been more discussion on why this might not be viable?

@cdetrio thank you for that summary. Information like this - even as a ‘personal perspective’ - is invaluable to those with their own forward roadmap/s on projects that are in or around the Ethereum network. One of the most powerful elements of operating with transparency is that it gives the community the ability to prepare for any and all ‘adjustments’ needed to their operations/executions/processes.

I would also like to add that publishing a review like this immediately after the meeting - as well as disclosing the fact that there are/were closed, private working groups assigned to tasks - would have gone a long way toward allaying many of the transparency concerns that have been raised over the past few days.


A half-baked technical idea on storage. Could under-used storage be stored in fewer nodes, with a defined way for a node to find the pieces it is missing? So the less often storage is used, the less space it takes and the more expensive it is to load.

Strongly support giving users something concrete in the meantime. Also don’t want to rush the research team into introducing something before it’s ready. As for the specific proposals, it’s unclear to me which ones to prioritize, but I strongly support this direction.


How about keeping rent simple and maximally effective at the protocol level, and completely deleting anything which runs out of funds? Leave it to user applications like MyCrypto or Mist to manage safety and warn when sending to a deleted account to prevent replayed transactions; a service could exist to provide the deleted accounts. When introducing rent, give everything a year-long buffer to make the transition easy. It is reasonable to expect rent to be paid on anything which matters, and for users not to reuse deleted accounts which have lost their nonce. The protocol shouldn’t be compromised to hold their hand; user apps can do it, and eventually users will know anyway.

What’s friendly to users is actually getting it implemented ASAP, and the simple method could deliver that. Adding a suspended state and a good way to rehydrate accounts is very complex, and creates room for dangerous bugs around something getting suspended multiple times and resumed at different points. The UX around resuming suspended accounts is likely to be entirely unfriendly anyway; I think it would almost never be used in practice.

The probability of delivering the complex method of rent at a date sufficiently in advance of Serenity for it to be worth doing is very low. It also still leaves state growth heading towards infinity in the long term, as nothing can be entirely deleted, whereas the simple method will actually work as desired.

How about keeping rent simple and maximally effective at the protocol level and completely deleting anything which runs out of funds.

I think this is a bad idea. Users forget about some application they are involved in all the time. Even in ENS auctions which lasted a few days, I remember there were people who forgot to reveal their bids. From a usability point of view, a recovery path for an account that gets hibernated, even if an expensive one, is IMO essential.

I raised this exact possibility in the meeting, and it still seems reasonable to me. The way opcode prices were originally set was with a spreadsheet that basically just calculated the different costs of processing each opcode (microseconds, history bytes, state bytes…) and assigned a gas cost to each unit of each cost; we can just push up the cost we assign to storage bytes.
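The spreadsheet approach described above amounts to pricing each opcode as a weighted sum of the resources it consumes. A toy version (resource names and per-unit prices here are illustrative, not the real historical values):

```python
# Toy model of the original gas-pricing spreadsheet: each opcode's gas
# cost is a weighted sum of per-resource usage. Raising the price of
# "state_bytes" raises storage opcodes without touching compute opcodes.

RESOURCE_PRICE = {
    "microseconds": 1,     # compute time
    "history_bytes": 2,    # bytes added to chain history
    "state_bytes": 100,    # bytes added to persistent state
}

def gas_cost(usage: dict) -> int:
    """Price an opcode from its per-resource usage."""
    return sum(RESOURCE_PRICE[r] * amount for r, amount in usage.items())

sstore_usage = {"microseconds": 50, "state_bytes": 64}  # storage-heavy
add_usage = {"microseconds": 3}                          # compute-only

print(gas_cost(sstore_usage))  # dominated by the state_bytes term
print(gas_cost(add_usage))
```

Pushing up `RESOURCE_PRICE["state_bytes"]` is exactly the single-knob adjustment being proposed: storage-heavy opcodes get more expensive, everything else is untouched.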

It definitely solves the largest first-order problem (storage is not costly enough in an absolute sense) with minimal disruption, and I’m not sure whether the other inefficiencies of today’s storage pricing are bad enough to be worth uprooting the present-day storage model on a live network to fix.

A third possibility that I have not yet seen discussed is to start off by raising the gas limit and increasing the SSTORE cost (possibly greatly increasing it, eg. 4-5x; also NOT increasing refunds to mitigate gastoken), and then start architecting a precompile that manages a cheaper class of temporary storage that follows some rent scheme.
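To make the "cheaper class of temporary storage that follows some rent scheme" concrete, here is a minimal sketch of my own (not the actual proposal): each slot prepays rent in gas, and a slot whose rent is exhausted is evicted on the next touch. The rate and interface are hypothetical.

```python
# Minimal sketch of rent-bearing temporary storage. A slot is written with
# prepaid rent; once the prepaid period elapses, the slot is evicted the
# next time it is read. Rates and the API are illustrative assumptions.

class TemporaryStorage:
    RENT_PER_BLOCK = 1  # hypothetical rent rate: gas per slot per block

    def __init__(self):
        self.slots = {}  # key -> (value, paid_until_block)

    def store(self, key, value, current_block, rent_prepaid):
        paid_until = current_block + rent_prepaid // self.RENT_PER_BLOCK
        self.slots[key] = (value, paid_until)

    def load(self, key, current_block):
        if key not in self.slots:
            return None
        value, paid_until = self.slots[key]
        if current_block > paid_until:  # rent ran out: evict the slot
            del self.slots[key]
            return None
        return value

ts = TemporaryStorage()
ts.store("x", 42, current_block=100, rent_prepaid=50)
print(ts.load("x", current_block=120))  # 42: rent covers blocks 100-150
print(ts.load("x", current_block=200))  # None: rent exhausted, evicted
```

Because this would be a new precompile rather than a change to SSTORE, existing contracts keep their semantics while new code can opt into the cheaper, time-bounded storage.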


A much cheaper class of temporary storage would be great. It seems to me that in many applications which require storage but not permanent storage (e.g., many layer 2 designs), we know how long temporary storage should be allocated for, or at least a reasonable upper bound for it.

IMO, providing incentives for clearing storage in a way that doesn’t also incentivize gastoken is impossible.


I would also be cautious about introducing rent too quickly – it fundamentally changes the relationship between users and contracts (by making contracts shared resources instead of perpetual services) and could disrupt many projects’ business models. Increasing the SSTORE cost seems like the most reasonable backwards-compatible solution. Also:

  • I like the idea of a RAM-style intermediate storage between the stack and storage proper; as a new feature, we can introduce new constraints without throwing a wrench in everyone’s works.

  • IIRC from discussing with a colleague, the limit on refunds is there b/c there is no incentive to mine a tx which involves paying money to the caller.


Although this has been answered, I ranted off in a separate thread.

TL;DR: The incentive mechanism, as it currently stands, seems mostly unusable. But there may be a way around it, if we change our wicked ways.


I didn’t know GasToken was something to be mitigated. This thread proved helpful in outlining its potential long-term implications.

IMO any increases to SSTORE need to be done alongside corresponding increases to gas limits (e.g. 4x SSTORE increase => 4x gas limit increase) to avoid issues with some functions becoming uncallable due to gas limits and potentially locking funds etc…
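The lock-up concern above is easy to quantify. With illustrative numbers: a function that writes many fresh slots in a single call can become uncallable if SSTORE cost rises 4x but the block gas limit does not.

```python
# A function writing N fresh storage slots needs N * SSTORE_cost gas in
# one call. If that exceeds the block gas limit, the function can never
# be executed, potentially locking funds. Figures are illustrative.

GAS_LIMIT = 8_000_000
SLOTS_WRITTEN = 350  # hypothetical function writing 350 fresh slots

def is_callable(sstore_cost, gas_limit):
    return SLOTS_WRITTEN * sstore_cost <= gas_limit

print(is_callable(20_000, GAS_LIMIT))      # True at today's cost
print(is_callable(80_000, GAS_LIMIT))      # False after a 4x SSTORE bump
print(is_callable(80_000, 4 * GAS_LIMIT))  # True again if the limit scales
```

Scaling the gas limit by the same factor as SSTORE keeps every currently-callable function callable, which is the proportionality being argued for.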

Hello out there! I’m a writer for ETHNews and I’m trying to understand the state rent conversation, but coming up short on a lot of fronts.

A few weeks back, Vlad Zamfir posted on Twitter about how, even after sharding, state size will be an issue. Why? He was saying that we need to impose limits on state size or else the VM will be f-ed. Why?

I get the idea of state rent insofar as it makes sense to me that you’d want to compensate people for storing data, but that doesn’t seem to be what people are talking about. You’re talking about the state being too big, period. Is this just because it takes forever to sync?

And then, in the 1x call, state rent and state reduction were discussed separately. Why? Wouldn’t state rent cease to be an issue if the chain were sufficiently pruned? (Maybe not, if we actually get some users. Then I guess we’d need both.)

I can also be reached at or @alberreman on telegram


There were two separate discussions because state rent only applies to the active state (this comprises all non-empty accounts and all contracts that have been created but not self-destructed, with their storage). What you call the “state reduction” discussion was about other bits of data that Ethereum clients are currently storing, sharing around, and providing to dApps.


Great write up, thanks! I’m trying to catch up with the current 1.x and 2.0 situation and this is a huge help, it’s all very interesting.

I 100% agree, punishing users with irreversible deletion of their permanent shit would not go over well.

Is there any more information on how and where this archival data will exist? I’d be interested to read the current consensus on it. I’m not a big fan of the idea that you’ll have to pull data stored in extravagantly large nodes that only a few people control.

There does seem to be a gap in the 1.x proposals regarding “the state that is stored somewhere”, and it does need to be clearly addressed. Perhaps nodes of this type could be called “evicted state archive”, and can be incentivized so that there can be more operators running them.

Myself, @tjayrush, @5chdn, and many others participating in the “Data Ring” could take a look at this and begin a discussion about possible incentivized nodes of this type.


Brilliant, I’ll look out for it! Thanks for the reply.

Clarifying about the 1.x “recoverability”, I see that this has been clearly proposed, just not in the “half baked roadmap” summary. IMO it still seems unclear how exactly a user recovers, and where exactly the data necessary to perform the operation would exist.

From a core devs gitter discussion today:

Recoverability is, imo, really elegantly solved. We iterated over several models, but the final one that is in @fjl’s gist is really really nice

Felix Lange / @fjl proposed a RESTORETO opcode in his storage rent gist:

A key description of the mechanism is from Page 56 of @AlexeyAkhunov’s Ethereum state rent - rough proposal dated Nov 26, 2018:

When rent is not paid, contracts leave a “hash stump”, which can be used to restore the contract using opcode RESTORETO. This is different from semantics after Step 3, where linear cross-contract storage would be lost. At this step, linear cross-contract storage can also be recovered with RESTORETO.

@holiman’s TLDR description:

This scheme makes it possible to resurrect arbitrary size contracts, since you can spend infinite time on rebuilding the data-structure. Other types of resurrect, with proofs included in the transaction that does the restoration, has a practical limit on how much data you will be able to supply


I have gleaned more details from asking on the all core devs gitter channel.

The Q&A is summarized in Discussion about “eviction archive” nodes.


  • The restoring user must restore their state in a series of steps, calling the proposed RESTORETO opcode within a contract.
  • RESTORETO accepts 1. addr of the hash stump left on eviction, and 2. addr of a contract from which code is taken.
  • This user needs to have the evicted state data, or needs to get this data from some eviction archive service. This data is used in the contract.
  • RESTORETO is not burdensome on any nodes, but is burdensome on the user depending on the size of state being restored.
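To make the hash-stump mechanism in the bullets above concrete, here is a toy model of my own (not the actual spec in @fjl’s gist, which restores state in a series of steps rather than one call): eviction keeps only a commitment to the contract’s data, and restoration succeeds only if the user supplies data matching that commitment.

```python
# Toy model of eviction to a "hash stump" and restoration via a
# RESTORETO-like operation. The stump construction (sha256 over code and
# sorted storage) is an illustrative assumption, not the real spec.

import hashlib

def stump(code: bytes, storage: dict) -> bytes:
    """Commitment left in state when a contract is evicted."""
    h = hashlib.sha256(code)
    for key in sorted(storage):
        h.update(key.encode() + storage[key].encode())
    return h.digest()

class State:
    def __init__(self):
        self.stumps = {}     # addr -> stump hash
        self.contracts = {}  # addr -> (code, storage)

    def evict(self, addr):
        code, storage = self.contracts.pop(addr)
        self.stumps[addr] = stump(code, storage)

    def restore_to(self, addr, code, storage):
        """RESTORETO analogue: the user supplies the evicted data."""
        if stump(code, storage) != self.stumps.get(addr):
            raise ValueError("supplied data does not match stump")
        del self.stumps[addr]
        self.contracts[addr] = (code, storage)

s = State()
s.contracts["0xabc"] = (b"\x60\x00", {"k": "v"})
s.evict("0xabc")                                 # only the stump remains
s.restore_to("0xabc", b"\x60\x00", {"k": "v"})   # succeeds: data matches
```

The key property is that consensus nodes only ever store the fixed-size stump; the burden of keeping (or re-fetching) the full evicted data falls on whoever wants to restore, which matches the last bullet above.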

There are two different things here:

  1. A tweak to the syncing protocol so that clients only keep recent blocks/logs. When a client fast syncs, it will only download recent blocks. Historical blocks would remain available on say Bittorrent; historical blocks are only needed for clients to do a full sync (run all transactions from genesis, but prune historical account states from the db; these clients can only respond to RPC queries like eth_getBalance for the latest block), or archive sync (run all transactions from genesis, and keep historical account states in the db to quickly respond to RPC queries about account states at blocks from long ago). This is the “Chain Pruning” proposal: Technically it is not even a hard fork, nothing about the EVM changes, just the sync protocol.

  2. Adopting storage rent and evicting contract storage from the state. Here is a storage rent EIP with a RESTORETO opcode: If a contract gets evicted and a user wants to restore it, they need to pass some data to RESTORETO. They can fetch this data from an archive node (an ethereum node that can respond to RPC queries about historical account states, see above). This is a hard fork, it changes the EVM.

The data needed to restore evicted contracts exists in historical account states. The archive nodes that we have today can provide this data:
geth --syncmode full --gcmode archive
parity --no-warp --pruning archive