EIP-2935: Save historical block hashes in state

Can we please use an address close to the bottom of the address space? Anything smaller than 2**32 will be fine.

Before this opcode becomes available, the problem with storing a Merkle root of block hashes up to a certain block number is that you can't securely update the root to include the hashes of more recent blocks. To work around that, before this EIP is implemented, I wrote a fun little optimistic rollup (based on interactive fraud proofs) to determine historical block hashes optimistically. It's super experimental and not tested, but all of the elements are there :slight_smile:.

Are there any plans to implement this proposal please?

Just to mention that this EIP would be very helpful for L2 bridging.

I'm trying to implement this proposal in geth, as it appears to be required for verkle trees.

This raised a couple of questions:

  1. Any thoughts on how best to adapt this to the timestamp-based forks that we now use?
  2. In particular, why is the activation at block.number > FORK_BLKNUM and not block.number >= FORK_BLKNUM? It makes things difficult to handle when FORK_BLKNUM isn't readily available.
  3. Given that there is a complete state overhaul at the boundary, could we simply insert all block hashes into the tree as well, thus ensuring that all historical blocks have their hashes available in the state?
  4. How are the costs of the BLOCKHASH instruction meant to evolve? For instance, in the case of verkle:
  • should only the witness gas costs be charged, or not?
  • what extra costs should be added besides the witness costs?

Clarifying point #2: in stateless mode, having block.time > FORK_BLKTIME forces me to fetch the parent to check whether its timestamp is after FORK_BLKTIME, whereas with block.time >= FORK_BLKTIME all I need to check is whether the current block's time is past the fork time.
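A minimal sketch of that difference, with illustrative names rather than geth's actual types: the exclusive rule is equivalent to asking whether the parent is already post-fork, which needs the parent header, while the inclusive rule is decided from the current header alone.

```python
# Illustrative sketch only; names and the fork timestamp are placeholders.
FORK_BLKTIME = 1_700_000_000  # hypothetical fork timestamp

def eip2935_active_exclusive(header, parent_header):
    # Timestamp analogue of block.number > FORK_BLKNUM: equivalent to
    # "the parent is already post-fork", so a stateless client must fetch
    # the parent header just to evaluate it.
    return parent_header.timestamp >= FORK_BLKTIME

def eip2935_active_inclusive(header):
    # block.time >= FORK_BLKTIME: the current header alone decides it.
    return header.timestamp >= FORK_BLKTIME
```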

  1. If #3 is done, then this is irrelevant.
  2. I think it may have been an omission.
  3. I think it is a good idea.
  4. Witness cost, plus extra processing in the case of lookups more than 256 blocks back (around 100?).

I'm not a fan of storing the entire block hash history in storage, even if it is from a fixed point in time. I'd prefer we adapt what was done for the Beacon Roots in EIP-4788 and use a rolling storage set, except that instead of TIMESTAMP we would key on NUMBER and tune the buffer length down. This would ensure that at least the last 256 hashes are in storage, which is all we need for the opcode to work.
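Roughly, the ring buffer described above could look like the following sketch; the buffer length and storage layout here are illustrative assumptions rather than anything specified by the EIP.

```python
# Rolling buffer of recent block hashes, keyed by block number
# (cf. EIP-4788's timestamp-keyed buffer). Illustrative sketch only.
BUFFER_LENGTH = 8192  # any value >= 256 keeps BLOCKHASH fully served

storage = {}  # stands in for the system contract's storage slots

def on_new_block(block_number, parent_hash):
    # System call at the start of block N stores hash(N-1), overwriting
    # the oldest entry in the buffer.
    storage[(block_number - 1) % BUFFER_LENGTH] = parent_hash

def get_block_hash(current_number, requested_number):
    # Serve BLOCKHASH from the buffer; anything outside the window behaves
    # like today's out-of-range BLOCKHASH and returns zero.
    if not (current_number - BUFFER_LENGTH <= requested_number < current_number):
        return b"\x00" * 32
    return storage.get(requested_number % BUFFER_LENGTH, b"\x00" * 32)
```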

Since it is a new thing anyway, I think one could store a ZK-friendly Merkle root of the current state and block transactions.

MOD 0x2000 is the same as AND 0x1FFF. Update EIP-2935: replace MOD with AND by chfast · Pull Request #8487 · ethereum/EIPs · GitHub
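For context, this is just the usual power-of-two identity; a quick check of my own, not taken from the PR:

```python
# For a power-of-two modulus M, x % M == x & (M - 1), so the EVM pseudocode
# can mask with 0x1FFF instead of doing a MOD by 0x2000.
M = 0x2000  # 8192

for x in (0, 1, 255, 8191, 8192, 8193, 2**64 - 1, 2**256 - 1):
    assert x % M == x & (M - 1)
```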

In the blocknumber > input + 8192 check, the ADD can overflow. Update EIP-2935: note possible ADD overflow by chfast · Pull Request #8488 · ethereum/EIPs · GitHub
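To illustrate the concern (my own sketch, not the PR's wording): EVM ADD wraps modulo 2**256, so for inputs near the top of the range the comparison no longer matches its plain-integer reading, whereas a subtraction-based form does.

```python
UINT256_MOD = 2**256
WINDOW = 8192

def too_old_wrapping(block_number, requested):
    # Literal EVM semantics: the ADD wraps modulo 2**256.
    return block_number > (requested + WINDOW) % UINT256_MOD

def too_old_subtraction(block_number, requested):
    # Same intent without the ADD: the requested block lies more than
    # WINDOW blocks behind the current one.
    return requested < block_number and block_number - requested > WINDOW

# The two forms disagree for requests near 2**256:
assert too_old_wrapping(10_000, UINT256_MOD - 1) != too_old_subtraction(10_000, UINT256_MOD - 1)
```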

To double check: the list of block hashes here refers to EL block hashes, not CL blocks? That would make this work well across long gaps as well. For example, on Goerli towards the end there were timestamp gaps between some blocks that exceeded 8192 CL slots; as this EIP keeps historical EL block hashes, having long stretches of empty slots shouldn't matter.