EIP-4788: Beacon root in EVM

This EIP is useful for liquid staking protocols that want to prove validator balance updates and faults with the highest security. Most LSDs rely on a small oracle committee to supply this sort of information. In addition, there are many new applications that could be built given trustless access to more detailed protocol parameters. Seems like a win. What are the implementation barriers for this sort of thing?

there are many many many applications unlocked by this EIP

the implementation barriers are not really the blocking ones; the bigger issue is simply prioritizing this feature amongst all the other things we want to do to make ethereum better 🙂

Agreed! What’s the process of getting this prioritized?

I'm happy to help you argue for it on ACD sooner or later

I can say that it will be hard to prioritize against validator withdrawals and the work around 4844

and currently ACD is very focused on a successful merge, so we haven’t really done any serious Shanghai planning, and I’m not sure now is the right time.

perhaps we revisit after the merge has occurred?

Totally. Appreciate it. Thanks for the responses, excited for this 🙂

My understanding is that to be able to prove some value in the beacon state, we also need to know the generalised index of the property we wish to prove (or, more generally, the depths of the Merkle tree representations of the objects along the path, from which we can compute the generalised indices of the properties).

It seems likely (and is already the case in Capella) that the consensus spec containers will be appended to in future forks, so the depth of their tree representation may increment, and therefore all the generalised indices of the existing properties will change too.
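
Concretely, the generalised index of a container field is just 2**depth plus the field’s position, where the depth comes from the field count rounded up to the next power of two. A rough sketch (the helper name here is just for illustration):

function generalizedIndex(uint256 fieldCount, uint256 fieldIndex) internal pure returns (uint256) {
    // containers are merkleized over next_pow_of_two(fieldCount) leaves
    uint256 leaves = 1;
    while (leaves < fieldCount) {
        leaves <<= 1;
    }
    // gindex = 2**depth + fieldIndex, and leaves == 2**depth
    return leaves + fieldIndex;
}

So a container with 5 fields merkleizes over 8 leaves and field 0 has generalised index 8; once a 9th field is appended the leaf count doubles to 16 and the same field’s index becomes 16, invalidating any hard-coded proof paths.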

The consequence of this is that any smart contract that verifies proofs will need some sort of upgradability to be forwards compatible with future changes to the beacon state object.

This seems to break some of the fundamental usefulness of having the state root available on the EVM because proof verification logic cannot be made immutable. Previously, we had to trust an oracle to submit the correct state about the beacon chain. Now, we have to trust the contract owner to upgrade the proof verification logic as required by future consensus spec changes.

Unless the proof verification logic was also made part of the EVM via a precompile or something.

Am I missing something?

This EIP requires changes to the consensus layer and engine API. Where should those be spec’ed? I recall EIP-4844 having similar issues.

We’ve discussed an idea on Discord that seems to solve this issue, so I will also write it down here for anyone interested.

Basically, if you assume that the data containers are only ever appended to, then the tree depth increasing is really just adding an extra “left” movement to the start of the path from the root to the leaf in question. For example, if the leaf element you are interested in is currently “right” then “left” from the root, your verifier smart contract would probably look like this:

function verify(bytes32[] memory proof, bytes32 root, bytes32 node) internal pure {
    // the leaf is a left child at the bottom level: hash(node, sibling)
    node = keccak256(abi.encodePacked(node, proof[0]));
    // its parent is a right child one level up: hash(sibling, node)
    node = keccak256(abi.encodePacked(proof[1], node));
    require(root == node);
}

The point we are making is that any time the tree depth increases from newly appended properties, the existing container properties will be in the left subtree, so the verifier could have just been implemented like this:

function verify(bytes32[] memory proof, bytes32 root, bytes32 node) internal pure {
    node = keccak256(abi.encodePacked(node, proof[0]));
    node = keccak256(abi.encodePacked(proof[1], node));
    // any further siblings come from depth increases, where the existing
    // properties always sit in the left subtree, so node stays on the left
    for (uint256 i = 2; i < proof.length; ++i) {
        node = keccak256(abi.encodePacked(node, proof[i]));
    }
    require(root == node);
}

to allow longer proofs if they are needed in the future.
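
As a usage sketch of the generalised verifier above (the values here are placeholders, not real beacon state data): a proof built before a depth increase has two siblings, and after one depth increase the same leaf is proven by appending the root of the newly added right subtree and checking against the new root:

function exampleUsage(bytes32 leaf, bytes32 s0, bytes32 s1, bytes32 appended) internal pure {
    // pre-fork tree, depth 2: two siblings in the proof
    bytes32 oldRoot = keccak256(abi.encodePacked(s1, keccak256(abi.encodePacked(leaf, s0))));
    bytes32[] memory oldProof = new bytes32[](2);
    oldProof[0] = s0;
    oldProof[1] = s1;
    verify(oldProof, oldRoot, leaf);

    // post-fork tree: the old root becomes the left child of a new root,
    // so the same proof just grows by one sibling at the end
    bytes32 newRoot = keccak256(abi.encodePacked(oldRoot, appended));
    bytes32[] memory newProof = new bytes32[](3);
    newProof[0] = s0;
    newProof[1] = s1;
    newProof[2] = appended;
    verify(newProof, newRoot, leaf);
}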

So is there some reason this is seemingly currently being done via a stateful (!) precompile rather than just a new opcode? A new opcode seems like it would fit better… it’s stateful, after all, and this seems to be just reading that state, not just doing computation? Everything about this has the characteristics of an opcode, not a precompile; it seems like the wrong mechanism is being used here. Is it to save on opcode space? There’s quite a lot of that still, so that hardly seems like a good reason to use this awkward, seemingly-incorrect mechanism.

we can always use the “last” opcode as a pointer to an extension table so in fact there is unlimited opcode address space

a big motivator for the stateful precompile approach is facilitating the migration to a stateless world w/ Verkle tries and stateless clients

having everything required for protocol execution live w/in the execution state means the state transition function can map the pre-state and the next block to the post-state, which is cleaner for stateful clients and makes proofs for stateless clients easier to manage

a close analog to the functionality provided in this EIP is the BLOCKHASH opcode, which just summons some history alongside the execution state – so to validate an ethereum block you need not just the state but also this small buffer of the last 256 hashes on the side; this makes the stateless paradigm a bit more awkward, so it's better to design in the direction of 4788

there are even EIPs floating around to change BLOCKHASH so that it follows the pattern of 4788

I have some questions regarding this EIP.

set 32 bytes of the execution block header after the last header field as of FORK_TIMESTAMP to the 32 byte hash tree root of the parent beacon block

I am not sure how to interpret this. Does this imply that we add a new field to the block header after the last header field (such that we RLP-encode this)? Or, do we first RLP-encode the block header and then add these 32 bytes…?

The new precompile (which it definitely is, since it does EVM-like behavior such as SLOAD, but without adding those slots to the warm set to account for them) is located at 0xfffffffffffffffffffffffffffffffffffffffd. Why here? Why not in the precompile range 0x00..00 - 0x00..00ffff (see EIP-1352: Specify restricted address range for precompiles/system contracts; it is stagnant, but I was under the impression that those “low” addresses were indeed reserved for precompiles)? I am mainly worried about RIPEMD160 scenarios (Clarification about when touchedness is reverted during state clearance · Issue #716 · ethereum/EIPs · GitHub). We can fix this on mainnet by sending 1 wei to this precompile (currently it has no ETH). However, since genesis files on testnets usually pre-fund the precompiles in order to avoid this RIPEMD160 behavior, this 0xfffffffffffffffffffffffffffffffffffffffd address is not covered there. Is there a motivation to put it there, and not at a “low” address?

I’m not sure I follow how using a stateful precompile makes that any better (the state still has to be stored somewhere!), but good to at least know there’s some non-arbitrary reason for it.

there are even EIPs floating around to change BLOCKHASH so that it follows the pattern of 4788

Got a link?

I assumed the odd high address was to mark it as a stateful precompile rather than an ordinary one. But meanwhile I’m wondering, why -3 of all things, when (assuming people are setting aside high addresses in this way) -1 and -2 have yet to be taken?

Got a link?

https://eips.ethereum.org/EIPS/eip-2935

the idea is that everything is under one object, namely the execution state, rather than the execution state and some additional context (like it is today for block hash)

the former

the intention is to add a header field – I wrote it this way so that it is independent of any other header fields, as even in the last few weeks the header fields for EIP-4844 have been changing

it would be unexpected (and so we should assume this was not the intent) to do a completely new thing where we encode the header and then tack this extra data on

we decided on ACDE 163 to move this precompile to the low address range and I’ll be updating the EIP today

I see, thank you! That also seemingly explains why -2 wasn’t used… -2 was taken by this other proposed stateful precompile. 🙂 (And I can imagine that -1 wasn’t used for other reasons.) Although, going by the below, it sounds like these are being moved to low addresses regardless.

I do have to say I don’t really understand the distinction being drawn, though. One way or another, you need this list to be stored somewhere. I don’t understand why it makes a difference whether it is stored somewhere available to an opcode, or stored somewhere available to a precompile.

Like, you say “right now you need the execution state and some additional context”, but that statement depends on defining “execution state” in a particular way, right? I would have just said that, because this list is accessible, that makes it part of the execution state. Evidently “execution state” has some technical meaning here that excludes this sort of thing, but even granting that, why does it make a difference? Why is the line that is being drawn a useful line to draw, such that doing it the precompile way is easier than doing it the opcode way? To me it just seems like they both require storing this information – which I would have called state information, though I suppose it doesn’t fall under what is technically being called state – somewhere, and it’s not clear why opcode vs precompile makes a difference…

I’d say the opcode vs precompile question is a bit different than the state design question.

Evidently “execution state” has some technical meaning here that excludes this sort of thing

when we say “execution state”, we mean the thing committed to by the state root in each block header

but assuming the beacon roots live somewhere, there is an access question – do we have an opcode, or do we just frame it as a precompile so we can leverage the CALL infra? every opcode we add introduces new semantics to the EVM, whereas CALLing a precompile is something done all the time, so it's less of an ask to just go via CALL
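
for what it's worth, reading via CALL from a contract could be as simple as the sketch below, assuming the "32-byte timestamp in, 32-byte root out" convention described in the EIP (the address is passed in as a parameter here since its final location is still being discussed above):

function getBeaconRoot(address beaconRoots, uint256 timestamp) internal view returns (bytes32 root) {
    // query the beacon-roots precompile: 32-byte timestamp in, 32-byte root out
    (bool ok, bytes memory out) = beaconRoots.staticcall(abi.encode(timestamp));
    require(ok && out.length == 32, "no beacon root for that timestamp");
    root = abi.decode(out, (bytes32));
}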

Should this precompile go OOG if the input length is not 32 bytes?

not sure – what do other precompiles do here?