Good news: since Prague, one can access the last 8191 block hashes using EIP-2935 (“Serve historical block hashes from state”). That improves liveness a lot.
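For reference, a minimal sketch (assuming web3.py and a post-Prague node) of how the EIP-2935 history contract is queried: the calldata is the block number as a 32-byte word, and the contract returns that block’s hash if it is among the last 8191 blocks.

```python
from web3 import Web3

# System contract address from the EIP-2935 spec (double-check against the EIP text).
HISTORY = Web3.to_checksum_address("0x0000F90827F1C53a10cb7A02335B175320002935")

def historical_blockhash(w3: Web3, block_number: int) -> bytes:
    # Calldata: the block number, big-endian, left-padded to 32 bytes.
    calldata = block_number.to_bytes(32, "big")
    # Returns the 32-byte hash, or reverts if the block is out of range.
    return bytes(w3.eth.call({"to": HISTORY, "data": Web3.to_hex(calldata)}))
```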
You suggest that using “entropy slicing” instead of hash functions to aggregate randomness contributions prevents grinding attacks. I am not convinced by that. My mental model is that it doesn’t matter so much what aggregation function you use: the last block producer can see all the previous inputs and as a result they can keep trying values just as easily. It could be that I did not understand your proposal, though, so feel free to expand on it.
I think there is an alternative scheme that is more natural and efficient: the execution layer block header includes a field called prevrandao, which is already a highly bias-resistant source of randomness. Why not have the following protocol:
- at block height `n`, a user registers a request. The system keeps track of `n`.
- starting from block height `n+1` (and until `n+8192`), anyone can fulfill the request permissionlessly by submitting the preimage of `blockhash(n+1)`. The system extracts prevrandao and uses it as a random seed to fulfill the request (see the sketch below).
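To make the two stages concrete, here is a minimal Python sketch under the assumptions above; `RandomnessOracle` and the injected helpers are hypothetical names, keccak-256 comes from pycryptodome, and a real version would of course be a contract:

```python
from typing import Callable
from Crypto.Hash import keccak  # pycryptodome; any keccak-256 implementation works

def keccak256(data: bytes) -> bytes:
    return keccak.new(digest_bits=256, data=data).digest()

class RandomnessOracle:
    WINDOW = 8191  # EIP-2935 horizon: blockhash(n+1) stays readable until height n+8192

    def __init__(self, lookup_blockhash: Callable[[int], bytes],
                 extract_prevrandao: Callable[[bytes], bytes]):
        self.lookup_blockhash = lookup_blockhash      # EIP-2935-style state lookup
        self.extract_prevrandao = extract_prevrandao  # RLP header parse (sketched below)
        self.requests: dict[int, int] = {}            # request id -> registration height n
        self.results: dict[int, bytes] = {}           # request id -> 32-byte seed
        self.next_id = 0

    def register(self, current_height: int) -> int:
        # Stage 1: record height n; the seed will come from block n+1.
        rid, self.next_id = self.next_id, self.next_id + 1
        self.requests[rid] = current_height
        return rid

    def fulfill(self, rid: int, header_rlp: bytes, current_height: int) -> bytes:
        # Stage 2: anyone may submit the RLP preimage of blockhash(n+1).
        n = self.requests.pop(rid)
        assert n + 1 < current_height <= n + 1 + self.WINDOW, "outside fulfillment window"
        assert keccak256(header_rlp) == self.lookup_blockhash(n + 1), "not the preimage"
        seed = self.extract_prevrandao(header_rlp)
        self.results[rid] = seed
        return seed
```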
Extracting prevrandao this way requires decoding RLP on chain, which is a little bit annoying and will also occasionally be broken by hard forks, but it requires withholding a block to grind even 1 bit, and it uses only Ethereum.
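The RLP step could look like the following off-chain sketch using the pyrlp package (on chain you would need an RLP decoder in your contract language). Since EIP-4399, prevrandao occupies the header’s mixHash slot, field index 13; a fork that adds, removes, or reorders header fields is exactly what would break this.

```python
import rlp  # pyrlp; rlp.decode with no sedes yields nested lists of byte strings

MIX_HASH_INDEX = 13  # parentHash=0, ..., extraData=12, mixHash/prevrandao=13

def extract_prevrandao(header_rlp: bytes) -> bytes:
    fields = rlp.decode(header_rlp)      # the header is a flat RLP list
    prevrandao = fields[MIX_HASH_INDEX]  # 32-byte mixHash field, aka prevrandao
    assert len(prevrandao) == 32, "unexpected header layout (hard fork?)"
    return prevrandao
```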
lmk what you think. (tagging @angrymouse just in case)
> My mental model is that it doesn’t matter so much what aggregation function you use: the last block producer can see all the previous inputs and as a result they can keep trying values just as easily.
@bbjubjub
It’s indeed true, but you missed one important piece of the proposal: it’s future blocks that matter, not past ones. Obviously we can’t access future blockhashes right away, but we can have two stages: first entering a “randomness request”, then fulfilling it from a later block. Of course individual block producers can still try to grind the blockhash, but due to the slicing of one hash into 32 bytes, grinding would have to involve actually grinding 32 MSBs (most significant bits), so a block would have to be withheld for a significant time to achieve any meaningful bias of the randomness (though validators can indeed achieve some marginal bias, increasing the longer they withhold a block).
But the prevrandao idea is also very interesting and might be the better solution, especially combined with the slicing idea (slice prevrandao into a 32-byte array to “diversify” the MSBs throughout the whole prevrandao).
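If I read the slicing idea correctly (this is my interpretation, not the proposal’s spec), it amounts to something like the sketch below: treat each of the 32 bytes as one slice and draw one output bit from each slice’s MSB, so biasing the full output means grinding a header whose hash matches 32 chosen MSBs at once (on the order of 2^32 attempts) rather than flipping one bit.

```python
# Interpretation sketch: one output bit per byte slice of a 32-byte value
# (blockhash or prevrandao), "diversifying" entropy across the whole word.

def slice_msbs(value32: bytes) -> int:
    assert len(value32) == 32
    out = 0
    for b in value32:                # one slice per byte
        out = (out << 1) | (b >> 7)  # keep only the slice's most significant bit
    return out                       # 32 bits drawn from 32 separate slices
```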