There is an opportunity cost of the funds being locked on the CL until they are withdrawn.
An attacker could just obtain validators with indices spread out in 100k-index increments and direct the deposit spam at whichever one gets the withdrawal sweep next. That could reduce the lockup delay to under a day. Or, they could spam only during the last day before the sweep hits them. Current mainnet could be filled up in ~3 years; it’s a lot, but not impossible. We’d also see it happening and could set up a second deposit contract to take over once the first one is full. Agreed that this griefing attack would probably not be too interesting, but I’d just like to see a computation of the actual number here (see the back-of-envelope sketch below): is it still around a year to fill up, or are we reaching months territory?
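As a rough back-of-envelope sketch, not a definitive analysis: if the deposit tree caps out at 2^32 leaves and deposit throughput is gas-bound, the fill-up time lands at roughly three years. The per-`deposit()` gas cost (~60k), the 30M block gas limit, and the 12-second block time are all assumptions here, not measured figures:

```python
# Back-of-envelope: how long would it take to fill the deposit contract?
# Every constant below is an assumption to double-check, not a measured value.

MAX_DEPOSITS = 2**32            # deposit Merkle tree has depth 32
BLOCK_GAS_LIMIT = 30_000_000    # assumed mainnet block gas limit
GAS_PER_DEPOSIT = 60_000        # assumed cost of one deposit() call
SECONDS_PER_BLOCK = 12          # post-merge slot time

deposits_per_block = BLOCK_GAS_LIMIT // GAS_PER_DEPOSIT   # 500
blocks_to_fill = MAX_DEPOSITS // deposits_per_block       # ~8.59M blocks
years_to_fill = blocks_to_fill * SECONDS_PER_BLOCK / 86_400 / 365.25

print(f"{deposits_per_block} deposits per full block")
print(f"~{years_to_fill:.1f} years to fill the tree")     # ~3.3 years
```

The result scales linearly with the assumed gas cost per deposit, so pinning down the real per-deposit cost directly answers the year-vs-months question.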
- The order of deposit processing is not enforced
- `index` field is basically required for the transition period.
These two are related, right? If it turns out to be a problem for decentralized staking pools that rely on the withdrawal credentials being locked by the deposit with the lowest index, the design may have to change to something that processes deposits in order. In that case, the `index` field may go back to not being required, right?
Or, would it still be required because EIP-7685 documents that “Within the same type, order is not defined”? For my thoughts on EIP-7685, please also refer to EIP-7251: Increase the MAX_EFFECTIVE_BALANCE - #8 by etan-status and EIP-7685 General purpose execution layer requests - #11 by etan-status. Without EIP-7685, we could just replicate the SSZ structures, i.e., put the `DepositReceipt` as-is into both the EL block header and the CL `ExecutionPayloadHeader`, and the problems should be gone (a sketch follows below). We can discuss at the next ACDE: Execution Layer Meeting 187 · Issue #1029 · ethereum/pm · GitHub
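For illustration, a minimal sketch of what that replication could look like, written with remerkleable (the SSZ library the consensus specs use). The `DepositReceipt` fields follow EIP-6110; the list bound, the stub payload, and their names are hypothetical placeholders, not a finalized design:

```python
from remerkleable.basic import uint64
from remerkleable.byte_arrays import Bytes32, Bytes48, Bytes96
from remerkleable.complex import Container, List

MAX_DEPOSIT_RECEIPTS_PER_PAYLOAD = 8192  # hypothetical bound

# EIP-6110 deposit receipt, replicated as-is on both layers.
class DepositReceipt(Container):
    pubkey: Bytes48                  # BLS public key
    withdrawal_credentials: Bytes32
    amount: uint64                   # Gwei
    signature: Bytes96               # BLS signature
    index: uint64                    # the per-deposit index under discussion

# Stub payload: the EL block header and the CL ExecutionPayloadHeader
# would then both commit to hash_tree_root(deposit_receipts).
class ExecutionPayloadStub(Container):
    deposit_receipts: List[DepositReceipt, MAX_DEPOSIT_RECEIPTS_PER_PAYLOAD]

payload = ExecutionPayloadStub()
print(payload.deposit_receipts.hash_tree_root().hex())
```

Since both layers would commit to the same `hash_tree_root`, neither ordering nor an extra index convention would need to be re-specified per layer.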
- Do you suggest passing `deposit_count` alongside each deposit, or alongside a bunch of deposits?
It’s a detail and doesn’t matter deeply. For smart contract verifiers, having an index per deposit is probably easier to check. `Withdrawal`s currently have a per-withdrawal index on chain as well (see the sketch below). Overall, I’d recommend the approach most consistent with the existing design, as in, replicating the consensus data structure as closely as possible.
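For reference, this is the existing Capella `Withdrawal` container with its per-withdrawal `index`, the on-chain precedent a per-deposit index would mirror (same remerkleable-based sketch style as above; `ByteVector[20]` stands in for `ExecutionAddress`):

```python
from remerkleable.basic import uint64
from remerkleable.byte_arrays import ByteVector
from remerkleable.complex import Container

# Capella Withdrawal container: each withdrawal carries its own
# globally increasing index on chain.
class Withdrawal(Container):
    index: uint64               # WithdrawalIndex
    validator_index: uint64     # ValidatorIndex
    address: ByteVector[20]     # ExecutionAddress
    amount: uint64              # Gwei

w = Withdrawal(index=42, validator_index=7, amount=32_000_000_000)
print(w.hash_tree_root().hex())
```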