Discussion topic for EIP-7892
Update Log
- 2025-02-28: Add EIP: Blob Parameter Only Hardforks
External Reviews
None as of 2025-02-28
Outstanding Issues
None as of 2025-02-28
- The `maxBlobsPerTx` field is optional and defaults to the value of `max` when omitted. It is not used by the consensus layer.
I think defaulting to `max` is not great. It was already a source of misunderstanding in the past, when people assumed it was 9 because `max` was 9. We should avoid repeating that mistake. I would either keep it out of BPO scope or make it mandatory.
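To make the objection concrete, here is a hypothetical sketch of how a client might resolve the quoted default. The field names follow the quoted EIP text; the resolution logic itself is an assumption for illustration, not any client's actual implementation.

```python
# Hypothetical sketch: resolving a blob-schedule entry where
# `maxBlobsPerTx` is optional and silently defaults to `max`.
# Field names come from the quoted EIP text; the logic is assumed.

def resolve_blob_params(entry: dict) -> dict:
    """Fill in maxBlobsPerTx with the value of `max` when omitted."""
    resolved = dict(entry)
    resolved.setdefault("maxBlobsPerTx", entry["max"])
    return resolved

# An entry that omits the field silently inherits `max`,
# which is exactly the ambiguity the comment above objects to:
entry = {"target": 6, "max": 9}
print(resolve_blob_params(entry)["maxBlobsPerTx"])  # 9
```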
I'd like to suggest that instead of the bpo* naming convention, we go with something a bit more fun (and uniform with the glacier forks).
Here are a few options:
I like it.
2 cents: Comets / asteroids are quite blobby and would be kind of close to the "glaciers" (ice) of space? Aligns with the stars naming concept. I think asteroids have more common names, but it looks like there are plenty of comet common names to pick from (https://ssd.jpl.nasa.gov/tools/sbdb_query.html).
I wonder why only the ForkDigest is rolled on a BPO, without the ForkVersion.
The problem with only rolling the ForkDigest is that after the BPO activates, network traffic from clients that did not upgrade remains valid on the network partition of clients that upgraded. This essentially provides withholding-as-a-service, which was a big topic back when fork choice security was revisited.
Questions are:
Full hard forks require extensive coordination, testing, and implementation changes beyond parameter adjustments. For example, in Lighthouse, a typical hard fork implementation requires thousands of lines of boilerplate before any protocol changes occur. BPO forks streamline this process by avoiding the need for this boilerplate code.
The EIP rationale doesn't make sense to me. The same coordination is required for a BPO: all users have to update clients before it activates, and the node consumes more resources after the fork, possibly requiring hardware upgrades; the experience for users is exactly the same as for a full-blown fork.
Fork version could be similarly designed as a "parameter adjustment". One has to be careful, though, to choose globally (across all chains) unique (fork_version, genesis_validators_root) tuples to avoid slashings across chains.
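The slashing concern above comes from how (fork_version, genesis_validators_root) feed into signing domains. A minimal sketch, loosely following the shape of the consensus-spec `compute_fork_data_root` / `compute_domain` helpers; sha256 here stands in for hash_tree_root, so the exact bytes differ from a real client, but the structural point holds:

```python
import hashlib

# Sketch: how (fork_version, genesis_validators_root) determine the
# signing domain. sha256 is a stand-in for SSZ hash_tree_root.

def fork_data_root(fork_version: bytes, genesis_validators_root: bytes) -> bytes:
    return hashlib.sha256(fork_version + genesis_validators_root).digest()

def compute_domain(domain_type: bytes, fork_version: bytes, gvr: bytes) -> bytes:
    # 4-byte domain type followed by 28 bytes of the fork data root.
    return domain_type + fork_data_root(fork_version, gvr)[:28]

# Two chains that reuse the same (fork_version, genesis_validators_root)
# tuple produce identical domains, so one signed message is valid on both,
# which is what makes cross-chain slashing possible:
d1 = compute_domain(b"\x00\x00\x00\x00", b"\x05\x00\x00\x00", b"\x11" * 32)
d2 = compute_domain(b"\x00\x00\x00\x00", b"\x05\x00\x00\x00", b"\x11" * 32)
assert d1 == d2
```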
@ethDreamer explains EIP-7892 for upgrading blob parameters without a full protocol hardfork.
When they were initially proposed, it was because forks for some clients (iirc) were "expensive" if we're just updating one field.
In hindsight I'd just go for a fork and take the overheads, and address why it's so hard in how we're doing things; or I'd push harder for not caring and setting a high maximum that's "sane".
The real "nuts and bolts" of these forks are in execution, and they're a real fork at that layer. In execution they need to address the target etc., while we're only interested in the maximum value for our single validation rule that comes into play.
Initially BPO didn't adjust the fork digest, and it was seen as a problem because we'd have gossip that is now potentially "not compatible" on the same topics, so adjusting the digest was a way of ensuring that our topics should have relatively low noise.
Is BPO safe for fork choice? Yes: we're only validating gossip. If we're wrong (too low), then we won't pass the gossip to our execution layer and we'll drop off chain, but we'd also not see it because of the digest mismatch.
Is BPO safe from a sync perspective? I can't see why it'd change anything here; we can sync across a fork boundary now.
Why not roll the fork version? We had the context we needed to make a unique digest, which is why we changed compute_fork_digest in the way we did. By rolling the epoch and blob parameters into the digest, it's basically the same as changing the fork version, in that it changes gossip topics at the right time.
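A hedged sketch of the idea described above: folding the BPO activation epoch and blob parameters into the fork digest so that gossip topics rotate at the fork boundary even though fork_version is unchanged. The actual `compute_fork_digest` in the consensus specs differs in detail; sha256 stands in for hash_tree_root and the field encoding is assumed.

```python
import hashlib

# Sketch: a fork digest that mixes in the BPO epoch and blob parameters,
# so the 4-byte digest (and hence the gossip topic names) changes at a
# BPO without bumping fork_version. Encoding details are assumptions.

def compute_fork_digest(fork_version: bytes, genesis_validators_root: bytes,
                        bpo_epoch: int, max_blobs: int) -> bytes:
    base = hashlib.sha256(fork_version + genesis_validators_root).digest()
    mix = hashlib.sha256(base
                         + bpo_epoch.to_bytes(8, "little")
                         + max_blobs.to_bytes(8, "little")).digest()
    return mix[:4]

gvr = b"\x22" * 32
before = compute_fork_digest(b"\x06\x00\x00\x00", gvr, 0, 9)
after = compute_fork_digest(b"\x06\x00\x00\x00", gvr, 123_456, 12)
assert before != after  # topics rotate at the BPO, same fork_version
```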
Basically we've implemented a fork without the full backing of a fork, and it got complicated, but it was preferred by some CL clients (iirc). It has the same requirements for upgrade etc., and you'd lose the network peers if you don't upgrade: it's a fork.
I can't see why it'd change anything here, we can sync across fork boundary now.
A regular fork boundary ensures that there is only a single gossip topic at any given time where valid data can be exchanged. This is different from BPO forks, where traffic from additional network partitions can also be valid, despite being withheld from the local view.
by rolling in epoch and blobs into the digest, it's basically the same as changing fork version
No, it is not the same, as traffic from other network partitions remains valid if only the fork digest is bumped.
and you'd lose the network peers if you don't upgrade
The tricky part about the fork transition is that Ethereum PoS is based around a 2/3 honest majority. If 1/3 don't upgrade / are malicious, and they get the first couple of slots after a fork (low probability, but not practically impossible), that's where the tricky situation may show up.
A misconfigured / malicious / buggy peer may re-broadcast data from one topic to another, maybe selectively publishing only partial blobs / blocks on the other network; crucially, being the only data source on the other network partition for a large chunk of data could impact peer scoring. Also, they can selectively reveal blocks to some of the validators to trick them into attesting to the other partition, without the data being widely circulated on the validator's primary partition.
I don't have the capacity to run a proper study here to see the worst impact that a 1/3 minority in a somewhat privileged network partition may have, i.e., with the ability to trick honest members of the 2/3 majority into attesting to their chain as well, possibly justifying it. I do recall the split-view / justification-withholding stuff from 2023. However, that was applicable more broadly, while my concerns here only apply to the initial 1-2 epochs of the fork.
When they were initially proposed, it was because forks for some clients (iirc) were "expensive" if we're just updating one field.
If the shipped clients already support a future scheduled but not yet activated BPO configuration, one could just fast forward and apply that config from the get-go…
And otherwise, if the clients don't yet support a future BPO config (but already schedule it), and further software updates are needed to support it, the fork is no longer "just updating one field" but involves more sophisticated work…
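The "fast forward" case above reduces to a lookup: if future BPO entries are already shipped in the client config, the active blob parameters at any epoch are determined by the schedule. A minimal sketch, with assumed field names and illustrative values:

```python
# Sketch: picking the active blob parameters from a shipped BPO schedule.
# Entries are (activation_epoch, max_blobs_per_block), sorted ascending.
# Epochs and values here are illustrative, not a real schedule.

BPO_SCHEDULE = [
    (0, 6),
    (100_000, 9),
    (200_000, 12),
]

def max_blobs_at(epoch: int) -> int:
    """Return the max-blobs parameter active at `epoch`."""
    active = BPO_SCHEDULE[0][1]
    for activation_epoch, max_blobs in BPO_SCHEDULE:
        if epoch >= activation_epoch:
            active = max_blobs
        else:
            break
    return active

print(max_blobs_at(150_000))  # 9
```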
In hindsight I'd just go for a fork and take the overheads
Could still be done. Like, just don't schedule any BPO with Fulu, and ship BPO1/2/3 as proper forks (possibly in the same release) with:
Ultimately, the focus should be on the potential security impact.
At the very least, the EIP should acknowledge the security discussion: EIP-7892: Blob Parameter Only Hardforks
Coming in during last call to make a tiny meaningless change: Give Blob Parameter Only Hardforks more human names by SamWilsn · Pull Request #10490 · ethereum/EIPs · GitHub