Should consensus-layer protocol presets be exposed to the EVM?

Posting at the pre-draft stage to test whether this idea has merit before considering whether to develop a full EIP draft.

Background. Since EIP-4788, EVM contracts can verify dynamic BeaconState fields via SSZ proofs against the beacon block root. They cannot verify the protocol presets themselves (MIN_SLASHING_PENALTY_QUOTIENT, BASE_REWARD_FACTOR, EPOCHS_PER_SLASHINGS_VECTOR, and so on) because those values exist only in CL client source code, not in any state field. Any contract that needs them today either hardcodes them or maintains a fork-keyed table updated by governance (EigenLayer’s BeaconChainProofs is the canonical example). Both options reintroduce a trusted party. Removing it requires redeploying the contract at every fork that touches a referenced value, which fragments liquidity and breaks integrations. For protocols holding staked collateral this is not a viable path.

My question. Has there been discussion of adding a small commitment to BeaconState so that contracts could verify presets the same way they verify any other state field? Something like:

class BeaconState(Container):
    # ... existing fields ...
    preset_root: Root  # hash_tree_root of the active Preset container

In consensus-specs terminology, “preset” refers to the layer of penalty/reward/churn parameters, as distinct from per-network config values. The field would be updated atomically at fork transitions inside process_fork and appended at the end of BeaconState to preserve existing generalized indices.
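To make the fork-transition hook concrete, here is a runnable sketch in the consensus-specs style, with a toy hash_tree_root standing in for real SSZ merkleization; the preset contents and the dataclass shape are illustrative assumptions, not spec code:

```python
from dataclasses import dataclass
from hashlib import sha256

# Toy stand-in for SSZ merkleization: real consensus-specs code would
# take the hash_tree_root of an SSZ Preset container, field by field.
def hash_tree_root(preset: dict) -> bytes:
    data = b"".join(f"{k}={v};".encode() for k, v in sorted(preset.items()))
    return sha256(data).digest()

@dataclass
class BeaconState:
    # ... existing fields elided ...
    preset_root: bytes = b"\x00" * 32

def process_fork(state: BeaconState, new_preset: dict) -> None:
    # Commit the post-fork preset atomically with the fork transition,
    # so proofs against preset_root always reflect the active values.
    state.preset_root = hash_tree_root(new_preset)

# Hypothetical preset values, for illustration only.
state = BeaconState()
process_fork(state, {"BASE_REWARD_FACTOR": 64, "SYNC_COMMITTEE_SIZE": 512})
```

Because the update happens inside process_fork, there is no slot at which a proof against the beacon block root could yield a stale preset commitment.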

CL-only change. No execution-layer changes are required. Contracts verify preset_root against the beacon block root exposed by EIP-4788 using the same SSZ proof pattern used for any other BeaconState field.
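For concreteness, the contract-side check reduces to an ordinary single-leaf Merkle branch verification keyed by the field's generalized index, the same pattern EIP-4788 consumers already use. A minimal sketch over a toy two-leaf tree (a real proof walks the full beacon-block/BeaconState structure, and the branch ordering convention here is the standard low-bit rule):

```python
from hashlib import sha256

def verify_ssz_branch(leaf: bytes, branch: list, gindex: int, root: bytes) -> bool:
    # Walk from the leaf up to the root, ordering each hash pair by the
    # generalized index's low bit (1 = right child, 0 = left child).
    node = leaf
    for sibling in branch:
        if gindex & 1:
            node = sha256(sibling + node).digest()
        else:
            node = sha256(node + sibling).digest()
        gindex >>= 1
    return node == root

# Toy tree: preset_root as the right child (generalized index 3).
preset_root = b"\x02" * 32
other_field = b"\x01" * 32
block_root = sha256(other_field + preset_root).digest()
ok = verify_ssz_branch(preset_root, [other_field], gindex=3, root=block_root)
```

The same helper verifies any other BeaconState field; only the leaf, branch, and generalized index change.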

The motivation. I ran into this trying to build a trustless dapp that needs to subtract a validator’s maximum possible penalty over a given time window from their effective balance. Without access to the relevant presets, there is no way to compute that bound on-chain without hardcoding the parameters or introducing governance, both of which break the trustlessness of the dapp. An on-chain light client that needs SYNC_COMMITTEE_SIZE to verify sync aggregates has the same trust surface.
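To show the dependency concretely, here is the kind of bound the dapp needs, with a deliberately simplified penalty formula; the real computation splits initial and correlation penalties with fork-dependent quotients, so treat the arithmetic and the illustrative preset values below as assumptions, not the spec:

```python
# Simplified upper bound on a validator's slashing penalty. The initial
# penalty divides by MIN_SLASHING_PENALTY_QUOTIENT; the correlation
# penalty's proportional term saturates at the full effective balance
# in the worst case, which is all a bound needs to assume.
def max_penalty_bound(effective_balance_gwei: int, preset: dict) -> int:
    initial = effective_balance_gwei // preset["MIN_SLASHING_PENALTY_QUOTIENT"]
    correlation = min(
        effective_balance_gwei * preset["PROPORTIONAL_SLASHING_MULTIPLIER"],
        effective_balance_gwei,
    )
    return initial + correlation

# Illustrative values; with preset_root these would come from an SSZ proof
# instead of being hardcoded at deploy time.
preset = {"MIN_SLASHING_PENALTY_QUOTIENT": 128, "PROPORTIONAL_SLASHING_MULTIPLIER": 3}
bound = max_penalty_bound(32_000_000_000, preset)
```

Both preset names feed directly into the result, which is exactly why a fork that rescales either one strands a hardcoding contract.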

EIP-8198 (Quick Slots), currently in draft, is a recent example: a single slot-duration change rescales BASE_REWARD_FACTOR, INACTIVITY_PENALTY_QUOTIENT_BELLATRIX, and several other presets. Every contract that hardcodes any of these values would have to be redeployed.


More broadly, Ethereum’s most defensible advantage is that staked capital can be used as trust-minimized collateral for slashing insurance, lending against validator yield, issuance of treasury-like products, and similar applications. No other chain can offer this natively, because no other chain has both a large native staked base and the EVM-level expressiveness to build against it. Closing this specific gap removes a trust surface from validator-collateral applications specifically, and is the kind of small, well-scoped change that unlocks application-layer progress without requiring further protocol-layer redesign.

I’d like to know whether this has been proposed before, and if not, whether there is a reason experienced CL researchers would consider it disqualifying.


This is an interesting direction - especially the goal of removing trusted assumptions around the preset values.
It does feel like this is a part of a broader pattern:

we keep exposing more internal state so contracts can reconstruct what happened in a trust-minimized way.

in your example, even if presets are committed into state and provable via SSZ, a contract still has to interpret those values to compute something like a validator’s maximum penalty.

That means a verifier is still relying on:

  • the execution context
  • how that state is interpreted
  • the logic used to derive the outcome

so this improves access to data, but verification of a specific result (“this penalty bound is correct”) is still a reconstruction problem.

curious whether improving state access is sufficient here, or whether this points to a missing layer for verifying specific outcomes independently of the system that produced them.


@Damonzwicker thanks for engaging with this.

You’re pointing at a structural gap, not just an implementation detail. Even with perfect preset access plus every relevant BeaconState field merkleized, a contract computing “validator X’s maximum penalty” is still a re-implementation of the CL spec in Solidity. The two run in different VMs with different semantics (integer rounding, division order, overflow behavior), and the contract version has to be maintained separately as the spec evolves. No amount of additional state exposure closes that gap.
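The divergence is easy to trigger even without a spec change: integer division does not distribute over multiplication, so a port that reorders a spec expression silently computes a different value. A minimal illustration (the numbers are arbitrary, not taken from the spec):

```python
# The CL spec is careful about operation order because integer division
# truncates; reordering (a * b) // c as a * (b // c) changes the result.
balance, numerator, denominator = 31_999_999_999, 3, 4

spec_order = balance * numerator // denominator   # multiply first, then divide
reordered = balance * (numerator // denominator)  # divide first: 3 // 4 == 0
```

This is the class of bug that preset exposure alone cannot catch, since both versions read identical inputs.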

The full fix lives in ZK proofs of CL state transitions, where a contract verifies “the CL applied penalty X to validator Y” instead of recomputing it. At that point data access and outcome verification collapse into the same thing. We’re not there yet, and I don’t think it’s reasonable to gate near-term improvements on getting there.

Within the re-implementation paradigm we currently live in, I’d split the trust assumptions into three layers by frequency:

  • Constants drift: every fork or two. PROPORTIONAL_SLASHING_MULTIPLIER went 1 → 2 → 3 across Phase 0 / Altair / Bellatrix.
  • Formula structure drift: one major instance in five years (Altair introducing the inactivity score system). Post-Merge the formula structure has been mostly stable, with Electra adding compounding-validator scaling.
  • Beacon state shape drift: rare and usually SSZ-versioned.

preset_root collapses the first layer. That doesn’t make a contract fully trustless, but it removes the most frequent and most fixable trust assumption on the path. The surface shrinks from “constants AND formula AND state shape” to “formula AND state shape.” For non-upgradeable contracts that’s a meaningful improvement, even though zero trust isn’t reachable without ZK CL proofs.

The cost looks small (no EL changes, one CL-side field populated at fork transitions). The upside is a category of products (validator-yield collateral, slashing insurance, restaking) that today require oracles or governance trust to operate.

So my read of the proposal isn’t “this makes Ethereum trustless for validator-collateral apps” because no, it doesn’t, and the path past re-implementation runs through ZK. My read is “this collapses the highest-frequency trust assumption on a path we’ll keep walking incrementally.” Net-positive regardless of how far we still have to go.


This is a clean framing of the problem space.

I agree with your split:

constants drift
formula drift
state-shape drift

That distinction is exactly why preset_root is valuable even if it does not solve the full verification problem.

It does not make the contract’s derived result trustless. The contract still has to implement the formula correctly, preserve CL semantics, and keep pace with structural changes.

But it removes the weakest and most frequently changing dependency: uncommitted constants.

Presets aren’t just “configuration”—they become implicit consensus assumptions inside application logic. If those values change and the application cannot verify them from consensus, trust is reintroduced exactly where the system is trying to eliminate it.

So I’d frame it as:

preset_root does not prove the outcome.
It makes a critical input to outcome verification consensus-addressable.

That’s a meaningful reduction in trust surface.

Longer term, I agree the end state is proving CL-derived outcomes directly (e.g., via ZK), where contracts verify results instead of re-implementing them.

Even in that model, though, the result itself is still produced and verified within a specific system. A remaining question is how that result becomes independently verifiable outside that context.

Near-term, committing presets into BeaconState looks like a well-scoped step that improves guarantees without expanding protocol complexity.

The main question I’d raise for CL folks is whether introducing preset_root creates any non-obvious complexity around fork transitions, versioning, or SSZ stability, and whether those costs meaningfully outweigh the benefit of collapsing constant drift.

Hey @tomoglava — your framing here pushed my thinking further, so I wrote up a pre-draft expanding on the gap you’re pointing at:

https://ethereum-magicians.org/t/pre-draft-toward-a-standard-for-portable-verification-of-execution-outcomes/28399

This line in particular stuck with me:

“No amount of additional state exposure closes that gap.”

That clarified something important: the issue isn’t just what data is available, but that verification of outcomes still requires reconstructing them within the same system (or an equivalent one).

I tried to generalize that into a broader question:

even with correct data and proofs, do we actually have a portable, system-independent way to verify a specific execution outcome as a standalone claim?

My read is that today we don’t—verification still reduces to:

  • re-executing logic, or

  • relying on a system that already did

So I framed it as a potential missing layer: a verification boundary between execution/data and independently verifiable outcomes.

Would really appreciate your take on whether that framing resonates, or if I’m stretching the implication too far.