Hi all, over the past few weeks I’ve been working on a Rust implementation of Falcon-512 signature verification in a fork of revm-precompile, the precompile library used by Reth (https://github.com/paradigmxyz/reth). The goal is to help support and de-risk EIP-8052 by providing a concrete, test-backed implementation that aligns with the current draft spec, and to serve as a practical step toward post-quantum signature support in the Ethereum client ecosystem.
The implementation follows the modular split between Hash-to-Point and core verification, with current support for the NIST-compliant SHAKE256 Hash-to-Point path. It is structured so it can be cleanly wired into precompiles once addresses and remaining spec details are finalized. I’m sharing it now both to make the work visible and to sanity-check and correct a few consensus-critical details before pushing further upstream.
The implementation fork is here:
https://github.com/mindlapse/revm/tree/falcon/crates/precompile/src
At a high level, this includes:
- A full Falcon-512 verification pipeline matching the EIP’s split between Hash-to-Point and core verification.
- Integration into revm-precompile, with fixed-cost precompile semantics aligned with existing precompiles.
- Known Answer Test coverage using the official Falcon submission package (https://falcon-sign.info/), plus ~180 additional unit tests covering encoding, NTT/INTT behavior, norm bounds, and failure cases.
Unit tests and KAT verification can be run with:
cargo test -p revm-precompile --features falcon
Current status and scope
- The Falcon core verification logic and SHAKE256 Hash-to-Point implementation are complete and passing KATs.
- The code is wired internally but not yet mapped to concrete precompile addresses, since addresses are still TBD in the EIP.
- The precompile entrypoints exist as callable APIs, but are not yet registered in any default fork set.
- The Keccak-PRNG Hash-to-Point variant is not implemented yet.
Before proceeding further, I’d appreciate clarification on a few spec details that affect consensus-critical parsing.
Challenge polynomial padding and length
The EIP states that the 512 coefficients (14 bits each) are concatenated and then “left-pad the final byte with zero bits to reach exactly 897 bytes.” Since 512 × 14 = 7168 bits = 896 bytes exactly, this wording is ambiguous. Could you confirm:
- Whether the challenge encoding is intended to be exactly 897 bytes, and if so,
- Whether the extra byte is a leading zero byte (i.e., the first / leftmost byte of the 897), or whether padding is expected elsewhere.
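To make the arithmetic concrete, here is a minimal Rust sketch of the 14-bit packing (pack_14bit is a hypothetical helper for illustration, not code from the fork). It shows that 512 coefficients at 14 bits each fill 896 bytes exactly, leaving no partial final byte to pad, so reaching 897 bytes requires one whole extra byte, e.g. a leading zero byte under one reading of the spec text:

```rust
// Sketch: pack 512 coefficients of 14 bits each, MSB-first, into bytes.
fn pack_14bit(coeffs: &[u16; 512]) -> Vec<u8> {
    let mut out = Vec::with_capacity(896);
    let (mut acc, mut bits) = (0u32, 0u32);
    for &c in coeffs.iter() {
        acc = (acc << 14) | (u32::from(c) & 0x3FFF);
        bits += 14;
        while bits >= 8 {
            bits -= 8;
            out.push((acc >> bits) as u8); // cast masks off any stale high bits
        }
    }
    debug_assert_eq!(bits, 0); // 512 * 14 = 7168 bits = 896 bytes exactly
    out
}

fn main() {
    let packed = pack_14bit(&[0x1234u16; 512]);
    assert_eq!(packed.len(), 896); // exact fit: no zero-bit padding is ever needed
    // One reading of the EIP wording: prepend a zero byte to reach 897 bytes.
    let mut padded = vec![0u8];
    padded.extend_from_slice(&packed);
    assert_eq!(padded.len(), 897);
}
```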
Public key header handling
The EIP specifies that the public key is 897 bytes in the NTT domain. In the Falcon reference implementation and KATs, the public key encoding consists of:
- a 1-byte header (0x09 for Falcon-512, since log₂(512) = 9), followed by
- 896 bytes encoding 512 coefficients at 14 bits each, ordered from coefficient 0 to 511.
In the current EIP text (and in the implementation I’ve prepared), the 896-byte tail is interpreted as a packed bitstring of 512 coefficients mapped to the NTT domain in [0, q). Could you clarify:
- Whether the precompile should require the first byte to be exactly 0x09,
- If not, whether the first byte should be required to be 0x00, or whether any value should be accepted.
At the moment, the implementation assumes the public key provided to FALCON_CORE is already in the expected NTT-domain packed form, and this detail determines whether or how the first byte should be validated.
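To make the question concrete, here is a hypothetical parsing sketch (parse_pubkey, LOGN, and Q are illustrative names, not code from the fork) that enforces the reference encoding’s 0x09 header and the [0, q) range check on each coefficient; whichever header rule the EIP settles on would replace the marked check:

```rust
const Q: u32 = 12289; // the Falcon modulus q
const LOGN: u8 = 0x09; // log2(512): header byte in the reference encoding

fn parse_pubkey(pk: &[u8]) -> Result<Vec<u16>, &'static str> {
    if pk.len() != 897 {
        return Err("bad length");
    }
    // Open question: must this byte be exactly 0x09, exactly 0x00, or is it ignored?
    if pk[0] != LOGN {
        return Err("bad header");
    }
    // Unpack 896 bytes into 512 coefficients of 14 bits each, MSB-first.
    let (mut acc, mut bits) = (0u32, 0u32);
    let mut coeffs = Vec::with_capacity(512);
    for &b in &pk[1..] {
        acc = (acc << 8) | u32::from(b);
        bits += 8;
        if bits >= 14 {
            bits -= 14;
            let c = (acc >> bits) & 0x3FFF;
            if c >= Q {
                return Err("coefficient out of range"); // must lie in [0, q)
            }
            coeffs.push(c as u16);
        }
    }
    Ok(coeffs)
}

fn main() {
    let mut pk = vec![LOGN];
    pk.extend_from_slice(&[0u8; 896]);
    assert_eq!(parse_pubkey(&pk).unwrap().len(), 512);
    assert!(parse_pubkey(&[0u8; 897]).is_err()); // header 0x00 rejected under this rule
}
```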
Finally, a couple of additional clarification points that may be worth addressing explicitly in the EIP text, based on implementation experience:
- Input concatenation order for FALCON_CORE: the EIP lists signature, public key, and challenge as inputs, but an explicit byte-level layout (e.g., sig || pubkey || challenge) would remove ambiguity across clients, since a precompile receives only a single byte array as input (plus a gas limit).
- Gas semantics on malformed input: the “Gas burning on error” section states that malformed inputs or decompression failures should burn all gas supplied to the call. This differs from existing crypto precompiles such as ECRECOVER, where a fixed cost is charged whenever the precompile is callable and failure is signaled via empty output. Before locking this into client implementations, could you confirm that burning all remaining call gas in these cases is the intended EVM-level behavior, and that it is deliberately meant to diverge from the usual fixed-cost (bounded worst-case) precompile semantics, given that the worst-case cost of a malformed input then becomes the entire gas supplied to the call?
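On the first point, a byte-level layout could be specified as simply as the following slicing sketch (split_input, sig_len, and chal_len are hypothetical placeholders; only the 897-byte public key length comes from the EIP):

```rust
const PK_LEN: usize = 897; // public key length fixed by the EIP

/// Split a FALCON_CORE input assumed to be laid out as sig || pubkey || challenge.
/// `sig_len` and `chal_len` are placeholders; the EIP does not yet pin them down.
fn split_input(
    input: &[u8],
    sig_len: usize,
    chal_len: usize,
) -> Result<(&[u8], &[u8], &[u8]), &'static str> {
    if input.len() != sig_len + PK_LEN + chal_len {
        return Err("bad input length");
    }
    let (sig, rest) = input.split_at(sig_len);
    let (pk, chal) = rest.split_at(PK_LEN);
    Ok((sig, pk, chal))
}

fn main() {
    // Illustrative lengths only, to show the slicing is unambiguous.
    let input = vec![0u8; 100 + PK_LEN + 897];
    let (sig, pk, chal) = split_input(&input, 100, 897).expect("layout matches");
    assert_eq!((sig.len(), pk.len(), chal.len()), (100, PK_LEN, 897));
}
```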
Thanks for all the work that’s gone into EIP-8052 and for taking the time to review these questions. I’m very happy to iterate on the implementation, add tests, or adjust semantics as the spec evolves, and I’m keen to keep this aligned with how other clients are thinking about Falcon support.