Oh, sad. It seems we have another chicken-and-egg problem for requesting a precompile!

It seems to be a pattern that whenever a precompile is proposed, there is a lot of debate about whether it is likely to see much usage. The chicken-and-egg nature is that without deploying the precompile we can’t measure adoption, and without adoption it’s hard to argue for making it a precompile, hence the apparent lack of interest.

Good news: we propose a “progressive path towards precompiles”. The author proposes a precompile but, instead of requiring it to be deployed as a precompile first, deploys a precompile candidate, i.e. a traditional smart contract (with a higher gas cost per operation, though), and sees whether there is usage. Once usage is validated, the data can be used to (1) justify the need for such a precompile and (2) serve as a call pattern to help client implementors tune performance.

I can see that the EIP needs a champion to take it through. Hate to ask the obvious, but do any of the original EIP authors intend to take on that role (courtesy ping to @shamatar @ralexstokes)? Failing that, could we have a long-standing community member take it on (maybe @axic @matt @maxsam4)? Failing that, I would be happy to take on that role.

hi @QEDK! I intend to push for inclusion of this EIP in the next hard fork after Dencun

the core dev process hasn’t really focused on planning this next hard fork yet, so there hasn’t been much activity while we all focus on the Deneb/Cancun EIPs

I just implemented a KZG polynomial commitment to replace a Merkle tree system. The interpolation, commitment, and proof generation were done in Rust; I’m coming to write the verification contract now and just noticed this precompile has not been included yet.
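For readers unfamiliar with the scheme being swapped in here, below is a minimal toy sketch of the KZG algebra (not the poster’s Rust code). It keeps the trusted-setup secret tau in the clear so the opening identity f(tau) - y = q(tau) * (tau - z) can be checked without a pairing library; real KZG hides tau behind group elements [tau^i]G1 and verifies the same identity with a pairing check. Purely pedagogical, not secure:

```python
# Toy KZG-style polynomial commitment over the BLS12-381 scalar field.
# INSECURE illustration: tau is public here; real KZG only publishes
# [tau^i]G1 and replaces the final equality with a pairing check.

P = 0x73eda753299d7d483339d80809a1d80553bda402fffe5bfeffffffff00000001  # BLS12-381 scalar field modulus

def poly_eval(coeffs, x):
    """Horner evaluation of sum(coeffs[i] * x^i) mod P (coeffs low-to-high)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def commit(coeffs, tau):
    """Commitment is f(tau); in real KZG this would be [f(tau)]G1."""
    return poly_eval(coeffs, tau)

def open_at(coeffs, z):
    """Return (y, q) with y = f(z) and q = (f(X) - y) / (X - z)."""
    y = poly_eval(coeffs, z)
    # Synthetic division by (X - z): intermediate accumulators are the
    # quotient coefficients (high-to-low); the final one is the remainder y.
    q, acc = [], 0
    for c in reversed(coeffs):
        acc = (acc * z + c) % P
        q.append(acc)
    return y, list(reversed(q[:-1]))  # drop remainder, restore low-to-high

def verify(commitment, z, y, q_coeffs, tau):
    """Check f(tau) - y == q(tau) * (tau - z); real KZG does this via e()."""
    return (commitment - y) % P == (poly_eval(q_coeffs, tau) * (tau - z)) % P
```

For example, opening f(X) = 3X^2 + 2X + 1 at z = 7 gives y = 162 and quotient 3X + 23, which `verify` accepts against the commitment.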

woohooo! Any word on whether hash_to_curve will be included?
I see there’s an EIP for hash_to_curve on BN254 (EIP-3068: Precompile for BN256 HashToCurve Algorithms), but it seems to have gone stale.
Trying to use BN254 has proved pretty fruitless without hash_to_curve, and I’d be happy to help make it happen.

Is there any reason not to add BLS12_GTADD, BLS12_GTMUL, and BLS12_GTMULTIEXP as well? Additionally, why not have both BLS12_PAIRING_RESULT, which returns the actual pairing result, and BLS12_PAIRING, which checks that the pairing gives the identity? (Even SUB may be wise to add, but it’s non-essential.)

All of these are rather trivial additions when it comes to implementation details, and this is the logical time to add them rather than waiting another 3 years or so. Personally, I want it, and it feels like the right decision to have full-blown support for the curves rather than just partial support.

@ralexstokes It’s great to see we might finally get this EIP in!

A few comments:

it might be worth considering updating its status from “Stagnant” back to “Review”, as mentioned here. Only you or another author can do it, as per EIP-1:

Stagnant - Any EIP in Draft or Review or Last Call if inactive for a period of 6 months or greater is moved to Stagnant. An EIP may be resurrected from this state by Authors or EIP Editors through moving it back to Draft or its earlier status. If not resurrected, a proposal may stay forever in this status.

There is a PR currently open around it to correct the gas costs to match the existing ones found in geth. It requires an author’s approval too. Current gas costs were set based on a Rust implementation, IIUC, but it seems the Go one is a bit slower.

I believe the EIP should explicitly say that the BLS12_MAP_FP_TO_G1 and BLS12_MAP_FP2_TO_G2 precompiles clear the cofactor (I know it’s specified in its own document, but having it in the main EIP would make it clearer).

Finally, given there are both BLS12_MAP_FP_TO_G1 and BLS12_MAP_FP2_TO_G2 precompiles in this EIP, and I see it says:

Mapping function does NOT perform mapping of the byte string into field element (as it can be implemented in many different ways and can be efficiently performed in EVM), but only does field arithmetic to map field element into curve point.

It might be worth providing an example Solidity hash_to_field as well, to showcase that it can actually be performed efficiently in the EVM. Does anyone have one?
I’d expect most users to have to rely on hash_to_curve rather than just map_to_curve.
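For reference, here is a Python sketch of what such a hash_to_field computes, following RFC 9380’s expand_message_xmd with SHA-256 over BLS12-381’s base field Fp. The DST below is a placeholder; real deployments must choose their own domain separation tag, and a Solidity version would follow the same steps using the SHA-256 precompile or keccak256:

```python
import hashlib

# RFC 9380 hash_to_field sketch for BLS12-381's base field Fp, m = 1,
# using expand_message_xmd with SHA-256. Illustration, not audited code.

P = 0x1a0111ea397fe69a4b1ba7b6434bacd764774b84f38512bf6730d2a0f6b0f6241eabfffeb153ffffb9feffffffffaaab  # BLS12-381 base field modulus

def expand_message_xmd(msg: bytes, dst: bytes, len_in_bytes: int) -> bytes:
    b_in_bytes, r_in_bytes = 32, 64          # SHA-256 digest / block size
    ell = -(-len_in_bytes // b_in_bytes)     # ceil(len_in_bytes / 32)
    assert ell <= 255 and len(dst) <= 255
    dst_prime = dst + len(dst).to_bytes(1, "big")
    z_pad = b"\x00" * r_in_bytes
    l_i_b_str = len_in_bytes.to_bytes(2, "big")
    b0 = hashlib.sha256(z_pad + msg + l_i_b_str + b"\x00" + dst_prime).digest()
    b = [hashlib.sha256(b0 + b"\x01" + dst_prime).digest()]
    for i in range(2, ell + 1):
        tv = bytes(x ^ y for x, y in zip(b0, b[-1]))
        b.append(hashlib.sha256(tv + i.to_bytes(1, "big") + dst_prime).digest())
    return b"".join(b)[:len_in_bytes]

def hash_to_field(msg: bytes, count: int, dst: bytes) -> list:
    L = 64  # ceil((381 + 128) / 8): 128 bits of oversampling for uniformity
    uniform = expand_message_xmd(msg, dst, count * L)
    return [int.from_bytes(uniform[i * L:(i + 1) * L], "big") % P
            for i in range(count)]
```

The gas cost in the EVM is dominated by the hashing and the 64-byte modular reduction, which is why the EIP argues it can be done efficiently on-chain.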

@CluEleSsUK @JayWhite2357 I agree these changes could improve the UX of the precompile and allow more applications. To summarise the suggestions:

- Implement hash_to_curve as specified in the RFC standard
  - Currently only map_to_curve is supported; for most applications it would be used to implement hash_to_curve
  - Encourages use of the correct hashing technique, rather than the more obvious but less secure method of hashing to a scalar and multiplying by the generator (not suitable in the random oracle model)
  - Standardises the hashing technique (and therefore BLS signatures) used in contracts
- Introduce a new operation BLS12_PAIRING, change the existing operation with that name to BLS12_PAIRING_VERIFY, and add operations on the group GT: BLS12_GTADD, BLS12_GTMUL, BLS12_GTMULTIEXP
  - Currently the pairing operation can only be used to verify whether results in GT are equal, which is useful for BLS signatures
  - The new version of BLS12_PAIRING would return the element of GT
  - Supporting operations on GT would increase the applications of the precompile beyond signatures
  - This functionality is useful for identity-based encryption (the Boneh-Franklin scheme), attribute-based encryption, and functional encryption
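To make the Boneh-Franklin point concrete, here is a purely pedagogical Python sketch of the IBE flow with a simulated pairing: discrete logs are kept in the clear, so “points” are scalars (a stands for a·G) and e(aG, bG) = g^(ab) in a toy GT; bilinearity is exploited as e(Q_id, pk)^r = e(Q_id, r·pk). This is NOT secure and not BLS12-381 arithmetic; it only shows why applications need the pairing result itself rather than just an equality check:

```python
import hashlib
import secrets

# Toy Boneh-Franklin IBE with a simulated symmetric pairing. INSECURE:
# "points" are their own discrete logs; for illustration only.

Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # toy group order (an arbitrary 256-bit prime)
GT_P = 2**255 - 19   # modulus of the toy target group GT (arbitrary prime)
g = 5                # toy GT base element

def pairing(a: int, b: int) -> int:
    """Toy symmetric pairing: e(a*G, b*G) = g^(a*b) in GT."""
    return pow(g, (a * b) % Q, GT_P)

def h1(identity: str) -> int:
    """Hash an identity to a 'point' Q_id (a scalar in this toy model)."""
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % Q

def h2(gt_elem: int) -> bytes:
    """Derive a 32-byte mask from a GT element."""
    return hashlib.sha256(gt_elem.to_bytes(32, "big")).digest()

def setup():
    s = secrets.randbelow(Q - 1) + 1
    return s, s          # (master secret s, master public key s*G as toy scalar)

def extract(msk: int, identity: str) -> int:
    return (msk * h1(identity)) % Q        # private key d_id = s * Q_id

def encrypt(mpk: int, identity: str, m32: bytes):
    r = secrets.randbelow(Q - 1) + 1
    u = r                                  # U = r*G
    # e(Q_id, pk)^r computed via bilinearity as e(Q_id, r*pk)
    mask = h2(pairing(h1(identity), (r * mpk) % Q))
    return u, bytes(x ^ y for x, y in zip(m32, mask))

def decrypt(d_id: int, u: int, c: bytes) -> bytes:
    mask = h2(pairing(d_id, u))            # e(d_id, U) = e(Q_id, pk)^r
    return bytes(x ^ y for x, y in zip(c, mask))
```

Note that both encryption and decryption consume the pairing *value* as keying material, which a verify-only BLS12_PAIRING precompile cannot provide.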

Hash to field is already working nicely and passes all tests from the RFC. It currently costs 30,000-45,000 gas, so it could be called “efficient”.

Two issues remain. I can’t test the hash_to_curve implementation, as Foundry doesn’t implement the precompiles yet and I found no simple way to set up a local node with them activated to test using Hardhat.
Also, I can’t clear the cofactor yet for the G2 implementation, as the scalar I need to multiply by is way too big for the precompile. I started looking into the Budroni-Pintore optimization, but I fear it will be very expensive gas-wise.
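For what it’s worth, one generic workaround when the scalar exceeds the precompile’s 256-bit limit is plain scalar splitting, k·P = hi·(2^256·P) + lo·P, at the cost of a handful of MUL calls and one ADD (less clever than Budroni-Pintore, but simple). A toy sketch, with integers mod a prime standing in for G2 points and `mul`/`add` as hypothetical stand-ins for the MUL and ADD precompile calls:

```python
# Sketch: multiplying by a scalar wider than a precompile's 256-bit limit
# by splitting it. The "group" is integers mod a prime, standing in for G2.

N = (1 << 607) - 1  # toy group modulus (a Mersenne prime); stand-in only

def add(p, q):
    """Stand-in for the point-addition precompile."""
    return (p + q) % N

def mul(p, k):
    """Stand-in for the scalar-multiplication precompile (k < 2^256)."""
    assert 0 <= k < (1 << 256), "precompile-style scalar limit"
    return (p * k) % N

def mul_wide(p, k):
    """Multiply by a scalar up to 512 bits using only add/mul above."""
    assert 0 <= k < (1 << 512)
    hi, lo = divmod(k, 1 << 256)
    # Shift P left by 256 bits via two in-range multiplications.
    p_shifted = mul(mul(p, 1 << 255), 2)       # p * 2^256
    return add(mul(p_shifted, hi), mul(p, lo))  # hi*(2^256*P) + lo*P
```

The G2 effective cofactor is around 380 bits, so a single split suffices; whether the extra MUL calls beat Budroni-Pintore on gas is exactly the trade-off being weighed above.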

Hi all, there were some requests above for exposing the BLS12-381 pairing product, and after some discussion with core devs and others, I’m currently leaning towards leaving it out.

I’m currently adding EIP-2537 support to Constantine and at the same time providing metering for the worst-case scenario using the optimized constant-time routines and detailed operation counts.

However, while subgroup checks are mandatory for pairings, they are not for:

- elliptic curve addition
- elliptic curve multiplication
- multi-scalar multiplication (MSM)

For addition, this is not a problem with Jacobian coordinates; however, if using projective coordinates, in particular the complete formulas from “Complete addition formulas for prime order elliptic curves”, there might be an assumption that inputs lie in the prime-order subgroup. This needs test vectors with a point not in the subgroup.

For multiplication, endomorphism acceleration is widely used in BLS12-381 libraries, for a minimum of ~30% speedup. It will give a wrong result if the point is not in the correct subgroup.
The spec needs to spell out that implementations should not use endomorphism acceleration
(or that endomorphism acceleration is mandatory, with a test to check that all implementations give the same wrong result).
This needs test vectors with a point not in the subgroup.

For multi-scalar multiplication, we will have the same issue as with scalar multiplication. This needs test vectors with points not in the subgroup.
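The subgroup-check concern can be illustrated abstractly: model the full curve group as Z_m with m = n·h (prime subgroup order n, cofactor h). Points in the right subgroup are exactly the multiples of h, the standard check is n·P = O, and cofactor clearing is multiplication by h. A toy Python sketch (not curve arithmetic, just the group structure):

```python
# Toy model of subgroup membership: the full group is Z_m with m = n * h.
# The order-n subgroup is {0, h, 2h, ...}; the check is n*P == identity.

n = 13           # toy prime subgroup order
h = 4            # toy cofactor
m = n * h        # full group order

def in_subgroup(p):
    """Standard subgroup check: n*P must be the identity."""
    return (n * p) % m == 0

def clear_cofactor(p):
    """Multiplying by the cofactor maps any point into the subgroup."""
    return (h * p) % m

p_good = (h * 5) % m   # a multiple of h: inside the order-n subgroup
p_bad = 3              # not a multiple of h: outside the subgroup

assert in_subgroup(p_good)
assert not in_subgroup(p_bad)
assert in_subgroup(clear_cofactor(p_bad))
```

Test vectors like `p_bad` (valid encodings outside the subgroup) are exactly what is being asked for above, since endomorphism-accelerated multiplication is only correct on inputs like `p_good`.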

To be clear, since 33% one way is 50% the other, that’s 50us vs 75us per operation on my machine.

As an aside, the EIP should pick either the additive or the multiplicative notation, but not mix both; i.e., replace the name “multiexponentiation” with “multi-scalar multiplication” (MSM).

I can open 2 PRs for what I think is the sensible choice: