Q4. Accepting points in compressed form adds extra overhead to every call, but allows the caller to work with compressed coordinates. Is it worth it?

Q5. With some cleverness you can use this precompile for modular square roots. Is it possible to extract a modular inversion as well?

Q6. While a transaction has no assumption of privacy, this could be an issue when used in eth_call. A constant-time implementation also allows more code reuse with applications that require more side-channel protection, such as wallets. Should we implement constant-time algorithms?

Q8. Is it worth exploring a hash-to-curve-point function or a generic pairing function?

We do currently collect some precompile ideas under https://github.com/ewasm/ewasm-precompiles/issues, but since the above proposal doesn't utilise any ewasm-specific behaviour (such as the potential to avoid using the current ABI encoding), I'd suggest the best approach would be to propose this as an EIP, which can be implemented and tested on ewasm. The other newly implemented precompiles in the ewasm repository all have a corresponding EIP.

This seems to be an outline, if fleshed out it would become an EIP.

A (the?) major use case for this precompile, if I understand correctly, is doing Pedersen commitments inside contracts, cheaply. So it would supersede a Pedersen commitment precompile.

I created a Draft EIP and made a pull request. It links back here for discussion.

@axic since the above proposal doesn't utilise any ewasm-specific behaviour (such as the potential to avoid using the current ABI encoding), I'd suggest the best would be proposing this as an EIP, which can be implemented and tested on ewasm

Yes, I realized this and I don't want to add technical risk to this proposal by making it unnecessarily dependent on eWASM.

The current implementation strategy is to use the excellent Rust libraries developed by ZCash and target both native and ewasm compilation. We can then test and compare. It is then up to node developers to use the Rust library, the native library, or the WebAssembly library, or to develop their own.

@cdetrio A (the?) major use case for this precompile, if I understand correctly, is for doing Pedersen commitments inside contracts, cheaply.

The goal of this EIP is to subsume a class of problems, including but not limited to

Thank you very much for writing this down! I agree that it is perhaps too early to build in a dependency on eWASM.
I think the next steps would be to build a proof of concept and generate test cases for this precompile.

I'm going to work with @axic on updating 233. I think we want to see a PR against 1679 to have a "proposed" section that includes your EIP. The wiki can be a placeholder for this, or you can go ahead and do the PR now.

As we discussed, how you plan to implement — e.g. a single C or Rust or ??? implementation that can be linked into clients — might be good to hear. Client devs will have opinions.

There will be a reference implementation in Rust based on the existing libraries (in particular those by ZCash and The Matter Inc.).

The reference implementation will be production grade and will compile to a native library with a C API as well as to a WebAssembly version. Node developers are encouraged to use the reference implementation via the Rust library, the native C bindings, or the WebAssembly module; they can of course always decide to implement their own.

I started work on a simple implementation for EIP-1829:

Hopefully I will generate some test cases which can verify the functionality and ensure that implementations match each other.

How much would doing this computation inside the precompile benefit Pedersen commitments for rollup?

Hash-to-curve would be a good addition to this builtin: while you can do Y-coordinate recovery using the precompile, it would REVERT if you gave it an X coordinate that is not on the curve (something you may have to do in a loop several times), so doing it in practice would be annoying and tedious from Solidity.
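To make the loop concrete, here is a minimal try-and-increment sketch in Python. The curve (secp256k1) and the hash construction are illustrative choices, not part of any spec; the `pow(rhs, (p + 1) // 4, p)` square root is valid only because this field prime satisfies p ≡ 3 (mod 4). Each failed iteration is exactly the point where a Y-recovery precompile would REVERT.

```python
import hashlib

p = 2**256 - 2**32 - 977   # secp256k1 field prime, p ≡ 3 (mod 4)

def try_and_increment(msg):
    """Map bytes to a point on y^2 = x^3 + 7 by bumping a counter (sketch)."""
    ctr = 0
    while True:
        h = hashlib.sha256(msg + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(h, "big") % p
        rhs = (pow(x, 3, p) + 7) % p
        y = pow(rhs, (p + 1) // 4, p)   # deterministic sqrt attempt
        if y * y % p == rhs:            # x is on the curve: done
            return (x, y)
        ctr += 1                        # off the curve: the precompile would REVERT here

x, y = try_and_increment(b"hello")
assert (y * y - (pow(x, 3, p) + 7)) % p == 0  # resulting point satisfies the curve equation
```

Each attempt succeeds with probability roughly 1/2, so the loop terminates quickly in expectation, but the caller cannot bound the number of REVERTs in advance.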

It isn't possible to use this builtin for pairing operations, and I don't think it should be extended for that use case…

Idea: the largest prime less than $2^{256}$ is $2^{256} - 189$. This means the top 189 values cannot be valid field elements, and we could use them as flag values for the point at infinity, compressed points, or the number of tries for a try-and-increment value-to-point mapping.
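A sketch of how such flag values could be laid out. All the names (`INFINITY`, `COMPRESSED`, etc.) are hypothetical, not part of the EIP; the only facts used are that $2^{256} - 189$ is the largest 256-bit prime and that the 189 words above it can never be canonical field elements.

```python
# The 189 words in [2**256 - 189, 2**256) can never be canonical field
# elements for any 256-bit prime field, so they are free to carry metadata.

P_MAX = 2**256 - 189          # largest prime below 2**256
FLAG_BASE = P_MAX             # first of the 189 reserved words

INFINITY   = FLAG_BASE        # hypothetical flag: point at infinity
COMPRESSED = FLAG_BASE + 1    # hypothetical flag: next word is a compressed x
# ... up to 189 distinct flags are available in total

def is_flag(word):
    """A 256-bit word is a flag iff it exceeds every valid field element."""
    return FLAG_BASE <= word < 2**256

assert is_flag(INFINITY) and is_flag(2**256 - 1)
assert not is_flag(P_MAX - 1)        # largest ordinary field element
assert 2**256 - FLAG_BASE == 189     # exactly 189 flag values available
```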

Modular square root only has efficient deterministic algorithms when $p \equiv 3 \pmod{4}$ or $p \equiv 5 \pmod{8}$, which applies to nearly all curves. The algorithms are simple and fail gracefully when $p$ is composite.

There are no simple algorithms for other values of $p$.

I suggest we disallow any form of point recovery when neither $p \equiv 3 \pmod{4}$ nor $p \equiv 5 \pmod{8}$ holds.
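For reference, the $p \equiv 3 \pmod{4}$ case is a single exponentiation plus an explicit check. A minimal Python sketch (illustrative, not the reference implementation):

```python
# Deterministic modular square root for p ≡ 3 (mod 4).
# If p is prime and a is a quadratic residue mod p, then
# r = a^((p+1)/4) mod p satisfies r^2 ≡ a; the final check makes the
# failure case (non-residue, or composite p misbehaving) explicit.

def sqrt_mod(a, p):
    """Return r with r*r % p == a % p, or None if no square root is found."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    return r if r * r % p == a % p else None

# Example over the secp256k1 field prime (which satisfies p ≡ 3 mod 4):
p = 2**256 - 2**32 - 977
assert sqrt_mod(4, p) in (2, p - 2)
assert sqrt_mod(p - 1, p) is None  # -1 is a non-residue when p ≡ 3 mod 4
```

Note that the check at the end is what makes the algorithm "fail gracefully": a composite or unsuitable modulus simply yields `None` rather than a wrong answer.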

@vbuterin : The thing that has always stopped me from proposing a similar thing is, what happens if someone submits a non-prime modulus? There are different ways to implement elliptic curve addition and multiplication, and in general I don't expect that the different libraries have been tested to ensure that they fail in the same cases when the modulus ends up being composite, and this seems like a large and deep potential source of consensus failures. Unfortunately primality tests that take into account weird edge cases like Carmichael numbers but are also clean and efficient don't really exist.

So if the modulus is composite you end up with a ring instead of a field, the only difference being that inversion fails on more values than just zero.

Elliptic curve linear combinations, when implemented in the standard efficient way using Jacobian coordinates, use only ring operations (i.e. no inversion), so the whole computation is safe. What can go wrong is converting back to affine coordinates, which requires a single inversion. If this inversion fails, that would normally indicate the point at infinity. No new control flow, no edge cases being hit harder than otherwise, except for that last one.

Garbage in, garbage out still applies of course. Throw in a composite modulus and the thing you are computing is not guaranteed to be an elliptic curve.
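To make the "only inversion can fail" point concrete, here is a small Python sketch over a composite modulus. The modulus 91 and the specific values are illustrative; the point is that multiplication and exponentiation (all a Jacobian-coordinate linear combination needs) always succeed in Z/nZ, while inversion fails exactly when gcd(x, n) > 1, not only at zero.

```python
# With a composite modulus n we work in the ring Z/nZ rather than a field.
from math import gcd

n = 91  # composite: 7 * 13

# Ring operations never fail, composite modulus or not:
assert (3 * 41) % n == 32
assert pow(5, 17, n) == 31

def inv(x, n):
    """Modular inverse, or None where it does not exist (gcd(x, n) > 1)."""
    try:
        return pow(x, -1, n)  # Python 3.8+: raises ValueError if not invertible
    except ValueError:
        return None  # the affine conversion would read this as the point at infinity

assert inv(2, n) == 46       # gcd(2, 91) == 1: invertible, 2 * 46 ≡ 1 (mod 91)
assert inv(7, n) is None     # gcd(7, 91) == 7: nonzero but not invertible
assert gcd(7, n) > 1         # the exact condition under which inversion fails
```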

Existing libraries do not seem particularly useful here. Most are designed for a particular curve. The remaining ones are mostly for research purposes and are not security hardened. (They would rely on probabilistic algorithms like Miller–Rabin or Tonelli–Shanks.)

The current implementation strategy is to develop a Rust library specific to this EIP which is generic and efficient but only uses deterministic algorithms. This library will be available as a Rust crate, C bindings, and an eWASM module.

Currently the design allows you to use compressed coordinates as input. This gives developers some advantages, but requires us to compute modular square roots. Modular square root has similar issues to inversion. Simple deterministic procedures involving only ring operations exist when $p \equiv 3 \pmod{4}$ or $p \equiv 5 \pmod{8}$. Most field moduli used in practice satisfy this by design. My current thinking is to simply reject compressed coordinates when the field modulus does not satisfy either condition, and to make compressed points optional.
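As a concrete illustration of what compressed-point recovery involves under the $p \equiv 3 \pmod{4}$ restriction, here is a Python sketch. The curve (secp256k1) and the function name are illustrative, not part of the EIP; the recovery is just "compute the right-hand side, take its square root, fix the parity", with an explicit rejection when no root exists.

```python
# Point decompression on y^2 = x^3 + a*x + b over a prime field with
# p ≡ 3 (mod 4). Illustrated on secp256k1 (a = 0, b = 7).

p = 2**256 - 2**32 - 977
a, b = 0, 7

def decompress(x, y_is_odd):
    """Recover (x, y) from x and a parity bit, or None if x is not on the curve."""
    rhs = (pow(x, 3, p) + a * x + b) % p
    y = pow(rhs, (p + 1) // 4, p)   # deterministic sqrt, valid since p ≡ 3 (mod 4)
    if y * y % p != rhs:
        return None                  # rhs is a non-residue: reject, don't guess
    if (y % 2 == 1) != y_is_odd:
        y = p - y                    # pick the root with the requested parity
    return (x, y)

# The secp256k1 generator's x recovers its known (even) y:
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert decompress(Gx, False) == (Gx, Gy)

# x = p - 2 gives rhs = -1, a non-residue whenever p ≡ 3 (mod 4): rejected.
assert decompress(p - 2, False) is None
```

The rejection branch is exactly where the precompile would refuse the input; with a modulus outside the two easy residue classes there is no comparably simple deterministic recovery, which motivates rejecting compressed coordinates there.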

Lastly, there's a feature request to do a form of hash-to-curve point. I think this can be incorporated elegantly and safely as well. I'll update the EIP soon to incorporate these ideas.

The evmc-vm crate will provide an API for "VMs" to implement. @chfast and I are considering providing precompiles as EVMC modules to be used with aleth. If this works out, it would be possible to also provide EIP1829 as an EVMC module.

Ah, I see. If the edge cases are that manageable, then I suppose it could work.

I think py_ecc might actually be generic enough to implement this fairly easily, or at least with a few tweaks. So there's at least one other thing that the Rust library could be checked against.

In addition to the separate Telegram chat, I'm also posting a link here, because this implementation is now at the point where more help is needed with test vectors and pricing than with the actual implementation.

I was just looking through the EIP. I couldn't quite figure out what the gas cost calculation/specification is. Could you clarify that in the EIP? The EIP should be a full specification, and it's currently lacking that.

EIP1829 is a superset of EIP665. Based on meeting feedback, my current understanding is that:

EIP665 is ready to go but not obviously worth including.

EIP1829 is not ready to go but is reasonably likely to be accepted at some future point.

Although it would obviously be welcome, ENS doesn't need this to go into Istanbul. Ergo, if it's plausible that EIP1829 will go in at some point, we have no need for EIP665, and ENS will patiently wait for EIP1829.