I agree, it is not as simple as “is it expensive or not?” For me it is a combination of how much gas would be appropriate for such a call and how often such a call would be used.
A justification for this should probably be added to the EIP. Something like:
Signatures are widely used (e.g. state channels, multi-signature wallets, and relays), but currently none of these signatures can be properly protected against cross-chain replay attacks. Since we assume that all of these signatures will use this opcode, an opcode is preferred for gas efficiency.
I think you have the likelihood of use right. The alternative to the current design is that everyone either manually configures a constant in their deployed bytecode or adds a deployment parameter, which requires deeper state-variable access to read. In aggregate, especially as these solutions get deployed more widely, that probably causes a worse impact on the state growth of Ethereum than querying CHAINID through a new opcode would.
I think it would be more reasonable to have a smaller chainId integer, but since EIP-155 did not specify a bound we now have a few chainIds of over 64 bits, so I think we should just use uint256. It would also match the current EIP-712 spec.
I think it’s largely decided to be an opcode; I don’t think a pre-compile would make sense for this functionality. The data type is an interesting question. I think it needs to be at least uint32, but uint256 (one machine word) would match the EIP-712 specification.
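To make the sizing concrete, here is a small sketch (plain Python, with made-up chain IDs) of why one full 256-bit machine word comfortably covers chain IDs that already exceed 64 bits, while uint32 would not:

```python
# Sketch: encoding a chainId as the 32-byte EVM word CHAINID would push.
# The large chainId below is hypothetical, chosen only to exceed uint64.

def to_evm_word(chain_id: int) -> bytes:
    """Encode a chainId as a 32-byte big-endian machine word (uint256)."""
    if not 0 <= chain_id < 2**256:
        raise ValueError("chainId must fit in uint256")
    return chain_id.to_bytes(32, "big")

mainnet = to_evm_word(1)
large = to_evm_word(2**80 + 7)   # too big for uint32 or uint64

# Both fit in one machine word, matching the EIP-712 uint256 chainId field.
assert len(mainnet) == len(large) == 32
assert int.from_bytes(large, "big") == 2**80 + 7
```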
@fubuloubu: If a hard-fork does occur, then code on one chain won’t be replayed on another chain.
@holiman: No… (…) We never change the chainids… are we?
@fubuloubu: Like an actual contentious hard-fork, where there’s two communities.
@holiman: Ri-i-ight… But…
@fubuloubu: So, they would change their chainid.
This is the core of the misunderstanding.
In an actual contentious hard-fork, both sides place claim on being “the one True chain” (refuse to let go of memetic artifacts: these are valuable); conceding and changing chainid is a move that weakens the claim, so it’s unlikely to happen.
As @fulldecent highlights, the issue has been side-stepped in the case of TheDAO split, because chainid was introduced after the fork;[1] no magic number was already more “valuable” than another, so any could be picked.
The proposal, as it stands, does not guarantee replay protection in the case of a contentious hard fork. For that to be true (procedurally), a chainid must change; but, as Martin said, we currently have no procedure to change chainid - neither who does it nor when. It has also never happened before.
What it can be useful for is same-code, same-address deployments across multiple chains.[2] For example, ones that want to use ENS on both main-net and Ropsten.[3] Or something like a cross-chain bridge…
On a more abstract level, there is no way to future-proof against “undesirable splits” without specifying exactly what to consider “undesirable”. Essentially, writing a fork oracle once the point of contention is known.[4]
A blanket condition like "chainid changes" won’t cut it, and - I’d argue - is counter-productive: both sides of the split will want to maintain contracts’ behaviour as it was before the split. This is no longer just a struggle for mind share, but now also a question of “how much of the ecosystem will act up?”
[1] This was the first time that the importance of cross-chain transaction replay protection on value-bearing networks was demonstrated.
[2] This is a niche use pattern, and I haven’t seen many people do it; certainly not brand-name projects.
[3] The ENS Registry contract is at different addresses; the TLDs are respectively .eth and .test for main-net and Ropsten; etc…
[4] It’s a lot of fun!
32 bits is insufficient for a chainId, for sure. Even without trying, we would see unintentional collisions soon. Right now we are considering intentional choices of chainId (“today is nice weather, I think I will choose chainId 42 for my network and nobody else should”). But in the future there will be many more chains, and they will be created programmatically. So we need to be concerned with unintentional collisions. The birthday problem says we only need about 65k networks for that to become likely. This will happen in the foreseeable future.
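A quick sketch of that birthday-bound arithmetic (the 65k figure is roughly sqrt(2^32); the 50% crossover is slightly higher):

```python
import math

# Birthday-bound sketch: with 32-bit chainIds chosen uniformly at random,
# how many networks until a collision becomes likely?

def collision_probability(k: int, space: int = 2**32) -> float:
    """Approximate probability that k random IDs contain at least one collision."""
    return 1.0 - math.exp(-k * (k - 1) / (2 * space))

# Around sqrt(2**32) = 65,536 networks the odds are already about 39%,
# and they pass 50% near 77,000 networks.
p_65k = collision_probability(65_536)
p_77k = collision_probability(77_163)
assert 0.35 < p_65k < 0.45
assert abs(p_77k - 0.5) < 0.01
```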
Regarding same code deployed at the same address on multiple chains
This is specified in ERC-1820 and is currently deployed.
Regarding what is a reference implementation
A reference implementation is some code in any language that is compatible with every other client on the network. Most importantly it is an identifier (a URN) that will not be confused with any other identifier. The easiest, decentralized way to do this is to publish reference implementation code and hash it. Ideally this implementation will isolate only the consensus part (validating blocks) and the hashing for-loop/P2P/storage will be a separate program/process/module.
Regarding contentious fork
This is the tongue-in-cheek explanation that best illustrates the problem with a contentious fork. Here is more detail on how an upgrade would work under a scheme where chainID = hash(code | genesis).
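A minimal sketch of that derivation (SHA-256 stands in for whatever hash the scheme would actually pin down; the client code and genesis values are made up for illustration):

```python
import hashlib

# Sketch of the proposed derivation chainID = hash(code | genesis).
# SHA-256 is a stand-in hash; inputs are hypothetical byte strings.

def derive_chain_id(client_code: bytes, genesis_hash: bytes) -> str:
    return hashlib.sha256(client_code + genesis_hash).hexdigest()

old_id = derive_chain_id(b"client-v1", b"genesis")
new_id = derive_chain_id(b"client-v2", b"genesis")  # same genesis, new software

# Upgrading the consensus software yields a new chainId automatically,
# even though the genesis is unchanged.
assert old_id != new_id
```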
All use cases assume the current block is 1,000,000 and the current chainId is 11af1af2989...
Use case 1: normal upgrade that everybody wants
Case study: Tangerine Whistle, block 2,463,000
Hudson publishes on Ethereum.org (it could actually be anybody publishing anywhere) a notice to upgrade your client for the Pimpled Frog upgrade at block 2,000,000. The new chainId, using the same genesis hashed with the new software, is 22af2af3562...
All miners upgrade to run the new software
From 1,000,000 to 1,999,999 the CHAINID opcode returns 11af1af2989...
Truffle, MetaMask, Opera and more update to know about 22af2af3562...
End users sign transactions using 11af1af2989... AND optionally 22af2af3562...
Some transactions get included in blocks up to 1,999,999.
After 1,999,999 all the pool transactions that were signed with 11af1af2989... are discarded by new miners.
The old network continues to exist and process 11af1af2989... transactions. But nobody cares about it.
Use case 2: Contentious fund recovery starts a new viable network
Case study: DAO Fork, block 1,920,000
Some miners upgrade
Some end-user software upgrades
Some end-users DO NOT sign transactions using the new chainId
Some people continue to care about the old network
Use case 3: Aborted upgrade
Case study: Constantinople upgrade, block 2,675,000
Most miners downgrade to previous software before block 2,000,000
The new network is created but nobody cares about it.
Use case 4: Failed contentious upgrade
Case study: This could conceivably happen if Stiftung Ethereum, Zug (ethereum.org) publishes a recommendation to retrieve funds from the Parity wallet.
Some miners upgrade
The new network is created and some people care about the old one and some people care about the new one.
Yeah, this was primarily what I was thinking would be valuable.
I definitely see your point here, but this does leave an upgrade path for systems that make use of this additional opcode. Since the opcode check (in the example EIP-712 use case) is for the present value of chainid, any off-chain transactions from the point of upgrade could re-target the new chainid, and all old messages would be unusable.
You are definitely right that this does not motivate a sustained fork to change its chainid, but the use of this opcode would allow for an “automatic upgrade” of that off-chain signing functionality, versus an immutable value set on deployment (or worse: a value maintained by the original developer). This may actually create a motivation for users of the opposing fork to convince operators of the new fork to upgrade, so that their application activities can be separated from those on the other chain. It’s a big game of chicken of course, but that added friction must get resolved one way or another.
This is all highly hypothetical of course, but interesting to think about.
I feel like this conversation may have again gotten a little off track.
This EIP is a net benefit because it aligns the domain separator protection of base layer transactions (e.g. an Ethereum transaction) with those of Layer 2 and metatxn signed messages. It avoids the human error of current solutions and is directly applicable to a widely accepted and imminently useful standard (EIP-712) that is being implemented in multiple libraries and clients.
There don’t seem to be any technical concerns with this approach: we’ve fully specified the implementation, the topic has been discussed on ACD, and it has been recommended for inclusion in Istanbul. Are there any issues with moving the status to Last Call with the proposal in its current form?
The specification is technically complete and is eligible to proceed to Last Call.
We don’t yet know which implementations will use this. But I suspect that all of them will introduce problems compounding the issues detailed above, especially off-chain and layer-2 transaction applications.
I’m still not sure I understand this sentiment. Taking EIP-712 as an example, if chainId is used in the domain separator, then it currently has to be either a deploy-time constant or a parameter upgradable only by some trusted third party with higher access control. A contract’s code or deployment procedure also has to account for this nuance when deploying to different test networks and the main network, so it very obviously introduces human error into the process.
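To illustrate the contrast, here is a toy sketch (plain Python; the hashing and field layout are simplified stand-ins for the real EIP-712 domain separator, and all names are hypothetical):

```python
import hashlib

# Toy contrast between a deploy-time chainId constant and a live CHAINID read.
# SHA-256 over a joined string stands in for the real EIP-712 hashStruct.

def domain_separator(name: str, chain_id: int) -> str:
    return hashlib.sha256(f"{name}:{chain_id}".encode()).hexdigest()

# Design 1: chainId baked in at deployment. Wrong if deployed against the
# wrong network, and permanently stale if a fork changes the chain's id.
DEPLOY_TIME_CHAIN_ID = 1
stale = domain_separator("MyDapp", DEPLOY_TIME_CHAIN_ID)

# Design 2: chainId read live (CHAINID opcode semantics); always matches.
def live_separator(current_chain_id: int) -> str:
    return domain_separator("MyDapp", current_chain_id)

assert live_separator(1) == stale    # identical while the ids agree
assert live_separator(61) != stale   # diverges after a fork / on another chain
```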
This proposal simply aligns extra-protocol message signing using chainId as a domain separator to in-protocol transaction signing that also uses it for the same purpose.
I agree with you that it is difficult to ensure chainId is updated on one fork in a contentious fork event, but that’s outside the scope of this proposal, and it also affects in-protocol signing in the same fashion. I would largely argue it has to be resolved one way or the other in this kind of event if the two forks are to co-exist peacefully as it’s the primary method for replay protection between chains.
By aligning off-chain and on-chain signing, there is more friction ensuring this eventually gets resolved.
First, everything we are saying is hypothetical because there are no deployed off-chain or layer-2 applications to study.
When they are available some will surely make the mistake of assuming that a chainID will not change ever. Similarly they will probably make the mistake of assuming that the consensus client will never change. These are unrelated but both demonstrate sloppiness.
My sentiment is simply that chainID currently has a known weakness. And this proposal is to weld the chainID onto the EVM.
At the same time, of course it is quite simple to deploy an oracle (using the same account on each network) to return the chainID. I would prefer this approach until applications are better understood.
How is it our job to babysit developers on how to use an opcode? The EVM is not a safe environment, everything is “use at your own risk”. You also don’t have to use the opcode if you don’t want to.
This seems like a poor solution to me. It relies on a trusted third party and uses an excessive amount of gas for a simple operation that is easily accessible contextual information already available in a transaction.
I don’t think this is good reasoning to block implementation of this opcode in the Istanbul fork. If the method of choosing chainId has a problem, that is out of scope for this EIP, and we should not be making value judgements about what is ultimately an established part of existing standards just because it has “potential” complications in a few corner cases that don’t even affect any existing applications (as you already noted!). This proposal doesn’t make it worse than what it already is.
Imagine a scenario where this proposal wasn’t implemented, and an alternative like an on-chain oracle were in use. If the value produced by the oracle mismatched what the protocol says, this would be a potential griefing attack. EIP-712 messages signed with a chainId domain separator would use the value the RPC provides to sign with, which now mismatches the chain. This means all messages that are signed would get rejected, and the oracle becomes a critical piece of infrastructure that limits the amount of value placed on any Layer 2 solution. This griefing mechanism exists even outside of a change to the value of chainId.
Conversely, this proposal implements it in the EVM as a feature, directly matching the protocol’s value with no additional trust required. Sounds like a safer option to me!
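A toy sketch of that griefing scenario (all names and values are hypothetical, and real signature verification is elided; the point is only the mismatch between the oracle's value and the protocol's):

```python
# Griefing sketch: a signer trusts an oracle's chainId, while the verifier
# uses the protocol's actual value (CHAINID opcode semantics).

PROTOCOL_CHAIN_ID = 1        # what the chain actually is

def sign_message(payload: str, chain_id_source: int) -> tuple:
    """Stand-in for EIP-712 signing with chainId in the domain separator."""
    return (payload, chain_id_source)

def verify(message: tuple) -> bool:
    """Stand-in for on-chain verification against the live CHAINID value."""
    payload, chain_id = message
    return chain_id == PROTOCOL_CHAIN_ID

ORACLE_CHAIN_ID = 2          # griefed or stale oracle: no longer matches

ok = verify(sign_message("transfer", PROTOCOL_CHAIN_ID))
rejected = verify(sign_message("transfer", ORACLE_CHAIN_ID))

# Every message signed against the bad oracle value is rejected.
assert ok and not rejected
```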
I propose an alternative:
Instead of having an opcode that returns the latest chainId, we should have an opcode that, given a chainId as input, returns true if that chainId is part of the history of chainIds of the chain, and false otherwise.
This is compatible with a hashing system like the one @fulldecent proposes, and it ensures off-chain messages signed in the past still work across future forks.
When there is a chainId update, wallets would still protect users by ensuring they sign with the latest chainId.
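A sketch of the proposed semantics (the history list and its ids are hypothetical, borrowed from the use-case examples above):

```python
# Sketch of the alternative opcode: test membership in the chain's full
# chainId history instead of returning only the latest value.

CHAIN_ID_HISTORY = ["11af1af2989", "22af2af3562"]   # oldest -> newest

def is_valid_chain_id(chain_id: str) -> bool:
    """Proposed opcode: true iff chain_id ever identified this chain."""
    return chain_id in CHAIN_ID_HISTORY

def latest_chain_id() -> str:
    """What wallets should use when signing new messages."""
    return CHAIN_ID_HISTORY[-1]

# Old off-chain messages signed before the upgrade still verify...
assert is_valid_chain_id("11af1af2989")
# ...while wallets steer users toward the newest id.
assert latest_chain_id() == "22af2af3562"
```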
I think a lot of these discussions revolve around the fact that currently Dapps are not designed to take chainId into account.
Mostly because they have adopted the pattern recommended by MetaMask of checking net_version instead, and they rely on MetaMask to refresh the webpage whenever there is a network change. Hence there are a lot of assumptions baked into how Dapps are developed nowadays.
Regardless, we should build standards that allow these patterns to change and provide best practices for how to handle the state of a Dapp. For example, a good start is how EIP-1193 includes an event subscription for network changes.
Additionally, Dapps should always check the current chainId with the node, using the eth_chainId method first introduced in EIP-695.
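For illustration, here is a sketch of that eth_chainId round trip at the JSON-RPC level (only the envelope is built and parsed; the HTTP transport to an actual node is elided, and the response shown is a hard-coded example):

```python
import json

# Sketch of an eth_chainId (EIP-695) request and response at the
# JSON-RPC level. No live node is contacted here.

request = json.dumps({
    "jsonrpc": "2.0",
    "method": "eth_chainId",
    "params": [],
    "id": 1,
})

# A main-net node would answer with the chainId as a hex quantity:
response = '{"jsonrpc":"2.0","id":1,"result":"0x1"}'   # example payload

chain_id = int(json.loads(response)["result"], 16)
assert chain_id == 1
```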
Given these progressive changes, we will see the need for EIP-1344 become more apparent as we design Dapps to track the chainId more closely, especially for meta-transactions and layer 2 solutions.
As already stated above, it’s actually very important for using EIP-712 messages.