When an EOA wallet is upgraded to an AA account, some contracts (maybe some DeFi protocols) may refuse to interact with a contract address. I think this may cause problems, for example the user being unable to withdraw assets from these protocols.
Why? Once migrated to a contract, per EIP-3607 (Reject transactions from senders with deployed code), the user will no longer be able to send any transactions from it. One could still meddle with permit, but the signature could be constructed in such a way that people can be certain no one holds the private key, e.g. by using keccak(x) to obtain it.
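The idea above can be sketched off-chain. This is a toy illustration in the spirit of Nick's method, with made-up names; note that Python's `hashlib.sha3_256` is only a stand-in for Ethereum's keccak256 (the padding differs), so real values would need a keccak library.

```python
import hashlib

# Stand-in for keccak256 -- NOT the real Ethereum hash (padding differs).
def keccak_standin(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

# Choose the ECDSA `r` value as the hash of a public string. Finding a private
# key whose signing nonce yields exactly this `r` is infeasible, so everyone
# can verify that no one holds a key for the address ecrecover derives from it.
PUBLIC_SEED = b"EIP-7377: provably key-less migration signer"
r = int.from_bytes(keccak_standin(PUBLIC_SEED), "big")

# `s` can likewise be an obviously-chosen constant.
s = int.from_bytes(b"\x07" * 32, "big")

# ecrecover(msg_hash, v, r, s) would then yield some address X whose funds can
# never be moved by an ECDSA signature anyone can actually produce.
assert r > 0 and s > 0
```

Anyone can recompute `r` from the public seed, which is what makes the "no one knows the key" claim auditable.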
I'm suggesting that it may be worth also adding an opcode like "CREATE_COPY" or something similar. Obviously it is out of the scope of this discussion, but it is a logical continuation to prevent users from resorting to hacks.
I feel like today's users (myself included) are very early on the multichain learning curve, and today's dapps rely on lots of permits in the wild. Considering the footgun potential of private keys not getting destroyed (or already having been exfiltrated, unbeknownst to the user making this upgrade), the messy long tail of those private keys getting used for replays, on other chains, in ecrecover, etc. all seems to need some kind of mitigation. If even Yoav can't think of any mitigation other than UX and education, I'm not optimistic any exist!
I like the emphasis here on good UX patterns and “recommendations” (from protocol decision to dapp designers and wallet designers, and then from dapps to end-users) being adequate mitigations, though. Dapp designers could, for example, point EOA users to check their own accounts against, say, these kinds of self-audit tools to close out permits and authorizations before migrating, or similar self-audit/self-education resources for cross-chain liabilities and auditing.
This could be a great example of a “Security Considerations” section that gets more detailed (and crowd-sourced) after the core technical design is stable and the EIP is in
review status. Once we get to that point, maybe the Secure Design WG hosted by CASA could contribute a little language that would be meaningful to UX pros?
Encouraging deployment by "creating a pk + fund it + deploy contract using 7377" is a risk, which is really bad if you don't change the behavior of ecrecover. The idea that DeFi protocols may be deployed that way, and that you could have doubts about the security of the key that was generated for that, is worrying.
I’m also worried that “cheap storage” might be synonymous with “underpriced storage” … and that any underpriced operation could be an attack vector.
Do you have pointers to protocols that protect their withdraw functions with
require(msg.sender == tx.origin) or
require(address(msg.sender).code.length == 0) ?
Since there is a good chance this will spread across a fork or two, could you
- parameterize the transaction type
- provide a notional SSZ layout
- put in some language that the tx body for hashing and encoding will follow whatever the current encoding conventions of new TXes are?
I want to give SSZ transactions space to be able to ship in Prague or Osaka.
Nope, I just proposed a possibility.
Yes, this is my main concern, given the “roll-up centric” roadmap.
Next, for each tuple in tx.storage and the sender’s storage trie, set storage[t.first] = t.second.
Would someone be kind enough to explain why we need to do the above, after setting the code?
Manipulating transaction origin
Many applications have a security check
caller == origin to verify the caller is an EOA. This is done to “protect” assets. While it is usually more of a bandage than an actual fix, we attempt to placate these projects by modifying the origin of the transaction so the check will continue performing its duty.
Relative layman here. Let me see if I get this straight:
Typically a check of require(msg.sender == tx.origin) is intended to ensure that the caller has no code and, thus, can’t pull off various types of smart contract trickery (don’t quiz me) throughout the transaction. Correct?
In this case, the migration transaction sets itself to some existing code (which may have a require(msg.sender == tx.origin) check), then, using value and data, calls this code/itself in some entirely customizable way.
The issue, then, is that within this call, msg.sender is equal to tx.origin (the require(msg.sender == tx.origin) check passes) even though code exists — something heretofore prevented.
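The reasoning above can be sketched with a toy model of call frames (names like `Frame` are illustrative, not EVM internals): origin is pinned for the whole transaction, while the sender changes per frame, which is why the check has historically implied "caller has no code".

```python
# Toy model of EVM call frames to make the tx.origin reasoning concrete.
class Frame:
    def __init__(self, origin, sender):
        self.origin = origin   # fixed for the whole transaction
        self.sender = sender   # the immediate caller of this frame

def top_level_call(eoa):
    # At the top level, the sender IS the origin...
    return Frame(origin=eoa, sender=eoa)

def nested_call(parent, contract):
    # ...but in a nested call the sender is the calling contract,
    # while origin stays pinned to the transaction's originator.
    return Frame(origin=parent.origin, sender=contract)

alice, dex = "0xalice", "0xdex"
f1 = top_level_call(alice)
f2 = nested_call(f1, dex)

assert f1.sender == f1.origin      # the classic "is an EOA" check passes
assert f2.sender != f2.origin      # and fails for any contract-mediated call
```

A migration transaction breaks the inference: the top-level sender now has code, so "sender == origin" no longer implies "no contract trickery".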
So when you say “we intend to placate by modifying tx.origin” what exactly are you proposing? What will tx.origin be in this single migration transaction? From what I can tell, the EIP doesn’t actually say.
And going forward, am I right in assuming that, in your proposal, the migrated EOA can never be tx.origin again? (Due to EIP-3607, I guess, since it now has deployed code?)
Also, why is it good/necessary to allow the migration transaction to call into the code it has made itself within the same transaction?
Under “Processing” the EIP says:
Now instantiate an EVM call into the sender’s account using the same rules as EIP-1559 and set the transaction’s origin to be
tx.origin is set to a hash of the address and is thus different from the address which is potentially used as
This seems reasonable to me
I’m unclear about the Manipulating transaction origin section:
Many applications have a security check caller == origin to verify the caller is an EOA. This is done to “protect” assets. While it is usually more of a bandage than an actual fix, we attempt to placate these projects by modifying the origin of the transaction so the check will continue performing its duty.
Is this change only within the context of the migration transaction, or will
caller == origin for any call received from the migrated account in the future?
Yes, but many ERC-20 tokens accept permit signatures as valid authorization mechanisms – thus the note about
ecrecover in the security considerations.
Yeah PRs absolutely welcome if you want to take a stab.
This is done to mimic what initcode does. It does some computation to figure out what storage slots to set, potentially manipulates the code to deploy, then returns the code.
Yes, this happens during a migration transaction. It is not necessary afterwards because the account will no longer be able to originate transactions, per EIP-3607.
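A toy sketch of that processing order (field names here are illustrative, not the EIP's exact encoding): install the code, apply the tx.storage tuples, then run the optional call so the account can finish its own setup atomically.

```python
# Toy model of 7377-style migration processing.
def apply_migration(account, tx):
    account["code"] = tx["code"]
    for slot, value in tx["storage"]:          # "for each tuple in tx.storage"
        account["storage"][slot] = value
    # The optional call lets the new account complete its setup (sanity
    # checks, approvals, etc.) in the same transaction.
    if tx.get("call_data") is not None:
        return run_call(account, tx["call_data"])

def run_call(account, data):
    # Stand-in for an EVM call into the freshly installed code.
    return ("called", account["code"], data)

acct = {"code": b"", "storage": {}}
result = apply_migration(acct, {
    "code": b"\x60\x00",                       # placeholder runtime code
    "storage": [(0, 1), (1, 42)],
    "call_data": b"setup()",
})
assert acct["storage"] == {0: 1, 1: 42}
```

This mirrors the "mimic initcode" point: normal initcode computes its storage and then returns the code, whereas here the code lands first and the tuples play the role of that computed storage.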
I agree about not further enshrining EOAs. But I believe this was an argument against 3074 adding features to EOAs rather than migrating away from them. An opcode that is only useful for migrating away from EOAs can hardly be seen as enshrining them.
It does enshrine ECDSA for the migration process, but so does 7377 or any other form of migration, since ECDSA is the only way to prove ownership of the EOA. At least it means that this ECDSA will be used one last time and become useless on the current chain.
Would it make sense to have two equivalent EIP candidates, one with transaction type and one with an opcode (like 5003 but without 3074, just the migration part), with everything else being equal except for the invocation method? Core devs can then compare the two approaches and pick the one that makes the most sense.
That’s a good motivation, if we can make it safe and prevent it from becoming just a cheaper way to deploy contracts.
Could we achieve a similar result by saying that SSTORE is priced 15000 during the top level context of this transaction type (so it can only be SSTOREs within the account, not 3rd party)? The transaction type already has cheap SSTOREs in the account, combined with code execution, so it seems equivalent.
I don’t think it’s tooling. Slots are deterministic but with mappings you can’t know them unless you know the keccak preimage. E.g. a Gnosis Safe gets initialized and you see 21 high slots set to “1”. Are they 21 signers whose keys you don’t know/remember? Or maybe 20 signers and a module address that could exec anything in the Safe? No way to derive this info from the deployment transaction, so you can only verify it if you know the initial configuration which was calculated off-chain.
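The slot derivation in question can be sketched as follows. This is a toy illustration: `hashlib.sha3_256` is a stand-in for Ethereum's keccak256 (different padding), and the mapping position used is an example, so the numeric slots won't match real chain data.

```python
import hashlib

def mapping_slot(key: int, mapping_position: int) -> int:
    # Solidity derives mapping slots as keccak256(abi.encode(key, position)).
    # sha3_256 is used here only as a runnable stand-in for keccak256.
    preimage = key.to_bytes(32, "big") + mapping_position.to_bytes(32, "big")
    return int.from_bytes(hashlib.sha3_256(preimage).digest(), "big")

owner = 0xF39FD6E51AAD88F6F4CE6AB8827279CFFFB92266  # default hardhat key
slot = mapping_slot(owner, 2)  # e.g. an `owners` mapping at position 2

# Seeing storage[slot] = 1 on-chain tells you nothing about `owner`:
# the hash can't be inverted, so you must already know the preimage.
assert mapping_slot(owner, 2) == slot            # deterministic...
assert mapping_slot(owner + 1, 2) != slot        # ...but unlinkable to keys
```

This is exactly why 21 high slots set to "1" can't be decoded into signers vs. modules without off-chain knowledge of the keys.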
Here’s an old demo Safe I deployed, which has 21 signers but you have no way to know who they are or whether some are modules rather than signers. (Hint: 20 of them are the default hardhat keys, e.g. 0xf39fd6e51aad88f6f4ce6ab8827279cfffb92266, but you couldn’t derive this from on-chain information if I didn’t tell you).
As a user, I’d be more confident about my account if I can verify it on-chain and not rely on my memory of the time I deployed it. And it becomes more important when dealing with a DAO multisig where the original person who deployed it may no longer be around.
I share that concern. Having a migration path is more important. We’ll have to solve tx.origin at some point if we want AA to become a 1st class citizen, but it doesn’t have to be in this EIP if it becomes a roadblock.
If the key is the only problem, it can be addressed as you suggested above, using a variation of the old “Nick’s method”. Anyone would be able to verify that the contract doesn’t have an ECDSA key by verifying that the signature is human readable.
But will core devs be ok with having a cheap storage method during contract deployment?
Right. Maybe a Dedaub audit similar to that of tx.origin could confirm that it’s safe: basically, replay all past transactions while adding 2600 to the cost of the ecrecover precompile, and see if the outcome changes in any way other than the sender’s ETH balance.
Correct, and often doesn’t achieve its goal. Some projects added this check as a knee-jerk reaction to flashloans when they came out. But a miner (or now block builder) could always perform the same attack by bundling transactions to bring liquidity, perform the attack, and pull the liquidity out.
The EIP suggests making it a hash of the original sender.
For now… But a future EIP for native account abstraction will likely make it possible for smart accounts to have their own tx.origin. It’ll be bad practice to assume otherwise.
To complete its setup. It might need to perform sanity checks (and revert the migration if needed), or to perform a call to a 3rd party contract, such as creating an ERC-20 allowance for some paymaster that it’s going to use from now on.
Or simply as a UX improvement: the user is migrating the account in the context of doing something with it. Why require signing twice (once for the migration and then for an operation) when you can sign just once?
We’ve discussed the risks of ecrecover and that they could be mitigated by making ecrecover revert if the recovered address has code. But here is another potential attack or backdoor vector that should be mentioned. The attacker creates an EOA X and approves account Y to spend X’s balance of some ERC-20 token. They then use a migration transaction to deploy a protocol at address X; the allowance stays dormant and is not visible in the verified source code, but it can later be used to steal tokens held by the protocol.
This can be mitigated by looking at transactions sent by X prior to its migration, before trusting the verified code. But now consider that instead of approve, the attacker can sign a permit instead and submit it using a different account. X does not have any transactions prior to the migration, and the only way to find this backdoor is to look at the allowances of X in the relevant tokens. The proposed ecrecover changes don’t mitigate this because the signature is used before the migration.
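The attack above can be simulated with a toy ERC-20 model (pure illustration; `Token` and the migration step are made-up stand-ins for the real contracts involved):

```python
# Toy ERC-20 model showing the dormant-allowance backdoor.
class Token:
    def __init__(self):
        self.balances = {}
        self.allowance = {}          # (owner, spender) -> amount

    def approve(self, owner, spender, amount):
        self.allowance[(owner, spender)] = amount

    def transfer_from(self, spender, owner, to, amount):
        assert self.allowance.get((owner, spender), 0) >= amount
        assert self.balances.get(owner, 0) >= amount
        self.allowance[(owner, spender)] -= amount
        self.balances[owner] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

token = Token()
X, Y = "attacker_eoa_X", "attacker_account_Y"

# 1) While X is still an EOA (or via a permit submitted from elsewhere),
#    the attacker grants Y an allowance over X's tokens.
token.approve(X, Y, 10**18)

# 2) X is "migrated" into a verified protocol contract. The allowance is
#    nowhere in the verified source -- it lives only in the token's storage.

# 3) Later, once the protocol at X holds user funds, Y drains them.
token.balances[X] = 500
token.transfer_from(Y, X, Y, 500)
assert token.balances[Y] == 500 and token.balances[X] == 0
```

The key point is step 2: the backdoor lives in the token's storage, not in X's code, so source verification of the deployed protocol reveals nothing.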
Due to these issues, if the EIP is adopted as currently proposed block explorers might do well to show a warning on all contracts deployed via migration. This might be a good deterrent so that the new tx type isn’t used to deploy normal contracts. Would such a warning be acceptable for user accounts that were migrated?
Something that came up also.
Some contracts implement operations that target a fixed address, with the assumption that it is not transferable. For example, vesting contracts may have an immutable recipient, and in some cases people may require that the target of the vesting is an EOA (by requiring a signature) to make sure that the recipient has no way to sell its future (currently vesting) tokens.
If the EOA has the ability to deploy an ownable, transferable account contract, this assumption is broken. Honestly, any AA migration workflow would break that assumption, so the issue is not specific to this EIP.
TLDR: AA migration will challenge assumptions that were made (and are probably still made) about EVM behavior. People making these assumptions don’t expect them to change for years. If this EIP goes through, this will have to be documented and shared.
This highlights the weirdness of adding special cases to ecrecover: it makes ecrecover no longer a deterministic function: a valid signature becomes “invalidated” by deploying a contract.
I could envision this creating an attack vector in a system that handles off-chain signatures before settling on-chain, such as payment channels.
Yep, great point — that completely nukes the possibility of nerfing it. Thinking about this more, there is nothing stopping someone from giving an off-chain signature to someone now and then just draining the account. A signature as collateral is only secure if the funds are encumbered by some resolution mechanism (like a payment channel).
The interplay with EIP-1271 signatures is also interesting and something to consider.
In order to avoid an unnecessary cold account access in the (currently) common case, some implementations of EIP-1271 signature validation will first try ecrecover and only then try calling
isValidSignature (example here, and pretty sure I’ve seen it elsewhere too). But if you migrate your account to a contract you would want the contract EIP-1271 signatures to take priority, and this code will not do that.
It seems pretty clear that ecrecover should not change, though. We can implement contracts so that EIP-1271 signatures take priority (which, btw, are already revocable, so EOA migrations do not add a new attack vector), but existing contracts that either use ecrecover only, or EIP-1271 with the pattern I described above, will be unaware that an EOA was “decommissioned” via migration.
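The verification-ordering pitfall can be sketched as follows (`check_ecdsa`, `check_1271`, and `has_code` are made-up stand-ins for the real primitives): a verifier that tries ECDSA first keeps honoring the old EOA key after migration, while a migration-aware ordering gives the contract's EIP-1271 logic priority.

```python
def is_valid_sig_ecdsa_first(account, sig):
    # Pattern from the post above: cheap ecrecover path first...
    if check_ecdsa(account, sig):
        return True
    # ...and only then the (cold-account) EIP-1271 call.
    return has_code(account) and check_1271(account, sig)

def is_valid_sig_1271_first(account, sig):
    # Migration-aware ordering: contract logic takes priority.
    if has_code(account):
        return check_1271(account, sig)
    return check_ecdsa(account, sig)

# A migrated account that has revoked everything via its 1271 logic,
# but whose old private key can still produce valid ECDSA signatures.
migrated = {"code": True, "valid_1271": set()}
old_key_sig = "sig_from_the_pre-migration_private_key"

has_code = lambda a: a["code"]
check_1271 = lambda a, s: s in a["valid_1271"]
check_ecdsa = lambda a, s: s == old_key_sig

# The legacy ordering accepts the decommissioned key's signature; the
# migration-aware ordering correctly rejects it.
assert is_valid_sig_ecdsa_first(migrated, old_key_sig) is True
assert is_valid_sig_1271_first(migrated, old_key_sig) is False
```

For a plain EOA both orderings agree, which is why the legacy pattern looks harmless today and only misbehaves once migration exists.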