EIP-1153: Transient storage opcodes

I do think it will be easy to make pathological hard-to-analyze code here.

But I don’t think these types of examples are what people will be trying to do formal verification on, and I think you could make the same or similar examples using normal storage.

Double-edged sword I guess.


If the TSTORE opcode is called within the context of a STATICCALL, the call must revert.

This wording differs from how STATICCALL handles writes in static contexts as defined in EIP-214. Is the behaviour intended to be different?


It’s not intended to be different, and I can adjust it to this if that sounds more accurate:

If the TSTORE opcode is called within the context of a STATICCALL, it will result in an exception instead of performing the modification.

I’d certainly appreciate it, but I’m also a pedant ;3


I don’t think the language semantics get much messier: this storage behaves the same as indexing a variable by the transaction. The transaction hasn’t appeared as an index before, so symbolic analysis based on model exploration will have to change, but it doesn’t seem that bad to me.


PEEPanEIP #91: EIP-1153: Transient storage opcodes with @moodysalem


So I spoke about this at some length with @moodysalem this week and I want to say I do support this proposal in principle. Here’s why: since transactions are the unit of atomicity in the EVM, it makes sense to have a data location which is also transaction scoped. It allows the developer to “reason about transaction atomicity”, which, as demonstrated by both the existence of reentrancy bugs and techniques to deal with them, is an extremely useful thing to be able to reason about. (In fact, one could argue that memory should have been transaction-scoped to begin with, although it’s a bit late for that).

By way of example, another use case this enables is “critical sections” - scoped sections of code which, while active, do not allow reentrancy into the contract at all (by checking a transient storage slot before entering the selector table). This is possible with regular storage of course, but it incurs the cost of an SLOAD at every single call to the contract.
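The critical-section pattern can be sketched with a toy Python model (not EVM code): a dictionary stands in for transaction-scoped storage, and a flag in it acts as the reentrancy lock that is checked on every entry and discarded at the end of the transaction.

```python
# Toy model of the "critical section" pattern: a transient slot acts as
# a reentrancy lock, checked before entering the contract's logic.

class ReentrancyError(Exception):
    pass

class Contract:
    def __init__(self):
        # Stands in for transient storage: per-transaction, never persisted.
        self.transient = {}

    def guarded_call(self, reenter=False):
        # Check the lock slot before entering the "selector table".
        if self.transient.get("lock"):
            raise ReentrancyError("reentrant call blocked")
        self.transient["lock"] = 1
        try:
            if reenter:
                # Simulates an external call that tries to reenter us.
                self.guarded_call()
            return "ok"
        finally:
            self.transient["lock"] = 0

    def end_transaction(self):
        # Transient storage is discarded when the transaction ends.
        self.transient.clear()

c = Contract()
assert c.guarded_call() == "ok"
try:
    c.guarded_call(reenter=True)
    blocked = False
except ReentrancyError:
    blocked = True
assert blocked  # the reentrant call was rejected
```

With regular storage the lock would work the same way, but every entry would pay an SLOAD; with transient storage the check is cheap and the slot never touches disk.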

So if you think of transient storage as a tool for reasoning about transactions, transient storage not clearing after every call might be a feature, not a bug, since you can trace information about a txn (e.g. how many times a contract has been called in a particular txn - which, if I am not mistaken, is not currently possible with existing opcodes in the EVM). I do think that the concerns voiced about making it potentially harder to reason about contracts are valid! But maybe this complexity is a basic complexity of smart contract development that needs to be reasoned about anyway, and by adding this data location to the EVM we are just making it explicit.

I do have the issues with the API that I voiced above. I also think “transient storage” is a confusing name, since the scope of the data is much more like memory than storage as far as most programmers would be concerned - the whole point of the proposal is that data is never “stored” to disk. A better name might be “long-lived memory” or simply “transaction-scoped memory”. Lastly, I am unconvinced that this proposal is strictly better than other proposals which provide some sort of transaction scoping, for instance the TXID proposal from Nikolai (rest in peace). I haven’t considered the alternatives long enough. But this proposal may indeed be the happy middle ground in terms of usability and the use-cases it enables.

As far as language implementation goes, Vyper team is happy to support it at the language level. As has been pointed out, our existing implementation is a PoC. In principle, it works! And you can probably use it to try out the feature! But we will not officially release as a language feature or put the level of effort necessary for production until the EIP is scheduled for a fork.


I think using transient storage for “memory” mappings just because it has different addressing semantics from actual memory is an anti-pattern that can and will lead to the type of bugs that several people have raised concerns about in this thread. As I mentioned before, the lack of memory mappings in SC languages so far is a language restriction, not a VM restriction. You can see that C, C++, Python, Java, Rust, etc., all manage to implement map data structures with linear, not associative memory. As a language implementer myself - I would prefer that transient storage addresses the same way as memory, and to provide memory mappings as a language feature instead of having people fall back to transient storage mappings. Ultimately - transient storage should be used for things that require transient storage, and memory should be used for things that require memory. I think here is where the abstraction might leak, if developers are reaching for transient storage because of its addressing semantics instead of its volatility semantics.


I’m not sure I really follow what these two messages say. I think the two points you want to make are: (1) transient storage might not be the best name, and (2) programmers reaching for this as an alternative to memory might be surprised by the semantics, so we should change the transient storage semantics to match those of memory to avoid the temptation. I’d personally be happy to consider renaming to “transaction-scoped storage” if that conveys the intent better.

I don’t agree with changing the addressing model to match that of memory, especially not for the reason that the mismatch might lead people to misuse it. First, compilers already have code generation that works with storage as addressed today, so it would be easy to take that code and have it emit TSTORE/TLOAD instead to interact with storage marked transient. Using something memory-like would be a higher lift, but I don’t maintain a compiler, so I’m happy to be corrected on this point. I also definitely want to be able to do storage-like things, like putting structures in there. You might say that in the future we can do it with memory-like addressing, but that’s not ready now.

I don’t think the temptation argument really works. If programmers want memory with storage-like addressing, the right solution is to give them what they want, not take away transaction-scoped storage for fear they will use it instead. They are adults capable of making their own tradeoff decisions and ending up with bugs as a result.

There is a strong reason to use associative addressing in Ethereum, namely cost alignment. Computation in the EVM is expensive. Having contracts reimplement associative maps in the EVM when the client’s Go code implements them much more efficiently is a mistake, imho. We should expose operations that are useful, with costs reflective of the actual implementation and evaluation costs, rather than require a number of expensive opcode invocations to achieve the same result.

I also think associative addressing avoids the difficult allocation and reallocation problems that linear memory, with its expansion costs based on the range spanned, invites programs to run into. It’s just easier for everyone.


Can we get an update on both the client/EVM testing efforts and the DoS concerns that have been raised previously?

I mentioned it on Twitter just now (literally a minute ago, so don’t expect replies yet), but will leave the link in case anyone answers on there: https://twitter.com/hudsonjameson/status/1602366049911017496?s=20&t=6Dd1J9kgY8fBI2aEfOK8GA

@holiman: Expressing DoS concerns in April: Shanghai Planning · Issue #450 · ethereum/pm · GitHub

@moodysalem’s open PR with tests (seemingly just manual client tests, but it has a few mentions of DoS stuff): https://github.com/ethereum/tests/pull/1091


The semantics of the transient storage opcodes as currently proposed don’t do anything that the existing storage opcodes don’t already do in terms of node memory usage. Both are bound to their respective accounts, persist across successful calls, and lead to O(n) effort upon reverts. The main difference is that transient storage has a lower upfront gas cost because it never needs to read from or write to permanent storage.

This means that, unlike storage, there can be a larger set of changes that may need to be reverted in total (the “journal”). However, this effort grows proportionally to the total number of TSTOREs possible in one transaction, meaning it is / should be priced into the opcode. Based on the threads you shared, this seems to be the main root of uncertainty around whether or not EIP-1153 could be a DoS vector.
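The journal mechanism under discussion can be illustrated with a small sketch (not client code): each TSTORE appends an undo record, so reverting a call frame walks the journal backwards and costs O(n) in the number of writes made inside that frame.

```python
# Sketch of a journaled transient store with checkpoint/revert semantics.

class TransientStore:
    def __init__(self):
        self.data = {}
        self.journal = []  # (key, previous_value) undo records

    def checkpoint(self):
        # A checkpoint is just the current journal length.
        return len(self.journal)

    def tstore(self, key, value):
        # Record the old value before overwriting: this is the per-write
        # cost that the TSTORE gas price must cover.
        self.journal.append((key, self.data.get(key, 0)))
        self.data[key] = value

    def tload(self, key):
        # Unset slots read as zero, like EVM storage.
        return self.data.get(key, 0)

    def revert_to(self, checkpoint):
        # Walk the journal backwards, restoring previous values: O(n)
        # in the number of writes since the checkpoint.
        while len(self.journal) > checkpoint:
            key, prev = self.journal.pop()
            self.data[key] = prev

ts = TransientStore()
ts.tstore("a", 1)
cp = ts.checkpoint()   # enter a sub-call
ts.tstore("a", 2)
ts.tstore("b", 3)
ts.revert_to(cp)       # sub-call reverted: its writes are undone
assert ts.tload("a") == 1 and ts.tload("b") == 0
```

On success the undo records are simply kept, so an outer frame can still revert the whole range; this is the same shape as the storage journal clients already maintain for SSTORE.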

If it does look like TSTORE is priced too low at 100 gas, then arguably the SSTORE opcode’s warm, dirty price should also be changed.


With how storage reverts are currently implemented in geth, any DoS issue that exists for TSTORE will also exist for SSTORE. However there is no DoS issue with geth. This is covered in the EIP text.

There is also a test specifically for the worst-case O(n) revert scenario in the ethereum/tests PR, which is run against all clients. We are pretty certain that there is no DoS issue, but having this on a multi-client testnet will allow us to verify further. The EIP is blocked from merging into 2/5 clients because it’s not yet included in a HF.

Documenting another use case for transient storage I stumbled upon: Add generic parameter to IBlockhashOracle interface · Issue #15 · paradigmxyz/zk-eth-rng · GitHub

This is similar to the fourth use case in the latest draft of the EIP:

  1. Fee-on-transfer contracts: pay a fee to a token contract to unlock transfers for the duration of a transaction

More generally it might be stated:

  1. Unlocking actions within the same transaction: a fee-on-transfer token contract might require a fee to be paid before unlocking a certain amount of token transfers, or a specific implementation of an oracle interface might require a proof to be submitted before a value can be read

Would appreciate feedback

Hey, I am not sure I understand how using something like the Weiroll VM would introduce security concerns. Couldn’t you also just write an off-chain DSL for the calldata and map it however you like? We built something like the Weiroll VM for our transient storage needs at Primitive and are happy with how it works. In fact, because it is an FSM, we can reason about its correctness much more powerfully than we could if it were a new opcode.

For reference: GitHub - primitivefinance/portfolio: On-chain portfolio protocol for risk and liquidity management. FVM.sol is the file where we make our own VM to handle these challenges.

Hey all, I’m the cofounder of Goldfinch. I wanted to share a potential use case that I think could be used to really open up the smart contract architectural design space, but which I don’t think I see in the EIP. I apologize if this has been discussed above.

The case I’m envisioning is a group of smart contracts that make up an on-chain app being able to work together more seamlessly, by storing certain user- or transaction-level data up front, with downstream contracts then able to access it, knowing it’s correct. Sort of like how a global request object might be used in a traditional web app. This pattern could allow key data to be shared across an app’s network of smart contracts, allowing for the modularity and limitless size of the diamond pattern, but without needing any Solidity tricks or complexities, as well as enabling better permissioning across contracts.

For example, imagine contract A is called, and msg.sender = 0xABC. Then contract A calls out to contract B in order to do some calculation. Contract B may need to verify that the original msg.sender is in fact 0xABC. But there’s no good way to do this in current Solidity. You could use tx.origin, but that has security issues from phishing, and is therefore frowned upon now. You could pass msg.sender to every single downstream function or contract, but that is pretty gross and complicates your functions.

The other option is to create a global config contract that all your other contracts have access to. Indeed, we tried this for a while with our smart contracts. The issue with this approach though is that all of your functions now require a storage slot to be written, and thus you can’t really have view functions. Which breaks a lot of things, and loses your ability to make those guarantees to others.
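A minimal sketch of this “global request object” pattern (a toy Python model, not Solidity): an entry contract records the original caller in transaction-scoped storage, and a downstream contract reads it instead of trusting tx.origin or threading msg.sender through every function signature.

```python
# Toy model: a dict stands in for transaction-scoped (transient) storage.
transient = {}

def end_transaction():
    # Transient storage is discarded automatically at transaction end.
    transient.clear()

def contract_a_entry(msg_sender):
    # Entry point: record the original caller once, up front.
    transient["original_sender"] = msg_sender
    return contract_b_calculate()

def contract_b_calculate():
    # Downstream contract verifies the original caller without it being
    # passed as an argument to every function.
    sender = transient.get("original_sender")
    assert sender is not None, "must be called via an entry point"
    return f"computed for {sender}"

assert contract_a_entry("0xABC") == "computed for 0xABC"
end_transaction()
assert "original_sender" not in transient
```

Whether such reads and writes can still count as `view` is exactly the open question raised below: under the current spec TSTORE is a state modification for the duration of the transaction, even though nothing survives it.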

What I’m hoping is that if Transient Storage is guaranteed to be thrown away at the end of a transaction, then the compiler could still deem such a function to be a view function. Is that the case with the transient storage? I did not see any direct discussion of how transient storage would interact with view functions either here or in the EIP. What is the story there?

Thanks! - Blake

Currently, the rules indicate that reading via TLOAD is view, and writing via TSTORE is the default, i.e., neither pure nor view.

The write cannot be made view unless the interaction with staticcall is changed. Currently staticcall + tstore reverts according to the spec.

I see. I am not deep into the implementation to understand the implications of changing the interaction with staticcall. But has there been a discussion elsewhere about this? Would there be significant downsides to changing this behavior, such that a tstore at least could be considered a view, assuming the compiler could guarantee that the slot is returned to zero at the end?

As mentioned in EIP-1153: Transient storage opcodes Specification

TLOAD pops one 32-byte word from the top of the stack, treats this value as the address, fetches 32-byte word from the transient storage at that address, and pops the value on top of the stack.

In the context of a stack, the second use of the word pop is confusing: the value is placed onto the stack, not removed from it. It would be better to replace it with the word push.
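To illustrate the intended behaviour, here is a tiny sketch of the TLOAD stack semantics the quoted sentence describes: one word (the address) is popped, and the loaded value is pushed.

```python
# Minimal model of TLOAD's stack effect: pop the key, push the value.
def tload(stack, transient):
    key = stack.pop()                     # pop the 32-byte address
    stack.append(transient.get(key, 0))   # push (not "pop") the value

stack = [0x01]
transient = {0x01: 42}
tload(stack, transient)
assert stack == [42]  # the key was consumed, the value was pushed
```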


Fwiw, this is a EVM feature we would surely benefit from in Superfluid protocol.

Currently, in order to keep some information between several external calls, we pass a bytes memory ctx around, secured by a “bytes32 stamp”.

Hi everyone, I hope that this is the right place to post this.

We would like to propose two update options for EIP-1153.

Motivation for the changes

The current version’s description, motivation, and reference implementation sections state that “transient storage must behave identically to storage”. However, the specification lacks a “not allowed in low gasleft state” requirement as in EIP-2200, which breaks the equivalence of TSTORE and SSTORE semantics.

That allows for reentrancy with 2300 gas left, e.g. Solidity’s transfer and Vyper’s send functions are no longer reentrancy-safe. This can also affect contracts that use transient storage but not such functions, namely by breaking the assumed trust model of the contracts they interact with (e.g. an exchange). Existing contracts are not affected, but they need to be careful when interacting with new contracts.
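The gas arithmetic behind this concern can be sketched as follows (a toy model, not client code): under EIP-2200, SSTORE fails when gasleft is at or below the 2300 stipend, so persistent state cannot change inside a transfer() callback, but a 100-gas TSTORE fits comfortably, so transient state can.

```python
# Toy model of the gasleft rules discussed above.
SSTORE_SENTRY = 2300   # EIP-2200: SSTORE fails if gasleft() <= 2300
TSTORE_COST = 100      # proposed TSTORE cost, with no such sentry

def sstore_allowed(gasleft):
    # Persistent writes are blocked inside the 2300-gas stipend.
    return gasleft > SSTORE_SENTRY

def tstore_allowed(gasleft):
    # Transient writes only need to afford the opcode's gas cost.
    return gasleft >= TSTORE_COST

stipend = 2300  # gas forwarded by Solidity's transfer() / Vyper's send()
assert not sstore_allowed(stipend)  # storage cannot change in the callback
assert tstore_allowed(stipend)      # but transient storage can
```

This is why Option 1 proposes adding the same sentry to TSTORE: it would restore the property that no observable contract state can change within the stipend.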

Option 1: Enforce a gas left requirement

  • Suggested Change: Add a sentence similar as in EIP-2200. “TSTORE is not allowed in low gasleft state to keep the equivalence with SSTORE.” Additionally, update the reference implementation.
  • Rationale: Prevent breaking assumptions about reentrancy in low-gasleft state - an assumption relied upon mainly by Solidity’s transfer() and Vyper’s send().
  • Implementation changes: Expected to be minor.

Option 2: Clarify that it is not identical

  • Suggested Change: Replace “identically” with “similarly”. Clarify the differences in (at least) the specification section. Further, add a paragraph dedicated to the application layer in the security considerations section.
  • Rationale: Move away from these assumptions. Raising awareness of the differences and letting developers decide on how to handle them.
  • Implementation changes: None.

Please let us know if we should further clarify, and if we should write a PR after the discussion.
Feel free to read the related blog post for more details and see our repository with the examples. Or reach out to me or @hritzdorf.