EIP-1153: Transient storage opcodes

Is everyone aware of the potential problems, new complexity etc., and OK with that, or just interested in gas savings? Ofc I can’t say for sure either way, and won’t assume, but tweets that likely didn’t warrant proper diligence rather sound like the latter. Also, clients and compilers adding code support doesn’t mean support for the EIP getting in, as can be seen with the Solidity PR also just sitting there.

Huh? Why is that?

With all the deserved due respect to Nick and Frangio, screenshots of texts and single sentences of the form “sounds simple” hardly show diligence.

I’d for example be interested in the position of people working with formal EVM semantics, like e.g. Runtime Verification. Carrying another kind of state with different semantics through calls of course adds to complexity there. Be it manual or not, it basically doubles the effort of proving invariants of a smart contract, since they become contingent on the position in a transaction call chain. Is all of that manageable? Theoretically, sure. Is it trivial? Depending on what you do, maybe - or maybe very much not. Is that enough reason for being sceptical about this EIP? For me it is; in general it may be, but doesn’t have to be.

But anyways, I actually don’t quite understand the aggressiveness. Of course I’m aware that this has the support of application developers - but that doesn’t mean it has to universally be considered a good move (especially in any particular fixed variant).

As for memory mappings: I’ve even seen this mentioned as a use case, and I think you’re underestimating the cost of iterating lists. I don’t see the cost as the limiting factor here, unless it’s priced basically back at the level of a cold storage read of a zero - a reduction of the address space would be, either for the opcodes themselves or via the EOF-based marking of storage slots as an alternative.
Storage repricing and refund adjustments are also still alternatives that have come up often. They are complex and not nice, sure, but for modelling storage not merely as disk space but as cached disk space - which accurately reflects what happens in clients - I don’t see them going anywhere regardless. I’d also be interested in how this changes, for better or worse, with Verkle trees.

In any case, I’m mainly pointing out that this EIP doesn’t only have fans (and the response IMHO actually reaffirms that this is more than necessary). It’s not like you will or have to have unanimous support in the end, but as I said originally, I hope the decision takes the price of this in terms of complexity into account (which is surely not just zero :-)).

To clarify: This by no means intends to disqualify anyone’s opinions. Everyone on the “support” list is ofc 100% knowledgeable and capable of judging this. However, I do believe that scattered support like screenshots and tweets is different from structured support, but that’s just my opinion. I’m completely fine with being on the “wrong” side here :slight_smile:

Probably closer to 10 than 50, but when was the last time you saw the need for an in-memory map in Solidity? Particularly one of any size? I don’t expect this to practically be used solely for in-memory maps, but regardless, you can always clear the entries in the map before the end of the call.

You can’t reprice out a wasted storage load for a value that is always known to be 0 at the beginning of a transaction. @Philogy explains this rationale a few posts above.

If you’re marking some storage slots as transient via EOF, what is the practical difference? It seems to have all the same issues, except you can’t tell from the opcode alone if it’s a transient slot or not, so it seems even harder to analyze. Edit: furthermore, you cannot have mappings with keys that are determined at runtime (e.g. token addresses).

Storage refunds may stick around for original-new-original writes, but not necessarily zero-nonzero-zero writes if transient storage is available. This simplification alone would make storage refunds much easier to talk about since it removes a branching condition. Alternatively, with transient storage, SSTORE/SLOAD pricing could be made completely dumb about caching and the contract can use a TSTORE/TLOAD mapping to move it to the application layer.
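The branch removal described here can be sketched as a toy classifier for the refund cases (the condition structure is simplified from EIP-2200/3529 purely for illustration; the real rules have more cases and concrete gas constants):

```python
def refund_case(original, current, new):
    """Classify which refund branch an SSTORE hits, in a simplified model.

    original: slot value at transaction start
    current:  slot value before this SSTORE
    new:      value being written
    """
    if new == current:
        return "no-op"
    if original == 0 and current != 0 and new == 0:
        # the zero-nonzero-zero branch the post suggests transient
        # storage could make obsolete
        return "zero-nonzero-zero"
    if new == original != 0:
        # restored to a nonzero transaction-start value
        return "original-new-original"
    return "none"
```

With TSTORE available for transaction-lifetime values, the second branch could in principle be dropped, leaving one fewer case to reason about in refund accounting.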

As for storage pricing and refunds: the main point there is that you’re left with one cold load no matter what refunds and whatnot, right? Could a cold load from a full zero page in Verkle trees be cheaper as well, hence allowing for reserving full pages as transient storage? Honestly, I have no idea; maybe not. Maybe even if it could, it’s messy and not a good idea to go that way. But that’s not my area of expertise. The advantage of this direction is that the clearing is fully explicit, the EVM semantics stays simpler, and there are no implications for composability, as Chris hinted at. (Of course at the cost of more complex gas accounting no matter what, but that complexity is actually less relevant for analysis and FV, since you can usually assume to have enough gas for that purpose anyways.)

As for memory mappings: a full loop iteration for traversing a list I’d guess at 10-20 gas, so with 5 to 10 elements you’re at 100 gas worst case, 10 to 20 average case - that’s not a long way off. (Granted, that’s only comparing reading the thing; writing to it beforehand would be considerably more costly in a transient storage version as well.) But anyways, of course you don’t see memory maps around now, since they’re costly and a pain to implement right now - which may change with the ability to abuse transient storage for them, that’s the main point :-). And if you do that, clearing the map is actually the costly part (since then you need to keep and traverse a list of keys), so you’re quite likely to just not do it. (And I mean, implementing cheap memory mappings in actual memory has been requested from us; it’s just not feasible with the current memory design, but it’s not too crazy a concept in general.)
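The back-of-the-envelope above can be written out, using the 10-20 gas per iteration guess from the post and the EIP’s proposed 100 gas for a transient load (both are assumptions of this sketch, not measured numbers):

```python
GAS_PER_ITERATION = 15   # assumed midpoint of the 10-20 gas estimate
TLOAD_COST = 100         # proposed transient storage load cost

def list_lookup_gas(n_elements):
    # worst case for a linear scan: touch every element
    return n_elements * GAS_PER_ITERATION

def map_lookup_gas():
    # one keyed TLOAD, independent of how many entries exist
    return TLOAD_COST
```

Under these assumptions the crossover sits around 100 / 15 ≈ 7 elements: below that, scanning a memory list is cheaper; above that, the transient-storage map wins on reads.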

Maybe that concern is ultimately unwarranted, but I’d maintain that it’s a valid concern in any case.

To avoid this issue alone, restricting the address space would help. I.e. if I just don’t have enough transient storage available (either by restricting the address range of transient storage explicitly, or implicitly by only allowing a small number of storage slots to be marked as transient via EOF), I can’t abuse it in this way. On the other hand, one of the use cases I’ve read also appears to be passing complex data structures like mappings through calls; it’s probably impossible to keep that and prevent the abuse as call-local mappings at the same time. (Personally, I’d still argue that passing data around like that would better be done with more flexible calldata in principle, but anyways.)

What marking slots in EOF indeed doesn’t account for either is the increase in semantic complexity; that stays the same and is a clear, gaping con of transient storage in general. It may be valid to conclude that the pros outweigh this - I’m personally not convinced of that, but I can relate to the opposite position. The fact that this doesn’t even occur in the list of relative cons of the EIP, and that I don’t exactly see it conceded as a valid concern, makes me worry whether it is properly weighed at all, though. (Also not entirely sure people doing FV would appreciate a position of basically “FV is too hard anyways, so it doesn’t matter if it gets even harder” ;-))

But yeah, wrt memory mappings I guess you can get away with “well, then people just shouldn’t be doing that” and maybe that’s fair enough.

Wrt static analysis, auditing and FV, I’d at least want to make sure that this is given sufficient thought, since transient storage will make things more complex - that can’t just be denied entirely, can it? If that’s generally deemed worthwhile, that doesn’t make me happy, but is also fair - it just should be properly considered at all IMHO.

These two posts basically sum up my feelings on the issue. If the goal here is to provide a fundamentally different memory region - with different scoping, longevity, and pricing from any existing memory region - then it should be a separate memory region, rather than leaving it up to the clients to cache smarter.

I am fairly confident this will not be hard to implement in KEVM (the EVM semantics that RV maintains), and I don’t think it will increase the complexity of verification using KEVM significantly. I can’t speak for other tools. I think with good usage of this feature, it may even reduce the complexity of some verification efforts (for many variables you’ll be able to tell immediately that they cannot alias, for example; or if an entire modifier only uses transient variables, maybe we can have modular verification of the modifier more easily?).

It seems to me as well that it is not too much complexity for clients, because several clients have been modified to handle this new opcode, and tests have been provided (though maybe this could have been done earlier in the discussion; I know that having tests increases my confidence quite a bit: EIP-1153: Transient Storage tests by moodysalem · Pull Request #1091 · ethereum/tests · GitHub).

That being said, I cannot speak for other tools. Our semantics and verification are based on symbolic execution, which is different from what other tools do. I also can’t speak for the Solidity compiler, but does the Solidity compiler need to support the feature immediately? Can we let devs use inline assembly, let a few examples of how it’s being used trickle in, then give people the version of the feature that has compiler-guaranteed guardrails in place?

The first users of the feature, whether via the Solidity compiler or not, are going to be taking the brunt of the risk here (risk of not understanding the new feature correctly, or risk of the Solidity compiler behaving unexpectedly on the new feature). I am usually a fan of the “give the devs tools, and let them figure out how to not shoot themselves in the foot” approach. I do think that the devs’ opinions here are more valuable than my own; they are the ones trying to innovate here.

Also shoutout to @pcaversaccio for the diagram, I find these types of visualizations very helpful.

Ok, fair enough. And sure, we can provide plain assembly support immediately; properly optimizing may take a bit longer, and high-level language support a bit longer still, but we can manage. I’m not so much concerned with that as with the complexity of the language semantics, which inherits the increased complexity of the EVM semantics.
But if this is deemed a non-concern, I consider myself beat on this.

I do think it will be easy to make pathological hard-to-analyze code here.

But I don’t think these types of examples are what people will be trying to do formal verification on, and I think you could make the same or similar examples using normal storage.

Double-edged sword I guess.

If the TSTORE opcode is called within the context of a STATICCALL, the call must revert.

This is different wording than how STATICCALL handles writes in static contexts as defined in EIP-214. Is the behaviour intended to be different?

It’s not intended to be different, and I can adjust to this if it sounds more accurate:

If the TSTORE opcode is called within the context of a STATICCALL, it will result in an exception instead of performing the modification.
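In model terms, the intended behavior might look like the following Python sketch (class and exception names here are illustrative stand-ins, not client code): a TSTORE attempted inside a static call frame raises an exception, mirroring how EIP-214 treats other state-modifying operations.

```python
class StaticContextViolation(Exception):
    """Raised when a state-modifying op runs in a static context."""


class Frame:
    """Toy call frame carrying a static flag and a transient store."""

    def __init__(self, static=False):
        self.static = static
        self.transient = {}  # transaction-scoped key -> value

    def tstore(self, key, value):
        if self.static:
            # same rule as SSTORE et al. under EIP-214: exceptional abort
            raise StaticContextViolation("TSTORE in static context")
        self.transient[key] = value
```

The point of the rewording is just that the result is an exceptional halt of the frame, rather than the call itself "reverting" in the EIP-214 sense.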

I’d certainly appreciate it, but I’m also a pedant ;3

I don’t think the language semantics get much messier: this storage behaves the same as indexing a variable by the transaction. The transaction hasn’t appeared as an index before, so symbolic analysis based on model exploration will have to change, but it doesn’t seem that bad to me.

PEEPanEIP #91: EIP-1153: Transient storage opcodes with @moodysalem

So I spoke about this at some length with @moodysalem this week and I want to say I do support this proposal in principle. Here’s why: since transactions are the unit of atomicity in the EVM, it makes sense to have a data location which is also transaction scoped. It allows the developer to “reason about transaction atomicity”, which, as demonstrated by both the existence of reentrancy bugs and techniques to deal with them, is an extremely useful thing to be able to reason about. (In fact, one could argue that memory should have been transaction-scoped to begin with, although it’s a bit late for that).

By way of example, another use-case this enables is “critical sections” - scoped sections of code which, while entered, do not allow reentrancy into the contract at all (via checking a transient storage slot before entering the selector table). This is possible with regular storage of course, but it incurs the cost of an SLOAD at every single call to the contract.
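The critical-section pattern described above can be sketched in a few lines of Python, with a plain dict standing in for transaction-scoped transient storage (all names here are illustrative; on-chain this would be a TLOAD check and TSTORE writes around the guarded body):

```python
transient = {}   # stand-in for transient storage: survives across calls
LOCK_SLOT = 0    # slot used as the reentrancy lock

def guarded_call(body):
    """Run body() unless we are already inside a guarded call this txn."""
    if transient.get(LOCK_SLOT):
        raise RuntimeError("reentrancy blocked")
    transient[LOCK_SLOT] = 1        # TSTORE: cheap, no disk write
    try:
        return body()
    finally:
        transient[LOCK_SLOT] = 0    # unlock; no refund bookkeeping needed

def end_of_transaction():
    transient.clear()  # transient storage vanishes with the transaction
```

Compared with a storage-based guard, the lock check never touches disk-backed state, which is exactly why it avoids the per-call SLOAD cost mentioned above.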

So if you think of transient storage as a tool for reasoning about transactions, transient storage not clearing after every call might be a feature, not a bug, since you can trace information about a txn (ex. how many times a contract has been called in a particular txn – which, if I am not mistaken, is not currently possible with existing opcodes in the EVM). I do think that the concerns voiced about making it potentially harder to reason about contracts are valid! But maybe the complexity is a basic complexity of smart contract development that needs to be reasoned about anyway, and by adding this data location to the EVM we are just making it explicit.

I do have the issues with the API that I voiced above. I also think “transient storage” is a confusing name, since the scope of the data is much more like memory than storage as far as most programmers would be concerned - the whole point of the proposal is that data is never “stored” to disk. A better name might be “long-lived memory” or simply “transaction-scoped memory”. Lastly, I am unconvinced that this proposal is strictly better than other proposals which provide some sort of transaction scoping, for instance the TXID proposal from Nikolai (rest in peace). I haven’t considered the alternatives long enough. But this proposal may indeed be the happy middle ground in terms of usability and the use-cases it enables.

As far as language implementation goes, Vyper team is happy to support it at the language level. As has been pointed out, our existing implementation is a PoC. In principle, it works! And you can probably use it to try out the feature! But we will not officially release as a language feature or put the level of effort necessary for production until the EIP is scheduled for a fork.

I think using transient storage for “memory” mappings just because it has different addressing semantics from actual memory is an anti-pattern that can and will lead to the type of bugs that several people have raised concerns about in this thread. As I mentioned before, the lack of memory mappings in SC languages so far is a language restriction, not a VM restriction. You can see that C, C++, Python, Java, Rust, etc., all manage to implement map data structures with linear, not associative memory. As a language implementer myself - I would prefer that transient storage addresses the same way as memory, and to provide memory mappings as a language feature instead of having people fall back to transient storage mappings. Ultimately - transient storage should be used for things that require transient storage, and memory should be used for things that require memory. I think here is where the abstraction might leak, if developers are reaching for transient storage because of its addressing semantics instead of its volatility semantics.
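To illustrate the point that map data structures need only linear memory, here is a minimal open-addressing hash map over a flat word array, the same basic shape C, Rust, or Python runtimes use internally (fixed capacity, no deletion - just enough to show the addressing is linear, not associative):

```python
CAPACITY = 8  # power of two so we can mask instead of taking a modulus

class LinearMap:
    """Toy associative map backed by one flat array ("linear memory")."""

    def __init__(self):
        # flat memory: CAPACITY slots of (key, value) pairs; None = empty
        self.mem = [None] * (2 * CAPACITY)

    def _probe(self, key):
        i = hash(key) & (CAPACITY - 1)
        while True:
            base = 2 * i
            if self.mem[base] is None or self.mem[base] == key:
                return base
            i = (i + 1) & (CAPACITY - 1)  # linear probing on collision

    def set(self, key, value):
        base = self._probe(key)
        self.mem[base] = key
        self.mem[base + 1] = value

    def get(self, key):
        base = self._probe(key)
        return self.mem[base + 1] if self.mem[base] == key else None
```

In the EVM the hashing and probing would of course cost gas per step, which is the trade-off the surrounding discussion is about; the point here is only that associative *addressing* is not required to get associative *semantics*.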

I’m not sure I really follow what these two messages say. I think the two points you want to make are that transient storage might not be the best name, and that programmers reaching for this as an alternative to memory might be surprised by the semantics, so we should change the transient storage semantics to match those of memory to avoid the temptation. I’d personally be happy to consider renaming to “transaction-scoped storage” if that conveys the intent better.

I don’t agree with changing the addressing model to match that of memory, especially not for the reason of mismatch leading people to misuse it. First, compilers already have code generation that works with storage as addressed today, so it would be easy to take that code and have it output TSTOREs/TLOADs instead to interact with transient-marked storage. Using something memory-like would be a higher lift, but I don’t maintain a compiler, so happy to be corrected on this point. I do definitely want to be able to do storage-like things, like putting structures in there. You might say in the future we can do it with memory-like addressing, but that’s not ready now.

I don’t think the temptation argument really works. If programmers want memory with storage-like addressing, the right solution is to give them what they want, not take away transaction-scoped storage for fear they will use it instead. They are adults capable of making their own tradeoff decisions and ending up with bugs as a result.

There is a strong reason to use associative addressing in Ethereum, namely cost alignment. Computation in the EVM is expensive. Having contracts reimplement associative maps in the EVM when the Go code has them much more efficiently is a mistake imho. We should expose operations that are useful, with costs reflective of the actual implementation and evaluation costs, rather than requiring a number of expensive opcode invocations to achieve the same result.

I also think associative addressing avoids the difficult allocation and reallocation problems that linear memory - with its awkward sizing and costs for the range spanned - invites programs to run into. It’s just easier for everyone.

Can we get an update on both the client/EVM testing efforts and the DoS concerns that have been raised previously?

I mentioned it on Twitter just now (literally a minute ago, so don’t expect replies yet), but will leave the link in case anyone answers on there: https://twitter.com/hudsonjameson/status/1602366049911017496?s=20&t=6Dd1J9kgY8fBI2aEfOK8GA

@holiman: Expressing DoS concerns in April: Shanghai Planning · Issue #450 · ethereum/pm · GitHub

@moodysalem’s open PR with tests (seemingly just manual client tests, but it has a few mentions of DoS stuff): https://github.com/ethereum/tests/pull/1091

The semantics of the transient storage opcodes as currently proposed don’t do anything that the existing storage opcodes don’t already do in terms of node memory usage. Both are bound to their respective accounts, persist across successful calls, and lead to O(n) effort upon reverts. The main difference is that transient storage has a lower upfront gas cost, since it doesn’t need to read or write permanent storage.

This means that, unlike with storage, there can be a larger set of changes that may need to be reverted in total (the “journal”). However, this effort grows proportionally to the total number of TSTOREs possible in one transaction, meaning it is (or should be) priced into the opcode. Based on the threads you shared, this seems to be the main root of uncertainty around whether or not EIP-1153 could be a DoS vector.
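The journal behavior described here can be modeled in a few lines (a sketch of the idea only, not geth’s actual implementation; names are illustrative): every TSTORE appends an undo entry, a call frame's snapshot is just an index into the journal, and reverting replays O(n) entries.

```python
class TransientStore:
    """Toy journaled transient store with checkpoint/revert semantics."""

    def __init__(self):
        self.data = {}       # live transient state
        self.journal = []    # undo log of (key, previous_value)

    def checkpoint(self):
        # a snapshot is just the current journal length
        return len(self.journal)

    def tstore(self, key, value):
        self.journal.append((key, self.data.get(key, 0)))
        self.data[key] = value

    def tload(self, key):
        return self.data.get(key, 0)  # untouched slots read as zero

    def revert_to(self, checkpoint):
        # O(n) in the number of writes since the checkpoint
        while len(self.journal) > checkpoint:
            key, prev = self.journal.pop()
            self.data[key] = prev
```

The point in the surrounding discussion is that this is the same undo-log shape already needed for SSTORE reverts, so the revert cost scales with the number of writes either way, and that cost is what the per-TSTORE gas price has to cover.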

If it does look like TSTORE is priced too low at 100 gas, then arguably the SSTORE opcode’s warm, dirty price should also be changed.

With how storage reverts are currently implemented in geth, any DoS issue that exists for TSTORE will also exist for SSTORE. However there is no DoS issue with geth. This is covered in the EIP text.

There is also a test specifically for the worst-case O(n) revert scenario in the ethereum/tests PR, which is run against all clients. We are pretty certain that there is no DoS issue, but having this on a multi-client testnet will allow us to verify further. The EIP is blocked from merging into 2/5 clients because it’s not yet included in a HF.