Immutables, invariants, and upgradability

I can understand your point about multi-dimensional gas if you are talking about opcode cost sampling. Yes, the sampled gas cost for some opcode is a “sum” of the “orthogonal costs of bandwidth, storage, compute, etc.”.

But once the “combined” gas cost is sampled into a single number, I don’t see any reason for a user to split it back into dimensions. Multi-dimensional gas would imply a multi-dimensional gas price, but I don’t see who would need it. What would a user express by setting the network gas price higher than the computational gas price? Unclear to me.
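To make the “sum of orthogonal costs” concrete, here is a minimal sketch of how per-dimension costs might collapse into one scalar gas number. The dimension names, weights, and costs are all invented for illustration; nothing here reflects actual EVM pricing.

```python
# Hypothetical sketch: per-opcode resource costs along several dimensions
# are collapsed into a single scalar gas number via a weighted sum, so the
# user only ever sees (and prices) one figure. All values are made up.

DIMENSION_WEIGHTS = {"compute": 1, "bandwidth": 3, "storage": 20}

def scalar_gas(costs: dict) -> int:
    """Collapse per-dimension costs into one gas figure (weighted sum)."""
    return sum(DIMENSION_WEIGHTS[d] * c for d, c in costs.items())

# e.g. a storage-heavy opcode: light on compute, heavy on storage
assert scalar_gas({"compute": 5, "bandwidth": 0, "storage": 1000}) == 20005
```

Once this sum is fixed per opcode, a single gas price suffices; splitting it back into per-dimension prices adds nothing a user can act on.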

@lrettig, shouldn’t we rather extend the topic to “Immutables, invariants and upgradability”?
The objects are: smart contracts, the EVM, and the social contract around it.

currently we have it fragmented:

  • there are works on contract upgradability,
  • there are discussions on EVM upgradability,
  • social contract upgrades (like gas cost changes) are not even under discussion yet

I think all of this is in the same domain and tightly coupled.
It is really worthy of thoughtful research and specification.

1 Like

Back to your original question:

Do you think there will be no need to tune a single opcode’s cost in the future, even if the hardware changes significantly?

You’re right that multidimensional gas cost does not really help address this question. I think, yes, we probably do want to/need to be able to tune an opcode’s gas cost in the future. There are two ways we could tune:

  • Up, in case it’s too low, which would probably only happen to mitigate a DoS attack, which I would consider an emergency, and which in any case would definitely not increase the risk of re-entrancy
  • Down, in which case we might introduce a new, cheaper version of the opcode, or alternatively a new EVM version with a cheaper opcode
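The “down” option above can be sketched as per-version gas schedules: a new EVM version introduces the cheaper cost while contracts on the old version keep the schedule they were deployed under. The version numbers and costs below are invented for illustration.

```python
# Hypothetical sketch of per-EVM-version gas schedules. Tuning an opcode
# "down" means adding a new version with a cheaper cost; contracts on the
# old version are unaffected. All numbers here are made up.

GAS_SCHEDULES = {
    1: {"SSTORE": 20000, "SLOAD": 200},
    2: {"SSTORE": 5000, "SLOAD": 200},  # cheaper SSTORE in the new version
}

def gas_cost(evm_version: int, opcode: str) -> int:
    """Look up an opcode's cost under a given EVM version's schedule."""
    return GAS_SCHEDULES[evm_version][opcode]

assert gas_cost(1, "SSTORE") == 20000
assert gas_cost(2, "SSTORE") == 5000
```

Tuning “up” for DoS mitigation would instead edit the live schedule for everyone, which is why it reads as an emergency measure rather than a versioning question.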

Agree, good point, will update the subject

I meant re-entrancy locks. Unsure whether you mean the same thing by recursion locks.
Re-entrancy locks are simple and intuitive in Solidity, although quite expensive (exactly this issue was targeted by EIP-1283).
I am wondering what exactly you mean by “too complex / a lot of overhead” in Vyper?

Copying over some relevant posts on this topic from the other thread:

CC @mandrigin, @Arachnid, @rajeevgopalakrishna

1 Like

No. It introduces a condition that nearly all currently-deployed contracts were written without considering. This runs exactly opposite to the rest of your post.

Operation of all contracts will have to be reconsidered, and many (most?) will need to be rewritten, to answer a new question: “Who pays the rent?”

Either this, or some form of special-casing is introduced for “pre-rent” contracts; this increases system complexity a little, and incentivises “state hoarding” up until the feature is enabled (as described in this post). The latter can (probably) be worked around, but then it increases complexity greatly.


The problem is, we can’t reasonably expect both decentralisation and pay-once general storage.

Sharding (at best) delays this, and (at worst) allows a much more rapid growth.

Personally, I would much rather see exodus from the (future) Ethereum 1.x shard into rent-enabled shards, rather than the same free-for-all. At least if we’re to expect people to run PoS-enabled clients on their laptops. (Replace “shard” with “side-chain” if needed.)

Whether rent should be enabled on 1.x is (still) an open question, IMO. But it should be, eventually, somewhere. [All] costs should be internalised, otherwise the protocol will suffer a “tragedy of the commons”.

2 Likes

Same thing. I call it a “mutual recursion” issue, since we protect against recursion internally (a Vyper contract cannot recursively call itself).

We were brainstorming a way to do it behind the scenes, basically some sort of bloom filter mechanism that would be efficient enough in practice (one word per contract). We decided against it. The alternative is to track the call addresses explicitly per call, which would be very expensive.
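For readers curious what a “one word per contract” bloom filter might look like, here is a minimal sketch: each caller address sets a few bits in a single 256-bit word, and a call is flagged as possibly re-entrant when all of its bits are already set. The hashing scheme and bit count are invented; the false-positive behavior is the inherent bloom-filter trade-off, which is presumably part of why the idea was dropped.

```python
# Hypothetical sketch of a one-word-per-contract bloom filter for
# detecting (possibly) re-entrant callers. False positives can occur
# when different addresses happen to share bit positions.

import hashlib

def address_bits(addr: str, k: int = 3) -> list:
    """Derive k bit positions in [0, 256) from an address string."""
    h = hashlib.sha256(addr.encode()).digest()
    return [h[i] % 256 for i in range(k)]

def maybe_reentrant(word: int, addr: str) -> bool:
    """True if every bit for this address is already set in the word."""
    return all(word >> b & 1 for b in address_bits(addr))

def record_call(word: int, addr: str) -> int:
    """Set this address's bits in the 256-bit word."""
    for b in address_bits(addr):
        word |= 1 << b
    return word

word = 0
assert not maybe_reentrant(word, "0xabc")
word = record_call(word, "0xabc")
assert maybe_reentrant(word, "0xabc")
```

Tracking call addresses explicitly, by contrast, needs a growing set per call frame, which is where the “very expensive” alternative comes from.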

1 Like

Technical note:
Could they be EVM0.1, EVM1.0, EVM1.1, etc?

Those already exist: https://github.com/ethereum/py-evm/tree/master/eth/vm/forks

1 Like

It is true that a key idea of first principles thinking is to not reason based on analogy.

Different principles and priorities must be applied given that this is on a live network and all operations are costed out. Still, it is important to learn from how the PC and other platforms evolved, what principles they adopted for upgradability, and how they survived against competing ecosystems.

Every platform I have developed for or deployed, and every device we use, has defined certain known points of stability over time, but eventually most apps will break… because the platform must move forward or die. The saving grace is being able to quickly understand the context in which an app is running, breaking, or potentially becoming insecure if deployed on a newer version of the platform.

In “tech talks”, conferences, network upgrades, and in this upgradability discussion, I sense that we are getting beyond copying industry and into understanding why they do what they do.

What industry players do to maintain stability for their developers:

  • Maintain an up-to-date specification that captures the current, full system. Clearly number & describe the milestone releases and the updates within those releases.
  • Delineate the key parts of the system, their versions, and how they fit together into a milestone release of the platform.
  • Use concise language for the categories of expected behavior for developers deploying apps targeting a certain milestone release (microprocessors and other hardware developers call them “series”). A release isn’t just a set of new features described in specs.
  • Point to implications and areas of risk, due to other parts of the platform changing, for developers in a given milestone release.
  • Clearly describe policies around “what is supported”, e.g. LTS, STS. We must find a way to position this social contract in our decentralized situation, and establish what should and can feasibly be guaranteed.

Are there other ways of “platforms communicating with their devs” that I am missing?

3 Likes

Why haven’t you used locks in storage like in Solidity? Are we talking about locks programmed by devs, or built-in locks provided by the language for any function?

Built-in.

I wasn’t aware they existed in Solidity, but I would hesitate to add the complexity.

Re-entrancy locks exist in Solidity as a pattern (a modifier), not as a built-in feature. Nevertheless, they are quite simple and easy to use. For example, this one.
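The Solidity modifier pattern mentioned here amounts to a boolean flag set on entry and cleared on exit. The same idea, sketched in Python for illustration (the function names and the external-call callback are invented; this is not Solidity code):

```python
# Sketch of the re-entrancy lock pattern: a flag is set on entry and
# cleared on exit, so a nested call back into the guarded function
# raises instead of re-entering. Analogous to a Solidity modifier.

import functools

def no_reentrancy(fn):
    locked = False
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        nonlocal locked
        if locked:
            raise RuntimeError("re-entrant call")
        locked = True
        try:
            return fn(*args, **kwargs)
        finally:
            locked = False
    return wrapper

@no_reentrancy
def withdraw(amount, callback=None):
    if callback:
        callback()  # stands in for an external call that might call back in
    return amount

# a callback that tries to re-enter is rejected
try:
    withdraw(1, callback=lambda: withdraw(2))
except RuntimeError as e:
    assert str(e) == "re-entrant call"
```

In Solidity the flag lives in contract storage, which is exactly why each lock/unlock pair costs two SSTOREs and why EIP-1283’s net gas metering would have made the pattern much cheaper.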

1 Like

Ah, that’s what I thought. We were proposing it as a feature.

Sorry, this wasn’t really a very clear description from me. What I meant to say is that it would likely require a change to consensus data structures (specifically, the accounts struct) to record the version. The version opcode I was referring to would avoid the need for that, though.
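To illustrate the trade-off in that paragraph: recording a version per account would extend the consensus accounts struct, whereas a version opcode would let code query the version without touching that struct. A minimal sketch, with invented field names (the real account struct and any such opcode are not specified here):

```python
# Hypothetical sketch: adding an EVM version to the accounts struct is a
# consensus data-structure change; a VERSION opcode would avoid it.
# Field names are simplified/invented for illustration.

from dataclasses import dataclass

@dataclass
class Account:
    """Simplified stand-in for today's consensus account record."""
    nonce: int
    balance: int
    storage_root: bytes
    code_hash: bytes

@dataclass
class VersionedAccount(Account):
    """The change the post wants to avoid: version stored per account."""
    evm_version: int = 1

acct = VersionedAccount(nonce=0, balance=0, storage_root=b"", code_hash=b"",
                        evm_version=2)
assert acct.evm_version == 2
```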

Can you give an example?

RE locks, personally I believe these are a code smell; I’ve yet to see a contract designed with locks that couldn’t be rewritten to be safe without them. I really think they’re a bandaid developers will use to avoid having to reason about how their code works properly, and will encourage bad development practice.

That said, 1283 would have made them more affordable to use.

2 Likes

This might have belonged better here.

My main point in this post is discussing what we find OK and what we don’t. I can create a contract which changes behavior if a previously unassigned opcode becomes assigned at a fork. Do we now mark all these opcodes as INVALID because a single contract would suddenly change behavior? Probably not. But what if 10% of all Ether is held in it? Can I hold the network hostage against forks this way?

Constantinople is delayed because possibly many contracts are affected. But what is the minimum amount of impact a fork may have before it must be delayed?

@Arachnid, let me provide just a quick idea.

Consider the PaymentSharer example provided by ChainSecurity.
But instead of using it directly, let’s create a new GeneralProxy to the initially deployed PaymentSharer instead of redeploying it. The proxy will reuse its logic (EVM_v1), but with its own storage (EVM_v2) with cheap SSTORE.

Moreover, a more complex dispatching proxy could redirect to different contracts deployed under different EVM versions. In which EVM version should the proxy operate?
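The ambiguity can be shown with a small sketch. The class names, selectors, and version numbers below are invented; the point is only that once a proxy and its targets carry different versions, something must decide which schedule a forwarded call executes under.

```python
# Hypothetical sketch of the dispatching-proxy question: a proxy deployed
# under one EVM version forwards calls to targets deployed under others.
# Whose version (and gas schedule) governs the forwarded call?

class Contract:
    def __init__(self, name: str, evm_version: int):
        self.name = name
        self.evm_version = evm_version

class DispatchingProxy(Contract):
    def __init__(self, evm_version: int, routes: dict):
        super().__init__("GeneralProxy", evm_version)
        self.routes = routes  # selector -> target contract

    def dispatch(self, selector: str) -> int:
        target = self.routes[selector]
        # Open question from the post: should execution use
        # self.evm_version (the proxy's) or target.evm_version (the
        # callee's)? Here we report the callee's, one possible answer.
        return target.evm_version

sharer = Contract("PaymentSharer", evm_version=1)
vault = Contract("Vault", evm_version=2)
proxy = DispatchingProxy(evm_version=2,
                         routes={"share": sharer, "store": vault})
assert proxy.dispatch("share") == 1
assert proxy.dispatch("store") == 2
```

Whichever answer is chosen, DELEGATECALL-style proxies and libraries blur the boundary between caller and callee code, which is exactly why the edge cases need spelling out.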

Maybe we will face similar challenges using libraries.

I like the idea of EVM versioning, but it requires careful design through all the edge cases.

It’s hard to talk about immutability in a vacuum. While I look forward to Eth 2.0 being a fresh start, eventually that too will become bogged down with technical debt. There was some talk maybe a year ago about a multi-tiered system to satisfy both the risk-tolerant and the risk-averse (whether it’s different rules for different shards or something else). Of course this brings additional complexity, of which there’s already no shortage.

I still find it profoundly stupid to consider gas cost invariant (and judging by how little code this pricing change broke, maybe most developers agree?). Hardware and expenses associated with hardware change every year and as a result so do the relative costs of memory vs CPU vs storage usage. Maybe language tools can do more to prevent us from relying on gas cost for program behavior. I wish information about gas was completely inaccessible to contracts so that they would be unable to branch on it. I don’t want my program doing different things based on how much power it’s getting from the wall. It should either have enough gas to complete or not. Ideally gas costs should be market-driven in real time and I hope there’s a way to get there eventually.

I was hoping that this year Ethereum would scale 10x in terms of ops/s. It seems increasingly unlikely given how seriously we treat de facto invariants such as gas cost and how every time we fork/upgrade it’s like we’re defusing a nuclear weapon. Like everyone else, I want to have my cake and eat it too. Maybe this means focusing on Layer 2.

CPUs live in an adversarial environment as well. Bugs in their chips can break an unknowable number of programs and open an unknowable number of security holes. So adversaries are always looking for bugs. And as @jpitts @rajeevgopalakrishna point out, Intel takes backwards compatibility seriously. “We put the backwards in backwards-compatible.” The architecture of the original Intel hand calculator is still visible in their current chips, and the code for it still runs.

Whether gas should be immutable shouldn’t be a difficult question. That hand calculator had performance limits that are far below current chips. Should current chips be purposely hobbled to match it?

2 Likes