I wasn’t aware they existed in Solidity, but I would hesitate to add the complexity.
Re-entrance locks exist in Solidity as a pattern (a modifier), not as a built-in feature. Nevertheless, they are quite simple and easy to use.
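For example, a minimal sketch of such a modifier (names here are illustrative, not taken from any particular library):

```solidity
pragma solidity ^0.5.0;

// Sketch of the re-entrance lock ("mutex") modifier pattern.
contract ReentrancyGuard {
    bool private locked;

    modifier noReentrance() {
        require(!locked, "re-entrant call");
        locked = true;
        _;
        locked = false;
    }
}

contract Vault is ReentrancyGuard {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external noReentrance {
        require(balances[msg.sender] >= amount);
        // The lock is held across this external call, so a re-entrant
        // call into withdraw() hits the require() in the modifier and
        // reverts, even though state is updated only afterwards.
        (bool ok, ) = msg.sender.call.value(amount)("");
        require(ok);
        balances[msg.sender] -= amount;
    }
}
```

Note that the lock does a net-zero storage write (set, then clear), which is exactly the case EIP-1283's net metering would have made much cheaper.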
Ah, that’s what I thought. We were proposing it as a feature.
Sorry, this wasn’t really a very clear description from me. What I meant to say is that it would likely require a change to consensus data structures (specifically, the accounts struct) to record the version. The version opcode I was referring to would avoid the need for that, though.
Can you give an example?
RE locks, personally I believe these are a code smell; I’ve yet to see a contract designed with locks that couldn’t be rewritten to be safe without them. I really think they’re a bandaid developers will use to avoid having to reason about how their code works properly, and will encourage bad development practice.
That said, 1283 would have made them more affordable to use.
This might have belonged here better.
The main point of my post is to discuss what we find OK and what we don't. I can create a contract whose behavior changes if a previously unoccupied opcode is assigned at a fork. Do we now mark all such opcodes INVALID because a single contract would suddenly change behavior? Probably not. But what if 10% of all Ether were held in it? Could I hold the network hostage against forks this way?
Constantinople is delayed because many contracts are potentially affected. But what is the minimum amount of impact a fork must have before we delay it?
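To make the hostage scenario concrete, here is a rough, untested sketch: deploy two bytes of raw code containing `0x4a` (unassigned at the time of writing, chosen arbitrarily) and observe whether calling it still fails. All names are hypothetical.

```solidity
pragma solidity ^0.5.0;

// Behavior flips if a fork ever assigns a meaning to opcode 0x4a.
contract OpcodeProbe {
    address public probe;

    constructor() public {
        // Init code that returns the 2-byte runtime code 0x4a 0x00:
        // PUSH2 0x4a00, PUSH1 0, MSTORE, PUSH1 2, PUSH1 30, RETURN.
        bytes memory initCode = hex"614a006000526002601ef3";
        address a;
        assembly { a := create(0, add(initCode, 0x20), mload(initCode)) }
        probe = a;
    }

    function forked() public returns (bool) {
        // Today the call hits an invalid instruction and returns false
        // (consuming the forwarded gas); after a fork that assigns
        // 0x4a, it may succeed and this contract behaves differently.
        (bool ok, ) = probe.call("");
        return ok;
    }
}
```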
@Arachnid, let me provide just a quick idea.
Take the PaymentSharer example provided by ChainSecurity. But instead of using it directly, let's create a new GeneralProxy pointing at the initially deployed PaymentSharer rather than redeploying it. The proxy will reuse its logic (EVM_v1), but with its own storage (EVM_v2) with cheap SSTORE.
Moreover, a more complex dispatching Proxy could redirect to different contracts deployed to different EVM versions. In which EVM version should the Proxy operate?
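For concreteness, a minimal (naive) sketch of such a proxy, assuming a plain DELEGATECALL forwarder:

```solidity
pragma solidity ^0.5.0;

// Forward every call to a fixed logic contract via DELEGATECALL, so
// its code runs against the proxy's own storage. Under EVM versioning,
// that storage would live in whatever version the proxy itself was
// deployed under, which is exactly the ambiguity raised above.
contract GeneralProxy {
    address public logic; // e.g. the already-deployed PaymentSharer

    constructor(address _logic) public {
        logic = _logic;
    }

    function() external payable {
        (bool ok, bytes memory ret) = logic.delegatecall(msg.data);
        require(ok);
        // Return the delegatecall's output to the original caller.
        assembly { return(add(ret, 0x20), mload(ret)) }
    }
}
```

(A real proxy must also avoid storage-slot collisions between its own `logic` variable and the logic contract's storage; that is omitted here for brevity.)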
Maybe we will face similar challenges using libraries.
I like the idea of EVM versioning, but it requires careful design through all the edge cases.
It’s hard to talk about immutability in a vacuum. While I look forward to Eth 2.0 being a fresh start, eventually that too will become bogged down with technical debt. There was some talk maybe a year ago about a multi-tiered system to satisfy both the risk-tolerant and the risk-averse (whether it’s different rules for different shards or something else). Of course this brings additional complexity, of which there’s already no shortage.
I still find it profoundly stupid to consider gas cost invariant (and judging by how little code this pricing change broke, maybe most developers agree?). Hardware and expenses associated with hardware change every year and as a result so do the relative costs of memory vs CPU vs storage usage. Maybe language tools can do more to prevent us from relying on gas cost for program behavior. I wish information about gas was completely inaccessible to contracts so that they would be unable to branch on it. I don’t want my program doing different things based on how much power it’s getting from the wall. It should either have enough gas to complete or not. Ideally gas costs should be market-driven in real time and I hope there’s a way to get there eventually.
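The anti-pattern argued against above can be stated in three lines (illustrative contract, not from any real codebase):

```solidity
pragma solidity ^0.5.0;

// A contract that branches on gasleft(): its observable behavior
// silently changes whenever opcode prices change.
contract GasBranch {
    uint256 public mode;

    function run() external {
        if (gasleft() < 40000) {
            mode = 1; // "cheap" fallback path
        } else {
            mode = 2; // full path
        }
    }
}
```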
I was hoping that this year Ethereum would scale 10x in terms of ops/s. It seems increasingly unlikely given how seriously we treat de facto invariants such as gas cost and how every time we fork/upgrade it’s like we’re defusing a nuclear weapon. Like everyone else, I want to have my cake and eat it too. Maybe this means focusing on Layer 2.
CPUs live in an adversarial environment as well. Bugs in their chips can break an unknowable number of programs and open an unknowable number of security holes. So adversaries are always looking for bugs. And as @jpitts @rajeevgopalakrishna point out, Intel takes backwards compatibility seriously. “We put the backwards in backwards-compatible.” The architecture of the original Intel hand calculator is still visible in their current chips, and the code for it still runs.
Whether gas should be immutable shouldn’t be a difficult question. That hand calculator had performance limits that are far below current chips. Should current chips be purposely hobbled to match it?
Just to be clear, I’m not suggesting it be invariant, just that, if we lower the gas cost of an opcode, we do it by introducing a new, cheaper version of the opcode. Or we use engine versioning, as discussed here (I like @arachnid’s proposal)–they achieve the same thing wrt gas pricing. Or maybe we need to think outside the box more and introduce multiple tiers, as you suggest–these could be shards, or they could even be at layer two. There’s something elegant about the idea of shards running different engine versions, since it could provide an economic incentive (cheaper gas) for developers to migrate contracts from older shards to newer ones. This is a step towards gas costs being market-driven as you suggest.
Guys, I think this is quite an important topic, worth discussing at the Magicians Council in Paris.
Who is interested in joining the conversation there? Please raise your hand.
In order to get a time slot, we need to show there are enough people interested in the discussion.
Me, obviously haha!
I have heavily used re-entrance locks, protecting functions whenever there was even the smallest possibility of re-entrance. I tried to avoid any assumptions about how external code executes.
What is your suggestion or pattern?
Call external code after making all state changes when practical. When not, determine what your invariants are, and ensure they always hold any time you call out to external code.
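The suggestion above is commonly called "checks-effects-interactions"; a minimal sketch of the same `withdraw` shape, safe without any lock:

```solidity
pragma solidity ^0.5.0;

contract Vault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw(uint256 amount) external {
        // Checks
        require(balances[msg.sender] >= amount);
        // Effects: update state *before* the external call, so any
        // re-entrant call already sees the reduced balance.
        balances[msg.sender] -= amount;
        // Interactions: external call comes last.
        (bool ok, ) = msg.sender.call.value(amount)("");
        require(ok);
    }
}
```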
I don’t think the problem is that lowering (or changing) costs is dangerous per se. The EIP-1283 bug involved subtle assumptions about specific gas costs that were commonly used for a particular purpose. I actually don’t expect there are very many of those, and lots of other programs could be written whose behavior would change if certain gas costs got lower with no complaints at all–they would just be able to do more of what they do before running out of gas. Which is the whole idea of gas.
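An illustration (hypothetical names) of the kind of subtle assumption involved: `transfer()` forwards a fixed 2300-gas stipend, and code was written assuming that stipend is too small for a recipient's fallback to perform an SSTORE. With net-metered SSTORE (as cheap as 200 gas for an already-written slot), that implicit guard would have stopped holding.

```solidity
pragma solidity ^0.5.0;

contract Payer {
    function paySplit(address payable a, address payable b) external payable {
        // Two transfers in sequence, "safe" only on the assumption
        // that the 2300-gas stipend prevents the first recipient from
        // changing any state observed by the second.
        a.transfer(msg.value / 2);
        b.transfer(msg.value / 2);
    }
}
```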
So, adding the complexity and working out all the edge cases of a new versioning, tiering, or context-passing system, or of offering up whole new sets of replacement opcodes (e.g. all of the arithmetic opcodes), before we can change the gas cost of opcodes that are overpriced? That all sounds like jumping out of the frying pan and into the fire. We need to do some sort of versioning at some point, but not so that programs can learn which gas-price regime they are running under. I think it just needs to be made clear that gas prices are subject to change without notice.
I totally agree. Also, there’s a historic precedent to this, so people/code should not make assumptions about particular gas costs. Doing so means that the dev has gone off the trail, like doing some evm experimentation with assembly.
If, OTOH, we find that Solidity has some implicit assumptions about gas cost, then we should try to respect that (IIRC, there were some early assumptions about gas costs when using the IDENTITY precompile, which we had to tread around very carefully when we changed how the 63/64ths rule worked).
This is probably the best take. The problem wasn’t that lowering the gas cost broke a user assumption. The problem is that assumption was there in the first place.
Right, but where do we draw the line going forward? Are we comfortable changing gas costs? Then why weren't we comfortable doing it in this case, and how will it be different next time? How do we communicate this to developers and make sure they factor it in, so that future changes don't break "invariants" that they should not be relying on?
A few things, I guess:
I agree with the rationale expressed by @gcolvin & @holiman and the questions/suggestions from @lrettig & @fubuloubu. We cannot prevent the creativity of developers (if inline assembly is supported by a language, it is fair game), only anticipate it, and hence establish well-documented guard-rails on invariants/assumptions and any guarantees on backward compatibility/interoperability going forward. This is going to be even more critical with all the upcoming changes, such as ewasm.