Notes from the EVM Evolution session at the 2018 Berlin Council. I had to type pretty quickly; apologies if any points are missing (please leave a comment if this is the case). Since the conversation jumped back and forth a bit, these are organized by topic, not chronologically. Thanks everyone for participating and making this a great session!
History
256-bit native is weird
Lots of cryptography relies on large numbers
Many people questioning why this design decision was made
Why a single fixed size? Why not something like the CLR or JVM? (See the sketch after this list.)
Only one VM expert early on, none at the beginning
Looked like a simple “classroom exercise”
Resources now going into WASM instead of EVM 1.5
EIPs for subroutines, deprecating JUMP, and vector/arbitrary access
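A minimal sketch (my illustration in Python, not from the session) of what the single 256-bit word size means in practice: every arithmetic instruction wraps modulo 2**256, which suits crypto-sized integers but forces compilers to emit extra masking for anything narrower.

```python
# Illustrative only: EVM-style 256-bit word arithmetic simulated in Python.
WORD_MASK = (1 << 256) - 1  # 2**256 - 1

def add256(a: int, b: int) -> int:
    """ADD as the EVM defines it: wraps modulo 2**256."""
    return (a + b) & WORD_MASK

def add8(a: int, b: int) -> int:
    """There is no native uint8 add; a compiler has to mask the 256-bit
    result back down, paying for the truncation."""
    return add256(a, b) & 0xFF

assert add256(WORD_MASK, 1) == 0   # overflow wraps to zero
assert add8(255, 1) == 0           # uint8 overflow needs explicit masking
```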
EVM
Old EVM contracts must always be available, especially since they can create more contracts for all time
1.5 Roadmap
Discontinued / deprecated
Subroutines and static jumps: unfinished; Sol was not mature enough to test against
Vector processing complete, but untested w/ Sol
Quasi-official statement: eWASM is on the roadmap, on par with Casper/sharding
Client devs frustrated because effort wasted (ex. previous Casper spec)
Register width too wide (256 bits), storage too narrow (256 bits); see the sketch at the end of this list
Why not pay for pages?
How about immutable data structures?
Probably designed this way for simplicity rather than real-world workloads
These changes may make existing contracts less efficient
How realistic to get this on mainnet?
No resources allocated for paid work
Probably worth pushing forward to core devs
A client team would need to champion it
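To make the "storage too narrow / why not pay for pages?" point concrete, here is a toy sketch (my own hypothetical numbers and names, not anything proposed in the session) contrasting today's per-256-bit-slot cost granularity with a page-granular alternative:

```python
# Toy comparison, not a proposal: per-slot vs. hypothetical per-page pricing.
SLOT_BYTES = 32      # today every SSTORE key addresses one 256-bit slot
PAGE_BYTES = 4096    # hypothetical page size, chosen arbitrarily here

def slots_touched(byte_offsets: list[int]) -> int:
    """Distinct 32-byte slots written; each is priced separately today."""
    return len({off // SLOT_BYTES for off in byte_offsets})

def pages_touched(byte_offsets: list[int]) -> int:
    """Distinct pages written; a page model would price these instead."""
    return len({off // PAGE_BYTES for off in byte_offsets})

writes = [0, 32, 64, 96]                             # four adjacent 32-byte values
print(slots_touched(writes), pages_touched(writes))  # -> 4 1
```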
Architecture
Stack machines perform very well on wide words
Register machines better with narrow words
The choice is arbitrary given a good compiler
Initial EVM design used the stack for working memory; Sol uses it as a call stack
Sol has to do weird stuff because there are no subroutine instructions (see the sketch below)
Contracts don’t live long enough to really need freeing refs
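Roughly what that "weird stuff" looks like, sketched loosely (the mnemonics are shorthand, not actual compiler output, and the EIP-615 operand encoding is elided): with no call/return instructions a compiler pushes a return address and uses dynamic JUMPs, so the call graph is hidden in stack data; subroutine instructions make it explicit.

```python
# Illustration only: two calling conventions, written as mnemonic lists.

# Today: the return address is ordinary stack data and both jumps are
# dynamic, so a static analyzer cannot easily recover the call graph.
emulated_call = [
    "PUSH <return_label>",    # return address pushed as data
    "PUSH <func_label>",
    "JUMP",                   # dynamic jump into the "function"
    "JUMPDEST <return_label>",
    # ...the callee ends with another dynamic JUMP back to <return_label>
]

# With EIP-615-style subroutines, call and return are explicit instructions,
# so control flow is visible without tracking stack contents.
subroutine_call = [
    "JUMPSUB <func_label>",   # call
    # ...callee body ends with:
    "RETURNSUB",              # return to the instruction after JUMPSUB
]
```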
Also, what consideration has been given to the impact of including eWASM in the Ethereum 2.0 roadmap, since existing compilers and toolsets will have to be substantially rewritten to accommodate the new design? Will the eWASM work be released before Ethereum 2.0 so that this preparation can happen ahead of the upgraded network's release?
In the eWasm standup there was discussion of parallel, coordinated development of 1.5 and eWasm. From memory (there are notes somewhere):
Delays in the Cheshire Casper mean the current EVM must live even longer than anticipated. That leaves users with an incomplete, formally intractable machine. Formally intractable means it’s hard to prove properties like “will not vaporize a million ETH.”
More progress on EVM 1.5 has been made than people realize: much of the eWasm work, like Iulia, can generate 1.5 bytecodes, and many 1.5 bytecodes have been implemented.
eWasm is in some ways more experimental and requires more resources. E.g. it requires compilers to achieve performance goals, whereas the first phase of EVM 1.5 (EIP-615) only requires interpreter extensions.
eWasm experiments fairly naturally fit with casper/sharding experiments, whereas EVM1 evolution fits fairly naturally on the main chain. eWasm can start moving to the main chain when stable.
Transpilers and K specs can keep the two from conflicting.
Alternatively, eWasm can become a shard-only VM, with EVM remaining the mainchain VM.
I’m not the expert, but I don’t think shasper requires any changes to the mainchain.
It’s been a while but my interpretation of that line item is as follows. Today, if you want to build a new Ethereum client from scratch, you have to implement a bunch of things including precompiles in trusted, native code. In an Ewasm-enabled world, a lot of this logic can live on chain inside regular contracts running safely inside the VM. In other words, they no longer need to be trusted code. Over time, more bits and bobs of the codebase could be migrated on chain in an Ewasm-enabled world.
I took the original comment to mean that WASM had a lower attack surface than the EVM. I was thinking about the DoS hardness of the two options.
The correct interpretation, as you mentioned, is that eWASM can eliminate our dependence on precompiles, which are an attack surface of the current design (computations whose true cost isn’t well reflected).
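One way to picture that point: today a client hard-codes native implementations at reserved addresses, and every new client must reimplement them and hand-tune their gas costs; with an Ewasm-capable VM that logic could live in ordinary on-chain code. A simplified, hypothetical sketch of the dispatch (not any real client's code):

```python
# Simplified, hypothetical dispatch; names and structure are illustrative only.

def run_native(name: str, calldata: bytes) -> bytes:
    """Stands in for a trusted, natively implemented precompile."""
    return b"native:" + name.encode()

def run_in_vm(code: bytes, calldata: bytes) -> bytes:
    """Stands in for sandboxed, gas-metered execution inside the VM."""
    return b"vm-result"

NATIVE_PRECOMPILES = {1: "ecrecover", 2: "sha256"}  # per-client trusted code today

def execute(address: int, code: bytes, calldata: bytes) -> bytes:
    if address in NATIVE_PRECOMPILES:
        # Runs outside the VM's sandbox and metering; every client must
        # reimplement it, and its gas cost is set by hand.
        return run_native(NATIVE_PRECOMPILES[address], calldata)
    # In an Ewasm-enabled world the same logic could ship as ordinary
    # on-chain code and run here like any other contract.
    return run_in_vm(code, calldata)
```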
(Hello Magicians. Porting this over from twitter, commenting on the Working Group Proposal; thanks Lane)
Regarding security… does the existence of two parallel VMs introduce systemic risk? The increased complexity of having separate VMs executing separate instruction sets seems to increase the attack surface. In other words, the security benefits gained from eliminating dependence on precompiles might be countered by the complexity introduced by the above proposal. Or can those risks be managed? And loosely speaking, how would you go about testing this setup? Could you throw fuzzers at it, similar to how Geth/Parity is tested?
On another note, it seems like this would make life a bit easier for end-user DApp developers. If so, that’s a nice win : )
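On the fuzzing question, the Geth/Parity analogy suggests differential testing: feed the same program and inputs to both VMs (or to a contract and its transpiled counterpart) and flag any divergence. A bare-bones sketch of that loop, with the harness interfaces left as hypothetical placeholders:

```python
import random

def differential_fuzz(run_evm, run_ewasm, transpile, rounds: int = 1000):
    """Run random programs on an EVM harness and its eWASM transpilation;
    any divergence (or crash) is a candidate consensus/security bug.
    run_evm, run_ewasm, and transpile are placeholders, not real APIs."""
    findings = []
    for _ in range(rounds):
        code = bytes(random.randrange(256) for _ in range(64))      # random program
        calldata = bytes(random.randrange(256) for _ in range(32))  # random input
        try:
            expected = run_evm(code, calldata)
            actual = run_ewasm(transpile(code), calldata)
        except Exception as exc:            # crashes are findings too
            findings.append((code, calldata, exc))
            continue
        if expected != actual:
            findings.append((code, calldata, (expected, actual)))
    return findings
```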
I think Lane just remembered to come back and answer some of these questions from the session in Prague - but all good as long as questions get answered!
@boris is right, sorry to misdirect you @seven7hwave, let’s take the discussion to that other, more recent thread? The conversation @fubuloubu and I were just having here is relevant, too, though.