Long-term L1 execution layer proposal: replace the EVM with RISC-V

@vbuterin this is an awesome idea.

Realizing the full potential of a global distributed computer is fundamentally constrained by Layer 1 transaction throughput. While L2 solutions are vital, enhancing core L1 performance remains crucial for broader adoption and further success.

A key advantage of RISC-V is its defined extensibility. We should investigate defining a set of custom RISC-V instructions specifically designed to accelerate core, performance-critical EVM opcodes.

RISC-V’s open nature permits specialized hardware implementations (ASICs, FPGAs) beyond generic CPU execution. This offers a path to significant L1 TPS improvements by accelerating core EVM logic directly in silicon, potentially orders of magnitude faster than current software interpretation or JIT approaches.

Verifiability & Security: The modularity and clean design of RISC-V lend themselves more readily to formal verification methods compared to complex legacy ISAs. A formally verified RISC-V core executing EVM logic could provide much stronger guarantees about runtime behavior, crucial for securing high-value smart contracts.

It would be great for the community to initiate focused research tracks and working groups to:

  • Benchmark existing EVM implementations against potential RISC-V software models. @MASDXI - revive/PolkaVM looks great - it currently only targets RV32EM, which is worth discussing.
  • Identify high-impact EVM operations suitable for custom RISC-V instruction acceleration.
  • Develop proof-of-concept RISC-V models (FPGA/emulation) with custom EVM extensions.
  • Engage with the RISC-V community on standardization potential for blockchain-specific extensions.
  • Evaluate the formal verification advantages and challenges.

RISC-V, potentially enhanced with custom EVM-centric instructions, offers a compelling path towards a more performant, secure, and scalable Layer 1 :+1:

Vitalik Buterin claims that replacing the Ethereum Virtual Machine (EVM) with RISC-V could improve zero-knowledge (ZK) proof efficiency by 50 to 100 times. However, is RISC-V truly superior? The EVM has been a stable, battle-tested environment for approximately nine years, while RISC-V lacks substantial real-world experience in blockchain execution contexts. Although PolkaVM has adopted RISC-V, I believe it has not been adequately validated, as it has yet to be thoroughly proven on a mainnet.

The EVM is specifically optimized for smart contract execution, whereas RISC-V, designed as a general-purpose architecture, may lack tailored optimizations for blockchain use cases. While RISC-V’s versatility allows the use of programming languages from other blockchains, Vitalik himself noted that improvements leveraging existing Solidity are preferable. Transitioning the entire ecosystem to a new architecture is a daunting challenge.

Implementing RISC-V in software inevitably leads to performance degradation. Using an emulator for software-based execution raises doubts about its ability to process tasks efficiently. On the other hand, adopting RISC-V hardware would entail significant transition costs. I believe that ZK-EVMs already provide sufficient efficiency for current needs. When considering the costs of development, the effort required for transition, and the potential for unforeseen errors, replacing the EVM with RISC-V does not seem like a compelling approach.

While transitioning to RISC-V may offer potential benefits, I argue that improving ZK-EVMs and optimizing the existing EVM are more practical and stable alternatives.

But… why? And this is a genuine question that I don’t understand and need more context about.

The supposed answer is performance. But until proven, this might be just a false promise. And if you are getting 100x-1000x improvements in benchmarks, you are clearly doing something wrong, and the overhead should be avoidable at least to some extent.

There are multiple projects that try compiling EVM bytecode to native code (revmc, Nethermind’s IL-EVM, etc.), so why not build a compiler to RISC-V that would give us the speedups we need without requiring a complete paradigm change?
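To make the compiler idea concrete, here is a toy ahead-of-time translator from a few EVM opcodes to RISC-V-style assembly text, in the spirit of projects like revmc or IL-EVM. The opcode subset, register allocation scheme, and emitted mnemonics are simplified assumptions for illustration, not any project’s real output (a real translator would also need 256-bit arithmetic, gas metering, and jump validation):

```python
# Toy EVM -> RISC-V-flavoured translator: maps the EVM's virtual stack
# onto temporary registers t0, t1, ... and emits one native instruction
# per simple opcode. Purely illustrative.

def compile_evm_to_riscv(bytecode: bytes) -> list:
    asm = []
    pc = 0
    depth = 0  # virtual stack depth, mapped onto registers t0, t1, ...
    while pc < len(bytecode):
        op = bytecode[pc]
        if op == 0x60:  # PUSH1 imm
            imm = bytecode[pc + 1]
            asm.append(f"li   t{depth}, {imm}")  # load immediate
            depth += 1
            pc += 2
        elif op == 0x01:  # ADD (toy: 64-bit add; real EVM ADD is 256-bit)
            depth -= 1
            asm.append(f"add  t{depth - 1}, t{depth - 1}, t{depth}")
            pc += 1
        elif op == 0x00:  # STOP
            asm.append("ret")
            pc += 1
        else:
            raise NotImplementedError(f"opcode {op:#x}")
    return asm

# PUSH1 2; PUSH1 3; ADD; STOP
print(compile_evm_to_riscv(bytes([0x60, 0x02, 0x60, 0x03, 0x01, 0x00])))
```

The point of the sketch is that the stack-to-register mapping is mechanical once the bytecode is easy to introspect, which is exactly where EOF helps.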

The potential problem with that is that classic EVM bytecode is hard to introspect and thus hard to compile, but we have a solution just around the corner - EOF, which should make things much simpler.

Having an abstract EVM that is easy to compile to other targets, while staying laser-focused on domain problems, is probably better than using a general-purpose architecture. This is the foundation of Java, .NET, and LLVM, and it works well for them; with some effort it is not a performance bottleneck (for example, check the performance of the Nethermind client, which is built on .NET).

Also, I have no idea about the hardware that is coming, especially zk-circuits, but my bet is that general-purpose RISC-V CPUs will have a hard time catching up to x86 and ARM.


This proposal is pretty radical, but I believe it carries the seeds of a renaissance for Ethereum. Native RISC-V execution could supercharge performance, enable more efficient ZK proofs, and better align with future hardware acceleration.

That said, several concerns give me pause:

  • The complexity of implementation could introduce new vulnerabilities and slow down core development.
  • A RISC-V shift raises the bar for low-level developers, potentially increasing the learning curve and reducing accessibility.
  • Without adoption from other EVM chains, we risk fragmenting the broader ecosystem.
  • Backward compatibility with the vast existing contract base may be difficult to guarantee.
  • And finally, coordinating such a transformation through governance and a potential hard fork is no small feat.

Hi. I’m the guy responsible for PolkaVM. Since I see some misinformation flying around here and there let me chime in with some details as to what PolkaVM is and how it works.

PolkaVM currently supports riscv64emac with the Zbb extension, but unlike most (all?) other RISC-V VMs it doesn’t run RISC-V binaries as-is (it’s not actually a RISC-V VM!). Offline we ingest normal RISC-V ELF binaries that you build using a normal compiler, and we translate them into a more constrained and efficient custom bytecode that’s designed for use in a VM (and not native hardware, like RISC-V is). The idea is to remove as much complexity out of the VM itself (which needs to run on-chain), put as much of that complexity as possible into offline tooling (which can run off-chain), and improve security by removing unnecessary features (e.g. in vanilla RISC-V you can jump to any address; in PolkaVM bytecode the code address space is fully virtualized, you can’t jump everywhere, and the bytecode isn’t even loaded into memory that’s accessible by the program).
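The jump-target virtualization described above can be illustrated with a toy model: valid jump targets are collected offline into a dense table, and a translated jump may only name a table entry. The table layout and function names here are invented for the example; PolkaVM’s real bytecode format differs.

```python
# Toy illustration of virtualizing the code address space: the offline
# translator whitelists valid jump targets, so the runtime VM never
# sees a wild jump into arbitrary code.

def build_jump_table(valid_targets):
    """Map the raw code offsets of valid jump targets to dense indices."""
    return {off: i for i, off in enumerate(sorted(valid_targets))}

def virtualize_jump(raw_target, jump_table):
    """Translate a raw jump target; anything not in the table is
    rejected at translation time, off-chain."""
    if raw_target not in jump_table:
        raise ValueError(f"jump to {raw_target:#x} rejected")
    return jump_table[raw_target]

table = build_jump_table([0x00, 0x10, 0x24])  # basic-block entry points
print(virtualize_jump(0x10, table))  # -> 1 (a dense, bounds-checkable index)
```

A dense index like this is trivially bounds-checked at runtime, which is part of how complexity moves from the on-chain VM into offline tooling.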

Performance-wise we get very close to bare-metal performance[1]; it’s as fast as wasmtime (the state-of-the-art WASM VM), but it guarantees O(n) recompilation and recompiles programs into native code hundreds of times faster. To put this into perspective, it is faster to recompile a program from scratch, starting from raw PolkaVM bytecode, into native code than it is to cache the recompilation artifacts and look them up by their hash (in other words, recompiling the program is faster than calculating its hash), and this is without sacrificing runtime execution performance.

We mainly use RISC-V not because it’s a particularly good bytecode for a VM (it actually isn’t that great), but because it’s simple, well supported and it’s relatively easy to translate to something else, so we can get the best of both worlds - great software compatibility (you can use existing compilers and programming languages, e.g. the other day I ported Quake to PolkaVM[2][3]), but you also get the benefits of a custom, optimized bytecode (blazing fast compilation speed, near-native performance, simplicity and customizability).

Anyway, if you have any questions feel free to ask me anything.


Links:

[1] - https://github.com/paritytech/polkavm/blob/master/BENCHMARKS.md
[2] - https://github.com/paritytech/polkaports/tree/master/apps/quake
[3] - https://github.com/paritytech/polkavm/tree/master/examples/quake


Hi, I am one of the core developers of Cartesi’s VM. As someone actively working on a RISC-V VM for blockchain applications – the Cartesi Machine[1], which @GCdePaula kindly mentioned – I wanted to share some perspective supporting the exploration of RISC-V for Ethereum’s execution layer.

One of the biggest advantages I see in adopting RISC-V is the immediate access to mature tooling and ecosystems. Instead of building a completely bespoke environment, using RISC-V allows developers (and the core protocol) to tap into decades of work on compilers like GCC and LLVM, debuggers, libraries, and even full operating systems like Linux. This significantly lowers the barrier for developers and potentially reduces the risk associated with compiler bugs compared to newer, less battle-tested toolchains. It aligns well with the goal of potentially allowing contracts written in languages like Rust or even C++ to target Ethereum, compiled via standard backends. For those concerned about bugs in LLVM or GCC, security guarantees can be strengthened with formally verified compilers such as CompCert[2], which can target RISC-V today. Thinking bigger, more complex applications that need a full operating system environment could even run on top of formally verified RISC-V kernels such as seL4[3], hosted on VMs that cover the RISC-V privileged ISA specification (such as the one I work on).

The performance concerns raised by some are valid but addressable. From my experience, RISC-V doesn’t inherently sacrifice execution performance when implemented properly. While u256 operations naturally decompose into multiple instructions, in practice, in a well-optimized RISC-V VM, the cost of doing so should not impact performance much in most situations. Furthermore, optimization techniques at the VM level could significantly mitigate these costs, as the RISC-V ISA is extensible enough to add custom blockchain-specific extensions that optimize common cryptographic operations (e.g., Keccak256).
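The decomposition mentioned above can be sketched directly: a single 256-bit EVM ADD becomes four 64-bit limb additions with carry propagation on a 64-bit target. The helper names and little-endian limb order are assumptions for the example, not any VM’s actual layout.

```python
# Sketch: 256-bit EVM-style ADD implemented as four 64-bit
# add-with-carry steps, the way a 64-bit RISC-V core would do it.

MASK64 = (1 << 64) - 1

def u256_to_limbs(x: int) -> list:
    """Split a u256 into four 64-bit limbs, least significant first."""
    return [(x >> (64 * i)) & MASK64 for i in range(4)]

def limbs_to_u256(limbs) -> int:
    return sum(limb << (64 * i) for i, limb in enumerate(limbs))

def u256_add(a: int, b: int) -> int:
    """EVM-style ADD: wraps mod 2^256, one add-with-carry per limb."""
    out, carry = [], 0
    for x, y in zip(u256_to_limbs(a), u256_to_limbs(b)):
        s = x + y + carry
        out.append(s & MASK64)
        carry = s >> 64
    # the final carry is discarded: arithmetic is mod 2^256, like the EVM
    return limbs_to_u256(out)

assert u256_add(2**256 - 1, 1) == 0     # wraps around, like EVM ADD
assert u256_add(2**64 - 1, 1) == 2**64  # carry crosses a limb boundary
```

Four dependent adds per word-sized operation is the baseline overhead; it is exactly the kind of hot sequence a custom instruction or fused VM opcode could collapse.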

I think basing the future execution layer on a standardized, open, and well-supported ISA like RISC-V provides a solid foundation. It offers a path towards leveraging existing software ecosystems, potentially simplifying the developer experience, and benefiting from future hardware advancements in the RISC-V space.

While the road is complex, I believe the potential benefits for scalability, tooling maturity, and long-term maintainability make RISC-V a direction very much worth pursuing for the future of blockchain execution environments. Many existing RISC-V VMs in the blockchain space today demonstrate that robust, production-ready RISC-V implementations can exist. Specifically, I think the Cartesi Machine showcases the power of leveraging a standard, open ISA. It’s a stable, high-performance RISC-V emulator implementing the standard RV64GC ISA, capable of running the entire Linux software stack and unmodified RV64GC ELF binaries. Crucially, it’s fully deterministic, right down to floating-point operations. For those curious about what it’s capable of running, I recommend experimenting with my WebCM[4] experiment, a serverless terminal that runs a virtual Linux directly in the browser by emulating a RISC-V machine, powered by the Cartesi Machine emulator compiled to WebAssembly.

Now, the L1 proposal focuses on ZK proofs, whereas Cartesi currently enables on-chain verification via interactive fraud proofs, leveraging deterministic execution and state Merkle proofs. While the verification mechanism differs, Cartesi confirms that building a verifiable and deterministic execution environment on top of RISC-V is viable and worthwhile.
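The interactive fraud-proof idea referenced above can be sketched in a few lines: two parties who disagree on the final state bisect the execution trace until they isolate the first step where they diverge, and only that single step must be re-executed on-chain. The "trace as a list of states" model and the function name are simplifications for illustration, not Cartesi’s actual protocol.

```python
# Toy bisection ("dispute") game: binary-search the first index where
# two equal-length execution traces differ. Real systems compare
# Merkle roots of machine state rather than raw states.

def bisect_divergence(trace_a, trace_b):
    assert trace_a[0] == trace_b[0], "parties must agree on the initial state"
    lo, hi = 0, len(trace_a) - 1  # invariant: agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    return hi  # the single disputed step is lo -> hi

honest  = [0, 1, 2, 3, 4, 5]
cheater = [0, 1, 2, 7, 8, 9]   # diverges at step 3
print(bisect_divergence(honest, cheater))  # -> 3
```

Because the search is logarithmic in trace length, the on-chain cost is tiny even for billions of steps, which is what makes a full RV64GC machine verifiable this way.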

Of course, integrating RISC-V directly into L1 with ZK proving presents unique and significant challenges, particularly around gas metering, defining the precise syscalls for state interaction, and optimizing ZK circuits for RISC-V instructions. Performance in the specific context of ZK proving also needs deep investigation. Fortunately, there are many RISC-V zkVM projects already doing research and development on those points.

Regarding implementation strategy, I believe serious consideration should be given to the “radical approach” of defining a protocol that enshrines the concept of a virtual machine interpreter compiled down to RISC-V. This approach would create a path where Ethereum’s core RISC-V VM could remain minimal and simple, while still being flexible enough to accommodate different VM interpreters beyond the EVM, giving developers more freedom in their virtual machine development.

In short, I believe leveraging a standard like RISC-V offers tremendous advantages in tooling, developer familiarity, flexibility, and even the potential for long-term hardware acceleration. My work experience with the Cartesi Machine reinforces the idea that RISC-V is a powerful and viable foundation for the next generation of verifiable blockchain computation, and it’s exciting to see it being seriously considered for Ethereum’s core execution layer.

[1] - https://github.com/cartesi/machine-emulator
[2] - https://compcert.org
[3] - https://sel4.systems
[4] - https://edubart.github.io/webcm


Hi, can Yul be a good choice for a zk-friendly ISA?

It makes no assumptions about how variables are allocated. They could be stack-based (as in the EVM), register-based (as in RISC-V), or even memory-based (as in the Valida zkVM). The execution layer could provide a (proven) compilation pass from Yul to its assembly language.

In addition, it is relatively standard, being the intermediate language of the Solidity compiler.

I fully agree with this. We need a great layer 1 that is capable of handling ZK proofs and advanced features. There are many here that are malicious and greedy; they act in their own self-interest because they work for L2s. In fact, some of my posts have been hidden “due to community flags” - that’s fine, I know very well that many here are snobbish intellectuals that don’t want to hear about the struggles common people have understanding all the malarkey that is L2s, bridging, etc. ETH’s price is languishing because of you math snobs. I teach ordinary people for free! I show them how to use DeFi, and 100% of them have told me they’d rather just use layer 1 and pay whatever fees arise so long as they can avoid the banks.

Thanks @vbuterin for this. I finally understand the importance of RISC-V, and we the masses stand for this! The intellectual snobbery in this forum, as well as at ETH conferences, is totally unacceptable. Be humble! Respect the talents of others. We may not be math whizzes, but then who’s going to fix your car? Cut your hair? Operate on your heart? I don’t want a math professor as my doctor, thank you very much!!

I’m working on a longer reply, but have a question @vbuterin : I see how this could potentially help goal 3 (ZK-EVM proving capabilities), but how does it address goal 2 (Desire to keep block production a competitive market)?


Fantastic points here, Adam! I think your comments overall make a lot of sense; understanding what we’re trying to optimize for primarily helps us understand which trade-offs we want to make, and I agree about the tension where optimizing for L1 cannibalizes gains from L2 :thinking:

Thinking about this more, I’m wondering: how difficult would it be to maintain backwards compatibility between EVM and RISC-V contracts at scale?

Also, could this be opt-in at first, with the RISC-V VM gradually becoming dominant as tooling matures?

Not sure what the priorities are. Notice that you really want all the cryptography to move to 32-bit PQ-secure primitives, like fields over M31 or BabyBear. Going down the KZG+ECC route will lead to things that are quantum-susceptible and ~100x slower than STARK-friendly crypto.

The #1 priority for getting more proving scale is to embrace 32-bit logic and 32-bit primes, like M31 and BabyBear. If you do this, you’ll get something that is likely compatible with the fastest STARK proof generation (in addition to being PQ-secure).
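To illustrate why these 32-bit primes are attractive: reduction modulo M31 = 2^31 - 1 needs only shifts, masks, and adds (no division), and BabyBear = 2^31 - 2^27 + 1 likewise fits in a 32-bit register. The constants are standard; the helper names are mine.

```python
# Fast multiplication mod the Mersenne prime M31, using 2^31 ≡ 1
# (mod M31): the high bits of a product are simply folded back down.

M31 = (1 << 31) - 1                   # 2147483647, a Mersenne prime
BABYBEAR = (1 << 31) - (1 << 27) + 1  # 2013265921 = 15 * 2^27 + 1

def mul_mod_m31(a: int, b: int) -> int:
    x = a * b                   # up to ~62 bits
    x = (x & M31) + (x >> 31)   # first fold: now < 2^32
    x = (x & M31) + (x >> 31)   # second fold: now <= M31 + 2
    return x - M31 if x >= M31 else x

assert mul_mod_m31(12345, 67890) == (12345 * 67890) % M31
print(mul_mod_m31(M31 - 1, M31 - 1))  # (-1) * (-1) ≡ 1 (mod M31) -> 1
```

Every step maps to one or two 32/64-bit ALU instructions, which is why such fields are both cheap on commodity CPUs and friendly to small-field STARK provers.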

Hi, I’m the maintainer of the previously mentioned revive Solidity compiler. You are spot on. This is exactly the reason for a lot of the unnecessary overhead that contracts written in Solidity, or any other language compiled to the EVM, are exposed to.

Adding to that, the big endian nature of the EVM requiring a lot of byte swaps on those u256 values makes things even worse. A big endian VM with 256 bit word size as the only type is by nature unable to compete with lets say PolkaVM or SVM (n.b. a more efficient bytecode would generally also imply more efficient execution proving of said bytecode). I really want to stress your point here.