As far as I can understand the first post, the idea is that RISC-V smart contracts can be executed much faster than EVM smart contracts run by an EVM interpreter that is itself compiled to RISC-V code. Therefore, we can hope for much faster ZK proving of such smart contracts too, when using a RISC-V-based zkVM.
In practice, as argued above (e.g. in the post), the actual workload involves cryptography and storage access (which are not present in the Fibonacci example). So relying on RISC-V smart contracts alone is not enough to reach 50x faster proving times. Moreover, it can become a serious obstacle.
OK. I thought the claim was that RISC-V has already made those gains, and that was a reason to replace the EVM with it.

As far as I can understand the first post, the idea is that RISC-V smart contracts can be executed much faster than EVM smart contracts run by an EVM interpreter that is itself compiled to RISC-V code. Therefore, we can hope for much faster ZK proving of such smart contracts too, when using a RISC-V-based zkVM.
Right. And I'm saying that of course running an interpreter is going to be slower than compiling anything to RISC-V. So we should compile EVM code to RISC-V. (Or whatever language is native to the proof system.)

I take it that you do not use RISC-V as your front end? That is, you can still target other back ends if you want to?
Not sure if I understand the question (what do you mean by "target other back ends"?), but in general our pipeline looks like this:
- We use a normal, standard compiler (like rustc, clang, etc.) to build a normal, standard RISC-V ELF file. (Alternatively, we also have a Solidity compiler which uses LLVM to compile bog standard Solidity programs into a RISC-V ELF file).
- We take that ELF file and relink it into our custom RISC-V-like bytecode and custom container. This happens offline (on the dev's machine), and is the part where we apply extra optimizations to the program (e.g. instruction fusion), simplify the semantics, and in general make it appropriate for on-chain execution.
- Then the on-chain VM (which can stay simple because we do most of the work offline when we translate the raw RISC-V into our own bytecode) recompiles the bytecode into the native code and runs it.
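To make the offline relinking step above a bit more concrete, here is a toy model of instruction fusion: the relinker pattern-matches adjacent RISC-V-style instructions and replaces the pair with a single fused bytecode op. The tuple encoding and the fused `load_imm32` op are purely illustrative assumptions, not PolkaVM's actual instruction set or container format.

```python
# Toy model of offline instruction fusion. The instruction encoding
# (tuples) and the fused "load_imm32" op are hypothetical.

def fuse(program):
    out, i = [], 0
    while i < len(program):
        cur = program[i]
        nxt = program[i + 1] if i + 1 < len(program) else None
        # `lui rd, hi` followed by `addi rd, rd, lo` builds a 32-bit
        # constant; fuse the pair into one load-immediate.
        # (Simplified: real RISC-V addi sign-extends its 12-bit immediate.)
        if (nxt is not None and cur[0] == "lui" and nxt[0] == "addi"
                and cur[1] == nxt[1] == nxt[2]):
            out.append(("load_imm32", cur[1], (cur[2] << 12) + nxt[3]))
            i += 2
        else:
            out.append(cur)
            i += 1
    return out

prog = [("lui", "a0", 0x12345), ("addi", "a0", "a0", 0x678), ("ret",)]
assert fuse(prog) == [("load_imm32", "a0", 0x12345678), ("ret",)]
```

The point of doing this offline is exactly as described: the on-chain VM never has to perform this pattern-matching itself, so it can stay simple.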
Technically our bytecode doesn't have to be recompiled into native code - it could also be interpreted, or it could even be executed to generate a ZK proof, although we haven't implemented a ZK backend yet. We might someday, and it would be a cool project (if someone wanted to work on a ZK backend for PolkaVM I would gladly merge that PR), but we don't really have any immediate use for it, since we can get much better and much cheaper scaling without ZK through other means. Compute-wise, the last time I benchmarked it I could execute the same computation from scratch something like ~20 times in the time it'd take to verify a single ZK proof of it, never mind actually generating the proof (which is orders of magnitude more expensive), so there's literally no point for us to go the ZK route.
Technically what Ethereum could do (but won't, due to political reasons) is:
- Take our PolkaVM and use it for L1 execution (it's a general-purpose VM and it's not really specific to any particular chain). This automatically gets you wide toolchain support (you can use almost any language, not just Solidity) and blazing fast near-native execution speed for free, and we have 30+ independent implementations of the VM itself in progress to make sure the spec is implementable independently.
- Write a ZK backend for it and do whatever ZK stuff you need to do. (It should be as fast or faster than current RISC-V ZK VMs.)
- Scale down on the ZK use for those parts of the system which now maybe donāt need to use ZK due to better efficiency of the base VM.
OK, I think I understand now, thanks. I think there would be technical issues here, not just political ones, but I won't try to dig into those now. The politics is so bad that after ten years we can't even add subroutine instructions to the EVM, so I don't expect to see the EVM change in my lifetime.

So we should compile EVM code to RISC-V. (Or whatever language is native to the proof system.)
Compilation is clearly beneficial for direct (CPU/GPU) execution.
For ZKP, it's trickier. SNARKs do not prove the program execution directly; rather, they prove knowledge of the program's execution trace (on some input).
Let's assume f(x) is our program, x is some input to the program, and w is the program execution trace (in some form). There is also some circuit C(x, w, y) to check the trace's validity, i.e. C(x, w, f(x)) should output true for valid traces.
A SNARK prover typically proves satisfiability of such a circuit (and that it knows the witness w), which is existentially quantified (due to the additional parameter w) and, in the case of proving program execution, uniform (inputs/traces of different lengths are served by the same circuit).
So, it's drastically different from direct execution. The difference is critical for performance, as you can transform evaluation (sub)problems into circuit-satisfiability (sub)problems, which might be much cheaper to prove (e.g. to prove a 1 / a operation one can provide some b and check that a * b == 1, while calculating the inverse directly would require proving many more intermediate steps).
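To make the 1 / a example concrete, here is a plain-Python illustration (not a real SNARK; the prime modulus P is an arbitrary choice for this sketch). Computing the inverse directly takes many extended-Euclid steps, each of which a circuit would have to constrain, whereas checking a prover-supplied witness b costs a single multiplication.

```python
# Illustration of "prove the check, not the computation".
# P is an arbitrary prime chosen for this sketch.
P = 2**255 - 19

def invert_directly(a: int) -> tuple[int, int]:
    """Extended Euclid: returns (inverse of a mod P, number of loop steps).
    Each loop iteration is an intermediate step a circuit would constrain."""
    r0, r1 = P, a % P
    s0, s1 = 0, 1
    steps = 0
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
        steps += 1
    return s0 % P, steps

def check_inverse(a: int, b: int) -> bool:
    """The circuit-side check: one multiplication and one comparison."""
    return (a * b) % P == 1

a = 123456789
b, steps = invert_directly(a)  # the prover computes b off-circuit
assert check_inverse(a, b)     # the circuit only constrains a * b == 1
assert steps > 1               # direct computation needed many steps
```

The prover still does the full computation, but only the cheap check has to be arithmetized.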
Moreover, since the execution trace is available to the prover, it can select a cheaper implementation option depending on the concrete trace content. E.g. a value can be u256 in general, but a particular occurrence may fit in a byte, so its properties/operations can be proved using fewer steps than for the full u256 value.
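As a sketch of why the byte-sized occurrence is cheaper, consider a common boolean-decomposition range check: constraining a value to n bits costs roughly n bit constraints plus one recomposition check. The constraint counts below are illustrative only; real proof systems use various range-check techniques with different costs.

```python
# Sketch of a boolean-decomposition range check: each bit gets a
# "booleanity" constraint b * (b - 1) == 0, plus one check that the
# bits recompose to the value. Costs here are illustrative.

def range_check_constraints(value: int, n_bits: int) -> int:
    bits = [(value >> i) & 1 for i in range(n_bits)]
    # recomposition constraint: the bits must sum back to the value
    assert sum(b << i for i, b in enumerate(bits)) == value, "out of range"
    for b in bits:
        assert b * (b - 1) == 0  # booleanity constraint per bit
    return len(bits) + 1  # n bit constraints + 1 recomposition check

assert range_check_constraints(200, 8) == 9     # occurrence fits a byte
assert range_check_constraints(200, 256) == 257 # generic u256 treatment
```

The same value costs ~9 constraints when the prover knows it fits in a byte, versus ~257 when treated as a generic u256.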
So, the ability to inspect the trace can largely compensate for the absence of compilation. Static analyses/optimizations can still benefit ZKP (e.g. a call site can be recognized statically as monomorphic, so one can skip proving branch decisions). However, such optimizations are not that trivial to implement in practice:
- one needs to withstand "JIT bomb" attacks
- one should somehow prove that such optimizations preserve semantics
With that said, I generally agree that compilation is the right direction. However, in the case of ZK proving of EVM code execution, it's not really clear whether compilation is worth implementing given the associated costs. EVM+EOF would definitely help here (e.g. with static analysis and separate validation).
Compilation is of course useful in its own right. What I'm trying to do here, though, is avoid the apparently high overheads of proving a program on an EVM interpreter written in RISC-V, rather than proving a program written in RISC-V. Or even better, directly prove EVM programs.

Or even better, directly prove EVM programs.
This is the best option IMO.
It's strange that Vitalik didn't even consider it.