Please don’t.
Hi.
This is not a good plan; it is largely based on wrong priors about proof systems and their performance.
Checking the assumptions
As far as I understand the argumentation, the main arguments are (1) scalability and (2) maintainability.
First, I would like to address maintainability.
Realistically, all RISC-V zk-VMs use precompiles to perform the computationally intense operations. The list of SP1 precompiles can be found here: docs.succinct.xyz/docs/sp1/writing-programs/precompiles ; as you can see, it includes pretty much every relevant “computational” opcode from the EVM.
Therefore, any change to the cryptographic primitives of the base layer will require writing and auditing circuits for these precompiles. This is a severe restriction.
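For illustration, here is a minimal guest-side sketch (assuming the tiny-keccak 2.x API; the “patched crate” mechanism is how I understand SP1-style zkVMs expose precompiles, so treat the comments as my assumption rather than a spec):

```rust
// Sketch of a guest-side hash call, assuming the tiny-keccak 2.x API.
use tiny_keccak::{Hasher, Keccak};

/// Computes keccak256 of `data`. In an SP1-style zkVM this exact code can run
/// in two very different ways: as plain RISC-V instructions (slow to prove),
/// or -- if the crate is swapped for a patched version -- as a call into a
/// keccak precompile circuit. Changing the base-layer hash therefore means a
/// new precompile circuit to write and audit, not just new Rust code.
fn keccak256(data: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak::v256();
    let mut output = [0u8; 32];
    hasher.update(data);
    hasher.finalize(&mut output);
    output
}
```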
It is correct, indeed, that maintenance of the “out-of-EVM” part of the execution client plausibly becomes relatively easy if performance is good enough. I am not exactly sure the performance actually is good enough, but this part is low-confidence:
- Indeed, state tree computation can be done much faster by using a prover-friendly precompile, such as Poseidon (see the sketch after this list).
- It is less clear that you can deal with deserialization in an elegant and maintainable way.
- Also, I think there are some nasty details such as gas metering and various checks; they probably show up under “block evaluation time”, but realistically they belong to the “out-of-EVM” part (and these are mostly the parts subject to maintenance pressure).
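To make the first point concrete, here is a minimal sketch of the kind of state-tree hashing I have in mind; `Fr` and `poseidon2` are placeholders for a real field element type and an arithmetization-friendly permutation, not an actual library API:

```rust
// Sketch only: `Fr` and `poseidon2` are stand-ins, not a concrete library.
type Fr = u64; // placeholder for a field element of the proof system

fn poseidon2(left: Fr, right: Fr) -> Fr {
    // Placeholder mixing function; a real Poseidon is an arithmetized
    // permutation that the SNARK can verify cheaply.
    left.wrapping_mul(31).wrapping_add(right.rotate_left(7))
}

/// Root of a binary Merkle tree over already-hashed leaves. The point: inside
/// a zkVM this is cheap only if the hash is either a precompile or natively
/// proof-friendly, unlike keccak executed as raw RISC-V instructions.
fn merkle_root(mut level: Vec<Fr>) -> Fr {
    assert!(!level.is_empty() && level.len().is_power_of_two());
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| poseidon2(pair[0], pair[1]))
            .collect();
    }
    level[0]
}
```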
Second, scalability
To reiterate, there is no way RISC-V works without precompiles for the EVM payload. It does not.
So the statement
In practice, I expect that the remaining prover time will become dominated by what today are precompiles.
is, while technically correct, unnecessarily optimistic. It assumes there won’t be precompiles. In fact, in this future world there will be exactly the same set of precompiles as the computation-heavy opcodes we have in the EVM today (signatures, hashes, possibly large modular operations).
To address the Fibonacci example: it is hard to judge without digging into extremely low-level details, but large parts of the claimed advantage come from:
- Interpretation vs. execution overhead.
- Loop unrolling (decreases control flow on the RISC-V side; I am not sure whether Solidity does this, but even if it does, individual opcodes still perform a lot of control flow / memory accesses due to interpretation overhead).
- Using a smaller data type.
What I want to point out here is that to get advantages 1 and 2, you must kill the interpretation overhead. That seems aligned with the RISC-V idea, but this is not the RISC-V we currently speak of; rather, it is something resembling (?) RISC-V with additional capabilities (more on this below).
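A toy illustration of where the overhead sits (this is not EVM semantics, just the shape of the cost): an interpreted stack machine pays for gas metering, dispatch and stack memory traffic on every opcode, while the compiled version is a handful of register operations.

```rust
// Toy stack-machine interpreter vs. the same computation compiled natively.
#[derive(Clone, Copy)]
enum Op {
    Push(u64),
    Add,
    Stop,
}

fn interpret(code: &[Op]) -> u64 {
    let mut stack: Vec<u64> = Vec::new();
    let mut gas: u64 = 1_000_000;
    let mut pc = 0usize;
    loop {
        // Every single opcode pays for gas metering, dispatch and stack
        // memory traffic -- this is the interpretation overhead.
        gas = gas.checked_sub(1).expect("out of gas");
        match code[pc] {
            Op::Push(x) => stack.push(x),
            Op::Add => {
                let b = stack.pop().expect("stack underflow");
                let a = stack.pop().expect("stack underflow");
                stack.push(a.wrapping_add(b));
            }
            Op::Stop => return stack.pop().unwrap_or(0),
        }
        pc += 1;
    }
}

// The same addition compiled directly: no dispatch, no explicit stack, values
// live in registers, and a smaller data type (u64 instead of 256-bit words).
fn add_native(a: u64, b: u64) -> u64 {
    a.wrapping_add(b)
}

fn main() {
    let program = [Op::Push(2), Op::Push(40), Op::Add, Op::Stop];
    assert_eq!(interpret(&program), add_native(2, 40));
}
```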
So, there is a bit of a problem
1. To get plausible advantages in maintainability, you have to have RISC-V (with precompiles) that your EVM compiles to. Which is, basically, the current status.
2. To get plausible advantages in scalability, you need an entirely different beast: something (plausibly resembling RISC-V) that has the concept of a “contract”, is aware of the various restrictions of the Ethereum runtime, and is able to run contracts as executables, without interpretation overhead.
I am assuming now that you mean 2 (the rest of the post seems to suggest so). I urge you to realize that everything outside of this environment will still be written in whatever RISC-V zkVMs are currently written in, with the corresponding implications for maintenance.
Some caveats
- It is possible to compile the bytecode from high-level EVM opcodes. The compiler is then in charge of ensuring that the resulting program maintains invariants such as the absence of stack overflow; I would like to see this demonstrated on the normal EVM first. A SNARK of correct compilation can then be supplied together with the contract-deployment instruction.
- It is possible to construct a formal proof that certain invariants are preserved. This approach (instead of virtualization) is used in some browser contexts, as far as I remember. By making a SNARK of this formal proof, you can achieve a similar result.
- The simplest option is biting the bullet and…
Constructing a minimal “blockchain-y” MMU
I think this is probably implicit in your post, but let me clarify once again. What you actually need, if you want to get rid of the virtualization overhead, is execution of compiled code. That means you need, at the very least, to somehow prevent the contract (which is now an executable!) from writing to the kernel’s (i.e. the off-EVM implementation’s) memory.
So, naturally, we need some kind of MMU. Arguably, the page-based approach used in normal computers is largely unnecessary, as the “physical” memory space is almost unlimited. This MMU should ideally be minimal (as it lives on the same level of abstraction as the architecture itself), though possibly some features (say, atomicity of transactions) could be moved to this level.
The provable EVM then becomes a kernel program running in this architecture.
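A minimal sketch of what I mean by such an MMU, under my own assumptions (flat address space split into a kernel region and per-contract regions, a single ownership check instead of page tables; all names here are illustrative):

```rust
// Illustrative "blockchain-y MMU": no pages, just region ownership checks.
#[derive(Clone, Copy, PartialEq, Eq)]
struct ContractId(u32);

struct Region {
    start: u64,
    len: u64,
    owner: Option<ContractId>, // None = kernel / off-EVM implementation
}

struct Mmu {
    regions: Vec<Region>,
}

impl Mmu {
    /// A store is allowed only inside a region owned by the executing
    /// contract. Kernel memory (owner == None) is never writable from
    /// contract code.
    fn check_store(&self, caller: ContractId, addr: u64) -> Result<(), &'static str> {
        for r in &self.regions {
            if addr >= r.start && addr < r.start + r.len {
                return match r.owner {
                    Some(owner) if owner == caller => Ok(()),
                    Some(_) => Err("write into another contract's memory"),
                    None => Err("write into kernel memory"),
                };
            }
        }
        Err("unmapped address")
    }
}
```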
RISC-V might not be the best choice for the task
Interestingly enough, under all these conditions it might turn out that the ISA actually optimal for this task is not RISC-V, but rather something similar to EOF-EVM.
The reason is that “small” opcodes in fact generate an extremely large number of RAM accesses, which are hard to prove using current methods.
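A rough illustration of where those accesses come from (the 32-bit limb layout is my assumption, not a statement about any particular zkVM): a single 256-bit addition, decomposed into RISC-V-sized words, already turns into a loop of loads, stores and carry propagation, all of which end up in the memory argument of the proof.

```rust
// One 256-bit EVM-style ADD expressed over 32-bit words (little-endian limbs).
fn add_256(a: &[u32; 8], b: &[u32; 8]) -> ([u32; 8], bool) {
    let mut out = [0u32; 8];
    let mut carry = 0u64;
    for i in 0..8 {
        // Each iteration is (at least) two loads, one store and carry logic
        // in the RISC-V trace -- versus a single wide ADD in an EVM-like ISA.
        let sum = a[i] as u64 + b[i] as u64 + carry;
        out[i] = sum as u32;
        carry = sum >> 32;
    }
    (out, carry != 0)
}
```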
Similarly, to minimize branching overhead, our recent paper Morgana (eprint/2025/065) shows how to prove code with static jumps (similar to EOF) with precompile-level performance.
My recommendation is, instead, to construct a proof-friendly architecture with a minimal MMU that allows running contracts as separate executables. I don’t think it should be RISC-V, but rather a separate ISA, ideally one aware of the limitations dictated by SNARK protocols. Even an ISA resembling some subset of EVM opcodes will likely be better (and, as we are aware, the precompiles will be with us whether we want them or not, so RISC-V doesn’t give any simplification here).