Would agree here with @levs57: virtualization can be an intermediate step, but it will not achieve the optimal scenario (not by far). Compilation to an ISA better suited for verifiable computation than RISC-V (à la TinyRAM) will likely be needed, given that RISC-V was never designed with verifiable computation in mind, together with a proof of correct compilation (this part may or may not be too hard, depending on the trust assumptions and how far up the compiler stack we want to prove); a rough sketch of what that statement looks like is below.
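To make the "proof of correct compilation" part a bit more concrete, here is a minimal sketch (Lean 4) of the shape such a statement would take. All of the names (`SrcProg`, `TgtProg`, `srcEval`, `tgtEval`, `compile`) are illustrative placeholders, not taken from any existing formalization:

```lean
-- Hypothetical sketch of a "correct compilation" statement; the names only
-- illustrate the shape of the claim, they are not from an existing project.

-- Semantic preservation: for any source program (e.g. EVM bytecode) that
-- terminates with some output, the compiled program (e.g. on a TinyRAM-like
-- ISA) terminates with the same output on the same input.
def CompilerCorrect
    {SrcProg TgtProg Input Output : Type}
    (srcEval : SrcProg → Input → Option Output)  -- source semantics
    (tgtEval : TgtProg → Input → Option Output)  -- target semantics
    (compile : SrcProg → TgtProg) : Prop :=
  ∀ (p : SrcProg) (x : Input) (y : Output),
    srcEval p x = some y → tgtEval (compile p) x = some y
```

How far up the compiler stack one proves this (from bytecode down to the ISA, or from Solidity/Yul downwards) is exactly the trust-assumption knob mentioned above.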
Even in the case of virtualization, one would require formal proofs that:

1. the virtualized RISC-V EVM code is correctly implemented (i.e., it follows the EVM specs);
2. the RISC-V circuits themselves correctly follow the RISC-V specs;
3. the RISC-V circuits, I/O, memory-checking mechanisms (for RAM and registers), and bootloading mechanisms (for program initialization) satisfy both completeness and soundness, so that no invalid proof will be accepted; and
4. the verifier itself is indeed sound, building on top of (3).
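For (3) and (4), the obligations are essentially completeness and soundness statements about the proof system as a whole. A rough sketch of their shape (again Lean 4, with `ProofSystem`, `spec`, `prove`, `verify` as hypothetical placeholders, and with cryptographic/knowledge soundness simplified to a non-computational statement):

```lean
-- Hypothetical sketch of obligations (3) and (4). `spec` stands for "this
-- witness is a valid execution (circuits + I/O + memory checking + bootloading)
-- for the claimed statement"; all names are placeholders only.
structure ProofSystem (Statement Witness Proof : Type) where
  spec   : Statement → Witness → Prop   -- what a valid execution means
  prove  : Statement → Witness → Proof  -- honest prover
  verify : Statement → Proof → Bool     -- the verifier from (4)

-- Completeness: honest proofs of true statements are always accepted.
def Complete {S W P : Type} (ps : ProofSystem S W P) : Prop :=
  ∀ (s : S) (w : W), ps.spec s w → ps.verify s (ps.prove s w) = true

-- Soundness (idealized): if the verifier accepts a proof, a valid execution
-- actually exists, i.e. no invalid proof is accepted.
def Sound {S W P : Type} (ps : ProofSystem S W P) : Prop :=
  ∀ (s : S) (pr : P), ps.verify s pr = true → ∃ w : W, ps.spec s w
```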
However, part of the challenge of building these formal proofs is that the circuits and proof systems in the current landscape are ever-changing, so before investing deeply in this effort one would want to fix a zkVM architecture; but by the time the effort is completed, that architecture may already be deemed outdated.