Long-term L1 execution layer proposal: replace the EVM with RISC-V

One thing that’s missing in the discussion so far is calling conventions and register handling.

On a physical CPU, you have a limited number of registers, so when a function calls another function, registers need to be saved and later restored.

The calling convention defines:

  • who saves what registers between the caller and the callee
  • what registers are used to pass parameters
  • what registers are used to pass results
  • how stack space is used (e.g. the concept of a red zone)

Programmers are strongly encouraged to write small functions, which means that whenever those functions are not inlined, you waste a lot of proving time on proving data movements.
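
To make that concrete, here is a minimal RV64 sketch of a small non-leaf function following the standard RISC-V calling convention (the names foo/bar and the exact frame layout are hypothetical, chosen only for illustration):

# foo(a, b) returns bar(a) + b
foo:
  addi sp, sp, -16     # grow the stack frame
  sd   ra, 8(sp)       # save the return address before calling bar
  sd   s0, 0(sp)       # save a callee-saved register we are about to use
  mv   s0, a1          # keep b alive across the call (a1 is caller-saved)
  call bar             # a0 already holds a; the result comes back in a0
  add  a0, a0, s0      # a0 = bar(a) + b
  ld   s0, 0(sp)       # restore the callee-saved register
  ld   ra, 8(sp)       # restore the return address
  addi sp, sp, 16      # release the stack frame
  ret

Only the add does useful work: five of the other instructions are pure data movement imposed by the convention (two saves, two restores, one register copy), plus two stack-pointer adjustments, and a ZK prover has to prove every single one of them.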

An ISA optimized for ZK would instead optimize for reducing those data movements. They make sense in the physical world because local memory (a register) is roughly 15x to 150x faster than remote memory (an L1 cache access takes on the order of 15 cycles, L2 around 100 cycles, RAM around 1000 cycles), but that memory hierarchy is irrelevant for ZK proving.

A function usually has between 4 and 6 inputs and outputs, so naively following physical-CPU calling conventions requires on the order of 2 × (4 to 6) proofs of data movement per function call.
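
As a rough illustration of that count (a hypothetical call site, with five arguments that currently live on the stack at made-up offsets), the caller side alone looks like this:

# res = f(x1, x2, x3, x4, x5)
  ld   a0, 0(sp)       # marshal argument 1 into its ABI register
  ld   a1, 8(sp)       # argument 2
  ld   a2, 16(sp)      # argument 3
  ld   a3, 24(sp)      # argument 4
  ld   a4, 32(sp)      # argument 5
  call f
  sd   a0, 40(sp)      # move the result back out of the return register

That is already six proven data movements before the body of f even starts, and f itself typically spills and restores a similar number on the callee side.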

See Latency Numbers Every Programmer Should Know · GitHub

A closely related concept is the addressing mode. Some architectures only allow operations to work on registers and require a LOAD/STORE before and after, but what if you could replace:

LOAD RegA <-- [addr 0x0001]
LOAD RegB <-- [addr 0x0002]
ADD   RegC <-- RegA, RegB
STORE [addr 0x0003] <-- RegC

by

ADD [addr 0x0003] <-- [addr 0x0001], [addr 0x0002] 

We get a 4x smaller trace for this sequence, and therefore a roughly 4x faster prover for it.
