EIP-7007: zkML AIGC-NFTs, an ERC-721 extension interface for zkML based AIGC-NFTs

I would propose that a separate EIP for the verifier be implemented too. This EIP actually hides two standards:

  1. The interface for a zkML ERC-721 contract
  2. The interface for the verifier contract

Making a generic verifier contract specification might be valuable as similar computations might be used in other applications beyond the ERC721 extension.
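To make that concrete, a stand-alone verifier interface might look roughly like the following sketch (the interface and parameter names are my own illustration, not taken from the EIP):

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.18;

/// Hypothetical generic zkML verifier interface, decoupled from ERC-721.
/// Any application (not just AIGC-NFTs) could call it to check an
/// inference proof against a committed (input, output) pair.
interface IZKMLVerifier {
    /// @param input  ABI-encoded model input (e.g. a prompt)
    /// @param output ABI-encoded model output (e.g. image data or its hash)
    /// @param proof  opaque proof bytes for the chosen proving scheme
    /// @return valid true iff the proof attests that output = model(input)
    function verifyInference(
        bytes calldata input,
        bytes calldata output,
        bytes calldata proof
    ) external view returns (bool valid);
}
```

Because the interface only deals in opaque bytes, the same specification could back image generation, audio, or any other model, with each deployment bound to one circuit and proving scheme.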

Just my two cents. I think we have three major components here:

  1. Verifier that gates minting NFTs
  2. AI model inference proof via textual data
  3. ERC-721 extension which creates a hard dependency on the parent model that produced the child (the result of inference)

I think 1 is general zk-proof verifier code, so a single function serves its purpose, and 3 is the point of this EIP. However, 2 can be very generic: it could cover all model inferences, whereas AIGC-NFT covers the text-to-image generation pattern, which is a subset of generic model inference. I feel we should separate the model-inference verifier into a new EIP, and this EIP should use a subset of that generic verifier contract (in this case, text-to-image generation).
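Under that split, the ERC-721 extension would depend on the verifier only through its interface. A rough sketch of the wiring, assuming a generic verifier exposing a `verifyInference(input, output, proof)` function (all contract and function names here are illustrative, not the EIP's actual API):

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.18;

import {ERC721} from "@openzeppelin/contracts/token/ERC721/ERC721.sol";

/// Hypothetical generic inference verifier (would live in its own EIP).
interface IZKMLVerifier {
    function verifyInference(
        bytes calldata input,
        bytes calldata output,
        bytes calldata proof
    ) external view returns (bool);
}

/// Illustrative AIGC-NFT contract: minting is gated by a text-to-image
/// verifier, which is just one deployment of the generic interface.
contract AIGC721 is ERC721 {
    IZKMLVerifier public immutable verifier;

    constructor(address verifier_) ERC721("AIGC", "AIGC") {
        verifier = IZKMLVerifier(verifier_);
    }

    /// Mint only if the proof shows `image` was generated from `prompt`
    /// by the committed model. Deriving the tokenId from the prompt hash
    /// is one illustrative choice, not part of any standard.
    function mint(
        bytes calldata prompt,
        bytes calldata image,
        bytes calldata proof
    ) external {
        require(verifier.verifyInference(prompt, image, proof), "invalid proof");
        _mint(msg.sender, uint256(keccak256(prompt)));
    }
}
```

With this shape, swapping text-to-image for any other inference pattern only means pointing the NFT contract at a different verifier deployment.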

I am not an expert in Solidity best practices or ERC standards; I am thinking purely in terms of separation of concerns. I'd be happy to hear your thoughts.

Can somebody please let me know whether the zkML prover is provided as part of the reference implementation, or whether it needs to be implemented by the AIGC-NFT project issuer?

The prover is specific to the model (circuit) and the proving scheme. Thus the reference implementation only includes a mock prover as an example.
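The same idea applies on-chain: until a real circuit verifier is wired in, a mock that accepts any proof is enough to exercise the minting flow end to end. A sketch of such a mock (my own illustration, not the reference implementation's code):

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.18;

/// Mock verifier: accepts every (input, output, proof) triple.
/// A real deployment would replace this with the verifier generated
/// for the specific model circuit and proving scheme.
contract MockVerifier {
    function verifyInference(
        bytes calldata, // input (e.g. prompt)
        bytes calldata, // output (e.g. image data)
        bytes calldata  // proof
    ) external pure returns (bool) {
        return true; // WARNING: for testing only, performs no checks
    }
}
```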


How are we ensuring that a prompt will give the same output? In my experience, almost 90% of the time the same prompt gives different outputs. So will every prompt be considered unique, with an NFT generated for it?

By controlling the random seed and running the computation deterministically, the same prompt should always result in the same output.
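One way to see this on-chain is to treat the seed as part of the committed input, so the statement being proved is a fully deterministic function. A hypothetical signature (my own sketch, not the EIP's interface):

```solidity
// SPDX-License-Identifier: CC0-1.0
pragma solidity ^0.8.18;

/// Hypothetical verifier where the seed is committed alongside the
/// prompt: the circuit proves output = model(prompt, seed), which is
/// deterministic, so re-running with the same (prompt, seed) pair
/// must reproduce the same output.
interface ISeededVerifier {
    function verify(
        bytes calldata prompt,
        uint256 seed,
        bytes calldata output,
        bytes calldata proof
    ) external view returns (bool);
}
```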


How should we implement this for pre-trained models?