Model Submissions for Ethereum Deep Funding

1. Problem

Translating contributions to the Ethereum ecosystem into weights is a hard problem. We framed it as an impact quantification problem: assessing the value each repo brings to Ethereum.

2. Our Approach

We rephrased the question as: “What is the dollar value that each repo generates for Ethereum?” We took inspiration from the Relentless Monetization evaluation technique by the Robin Hood Foundation, which is used to measure benefit-cost ratios for NGOs; projects like VoiceDeck have also used it to measure the impact of journalism. We then used LLMs (GPT-5 and Gemini) to help generate a gross and net benefit calculation for each of the 45 repos.

3. Key Learnings

  1. Relentless Monetization accounts for cost in order to produce a benefit-cost ratio (“for every 1 dollar spent, Y dollars’ worth of value”). Since the cost to develop each repo was unavailable, we needed a way to estimate the overall benefit alone.
  2. We ended up using POML to structure the system prompt for the LLMs. This made the responses far more consistent, which was particularly important for the subsequent benefit report generation steps. You can view the system prompt here: [GitHub - dipanshuhappy/impact-quantifier-system-prompt] or use the GPT plugin here: [ChatGPT - Impact Quantifier].
  3. Gemini 2.5 Pro took a more conservative approach, while GPT-5’s approach was reasonable but less conservative. To view the prompt responses for the repositories, you can look at:
  4. Gemini took a more holistic and broad approach to assessing benefit than GPT-5.

4. Solution

The LLM can access the internet to gather relevant context about the repository (e.g., its GitHub link) and then runs the following process:

Step 1: Define Outcomes

Clear outcomes of the project are defined. This includes listing both tangible and intangible outcomes, drawing especially on each repo’s README.

For each outcome, the number of beneficiaries reached and the benefit per beneficiary (in dollar value) is defined or estimated by the LLM.
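As a minimal sketch, the outcome definitions from this step could be represented as a small data structure. The schema and all figures below are hypothetical, chosen only to illustrate the shape of the data the LLM produces:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    # Hypothetical schema -- the actual prompt defines outcomes in prose.
    name: str
    beneficiaries: int              # number of beneficiaries reached
    benefit_per_beneficiary: float  # estimated dollar value per beneficiary

# Illustrative outcomes for a hypothetical developer-tooling repo
outcomes = [
    Outcome("faster dApp development", beneficiaries=10_000, benefit_per_beneficiary=50.0),
    Outcome("fewer integration bugs", beneficiaries=2_000, benefit_per_beneficiary=200.0),
]
```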

Step 2: Measurement of Causal Effect

This step attempts to quantify what percentage of these outcomes can be fairly attributed to the repository exclusively, as opposed to other factors or network effects.

Note: This technique emphasizes referencing related studies, papers, and international reports and citing them as proxy sources for quantifying attribution and benefits. Providing clear evidence of data and numbers at every step is the most critical aspect of this method.
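In code, the attribution adjustment is just a fractional scaling of an outcome's value. The numbers below are hypothetical; in a real run, the attribution share would be justified by the cited proxy sources:

```python
# Only a fraction of each outcome's value is credited to the repo exclusively.
outcome_value = 10_000 * 50.0   # beneficiaries x benefit per beneficiary ($)
attribution_share = 0.30        # 30% fairly attributable to this repo alone
attributed_benefit = outcome_value * attribution_share
```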

Step 3: Calculating Gross Benefit

For each of the listed outcomes, the benefit per outcome is calculated by:

  • Outcome 1 = (Number of Beneficiaries) × (Benefit per Beneficiary)
  • Outcome 2 = (Number of Beneficiaries) × (Benefit per Beneficiary)
  • Outcome 3 = (Number of Beneficiaries) × (Benefit per Beneficiary)
  • Outcome N = (Number of Beneficiaries) × (Benefit per Beneficiary)

The Gross Benefit is the summation of all benefits thus calculated per outcome.

Gross Benefit = Sum(Benefit per Outcome_i) for i=1 to N
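The gross benefit calculation above can be sketched as a simple summation. All figures are illustrative and not taken from any actual repo evaluation:

```python
# Gross Benefit = sum over outcomes of (beneficiaries x benefit per beneficiary)
outcomes = [
    (10_000, 50.0),    # (number of beneficiaries, benefit per beneficiary in $)
    (2_000, 200.0),
    (500, 1_000.0),
]
gross_benefit = sum(n * b for n, b in outcomes)
```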

Step 4: Counterfactual Analysis

This calculates the net incremental benefit of the project by adjusting for the loss or gain in benefits if the repository had not existed. In some cases the counterfactual was clear: for a repo like viem, for example, developers would have used ethers.js instead.

Net Benefit = Gross Benefit - Counterfactual
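The counterfactual adjustment is a single subtraction. The figures here are illustrative, continuing the viem/ethers.js example from the source:

```python
# Net Benefit = Gross Benefit - Counterfactual
# If a repo like viem had not existed, developers could have used ethers.js,
# so only the incremental value is credited (all figures hypothetical).
gross_benefit = 1_400_000.0
counterfactual = 900_000.0   # value an alternative would have delivered anyway
net_benefit = gross_benefit - counterfactual
```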

Step 5: Discounted Future Benefits

Finally, the net benefit is discounted to account for the time value of money over the years since the repository’s creation.

Discounted Net Benefit = Net Benefit / (1 + r)^t, where r is the discount rate and t is the number of years since the repository’s creation.

The discounted net benefit (i.e., the net present value of the benefit) thus calculated is taken as the dollar-value outcome generated per repository.
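A worked instance of the discounting formula, with an assumed discount rate and time horizon (both hypothetical; the source does not state which values were used):

```python
# Discounted Net Benefit = Net Benefit / (1 + r)^t
net_benefit = 500_000.0
r = 0.05   # assumed annual discount rate
t = 3      # years since the repository's creation
discounted_net_benefit = net_benefit / (1 + r) ** t
```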

After obtaining the benefit values for every repository, the next step was to normalize them into weights summing to 1. Submitting the results from Gemini 2.5 Pro and GPT-5 individually yielded an error of ~10. Simply averaging the two sets of results performed 35% better than either individual model, with an error of only 6.8.
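The normalization and averaging steps can be sketched as follows. The repo names and benefit figures are hypothetical placeholders:

```python
# Normalize each model's per-repo benefit values into weights summing to 1,
# then average the two weight vectors (all benefit figures hypothetical).
gemini_benefits = {"repo_a": 2_000_000.0, "repo_b": 1_000_000.0, "repo_c": 1_000_000.0}
gpt5_benefits   = {"repo_a": 3_000_000.0, "repo_b": 2_000_000.0, "repo_c": 1_000_000.0}

def to_weights(benefits):
    total = sum(benefits.values())
    return {repo: value / total for repo, value in benefits.items()}

gemini_w = to_weights(gemini_benefits)
gpt5_w = to_weights(gpt5_benefits)
averaged = {repo: (gemini_w[repo] + gpt5_w[repo]) / 2 for repo in gemini_w}
```

Averaging two weight vectors that each sum to 1 yields another vector summing to 1, so no re-normalization is needed.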

5. Conclusion

This solution demonstrated how Relentless Monetization can be coupled with an LLM to produce a value score for a repository. It has limitations, but I believe these can be addressed with the right context engineering and fine-tuning, as well as by incorporating other concrete impact metrics into the weight calculations. Moreover, combining different models like Gemini and GPT yielded better scores than using either one individually. Refinement is still in progress, and I will share updates as the competition progresses.