ERC-8006: Universal Policy Engine


DApps keep re-implementing ad-hoc validation and compliance rules (roles, KYC, allowlists, limits, workflows, and other legally required validations). That logic is brittle to upgrade, hard to audit, and not reusable across projects. This ERC standardizes a Universal Policy Engine:

  1. Each rule is packaged in a small contract called an artifact.
  2. Each artifact contract can be reused by other policies and must expose a minimal interface (init, exec) plus self-descriptors.
  3. A policy handler bundles and orchestrates artifacts as a Directed Acyclic Graph (DAG), so apps can compose complex policies from simple building blocks and update them incrementally (swap or append artifacts instead of redeploying app logic).
  4. All parameters flow as bytes for uniform on/off-chain integration; variables can be introspected for tooling.

It’s universal by design – applicable across data domains and attachable to any contract as method-level hooks (pre/post) to gate calls and chain validations. A reference implementation is included in the ERC assets.
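To make the moving parts concrete, here is a minimal off-chain sketch of the artifact/handler split in Python. The class names (`Artifact`, `ThresholdArtifact`, `PolicyHandler`) and the bytes encoding are illustrative assumptions for this sketch, not the ERC's actual ABI:

```python
class Artifact:
    """A minimal rule unit: init once, then exec with opaque byte params."""
    def init(self, params: bytes) -> None:
        raise NotImplementedError
    def exec(self, params: bytes) -> bytes:
        raise NotImplementedError

class ThresholdArtifact(Artifact):
    """Example rule: passes only if the input value meets a threshold."""
    def init(self, params: bytes) -> None:
        self.threshold = int.from_bytes(params, "big")
    def exec(self, params: bytes) -> bytes:
        value = int.from_bytes(params, "big")
        return b"\x01" if value >= self.threshold else b"\x00"

class PolicyHandler:
    """Evaluates artifacts in DAG order: each node consumes the outputs
    of its dependencies, so complex policies compose from simple blocks
    and individual artifacts can be swapped without touching the rest."""
    def __init__(self):
        self.nodes = {}  # node id -> (artifact, dependency ids)

    def add(self, node_id: str, artifact: Artifact, deps=()) -> None:
        self.nodes[node_id] = (artifact, tuple(deps))

    def evaluate(self, node_id: str, inputs: dict) -> bytes:
        artifact, deps = self.nodes[node_id]
        if deps:
            # Concatenate dependency outputs as this node's byte input.
            payload = b"".join(self.evaluate(d, inputs) for d in deps)
        else:
            # Leaf nodes read external inputs supplied by the caller.
            payload = inputs[node_id]
        return artifact.exec(payload)
```

For example, a single "limit" leaf initialized with threshold 10 evaluates to `b"\x01"` for an input of 15 and `b"\x00"` for an input of 5.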

A general, simplified, yet precise view of how it works:

ERC Authors list:
@Vlad, @Vitali

15 Likes

An excellent story for gamification processes. You could build custom, complex systems resembling an MMORPG – say, WoW. Artifacts could be ERC-721/1155 tokens, and their upgrades coordinated by high-level policies (for example, physics from an oracle). I like your approach. Thank you.

Sorry for my English, translated from Russian.

1 Like

The mechanic looks particularly interesting for social DAOs, where actors, delegates, and conditional guilds or factions can be given a life cycle through artifacts and, for example, have “research trees.” But such a system needs a general Web3 primitive for accruing experience, accessible to all interested parties. Sybil attacks just have to be taken into account (for example, by using a quadratic approach when upgrading levels); then all these Layer3, Galaxies, and similar systems can be brought to a common denominator through policies.

1 Like

Hey, thanks for your interest!

  1. Yep, improvements/role escalations (or any other action) can be constrained by policies.
  2. The artifacts themselves, though, are not originally intended to be tokens, because “fungibility” is not the same as “interoperability” and “reusability” in this case. It is rather the reverse: a policy and its artifacts can constrain an ERC-20’s or ERC-721’s transfer method (or any other hook).
  3. So artifacts can be reused, and they are not required to be token-like (though technically it’s possible). The only obligation for an artifact is to implement the IArbitraryDataArtifact interface.
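To illustrate point 2, here is a minimal Python sketch of a transfer constrained by an artifact-style rule. `AllowlistArtifact` and `GatedToken` are hypothetical names modeling the pre-hook idea, not part of the standard:

```python
class AllowlistArtifact:
    """Artifact-style rule: only allowlisted recipients may receive."""
    def init(self, params: bytes) -> None:
        # params: concatenated 20-byte addresses forming the allowlist.
        self.allowed = {params[i:i + 20] for i in range(0, len(params), 20)}

    def exec(self, params: bytes) -> bytes:
        return b"\x01" if params in self.allowed else b"\x00"

class GatedToken:
    """Token whose transfer is gated by a policy pre-hook."""
    def __init__(self, artifact: AllowlistArtifact):
        self.balances = {}
        self.artifact = artifact

    def transfer(self, sender: bytes, to: bytes, amount: int) -> None:
        # Pre-hook: the policy constrains transfer, not the other way round.
        if self.artifact.exec(to) != b"\x01":
            raise PermissionError("recipient not allowed by policy")
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[to] = self.balances.get(to, 0) + amount
```

Swapping the allowlist rule for, say, a KYC or limit artifact changes the policy without touching the token contract, which is the reuse point above.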
2 Likes

Hey! I think there’s a small misunderstanding — ‘artifacts’ here aren’t ‘game artifacts.’
The term ‘artifact’ is used in the sense of ‘a unit intended for a specific function,’ not as ‘a unique collectible item.’ These are just functional components — like addition, subtraction, price queries, etc. So they don’t have much in common with NFTs or other game-fi artifacts.
Regarding DAO policies — while they’re not typically used to store data, I do get your point about what you were aiming to achieve with them!
You can absolutely design a policy system that evaluates a participant’s current level using some standardized inputs. Of course, the evaluation strategy can be arbitrary — say, a quadratic model. In that case, the policy effectively becomes an authorization guard.
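A tiny sketch of that idea in Python; the quadratic model and function names are illustrative choices, not prescribed by the standard:

```python
import math

def quadratic_level(points: int) -> int:
    """Level grows as the integer square root of contributed points,
    which dampens Sybil strategies of splitting points across accounts."""
    return math.isqrt(points)

def may_upgrade(points: int, target_level: int) -> bool:
    """Authorization guard: allow the upgrade only if the quadratic
    evaluation of the participant's points reaches the requested level."""
    return quadratic_level(points) >= target_level
```

Packaged as an artifact, this check becomes one node in a policy DAG gating the upgrade action.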

2 Likes

Great observation — and excellent work on ERC-8006.

The Newton Protocol is tackling a very similar challenge, but from a complementary angle. In Newton, policy evaluation occurs off-chain through a decentralized network of Newton Operators, each capable of verifying rules against verifiable data sources (via MPC, TEE, and ZK-based proofs). The result of each policy evaluation produces an on-chain Proof-of-Compliance, emitted as a verifiable by-product of the network’s consensus process.

This architecture aims to make compliance and rule enforcement composable, verifiable, and chain-agnostic — allowing developers to plug in any policy engine or attach proofs directly to transactions or intents.

I’d love to hear your thoughts on potential alignment between ERC-8006 and Newton’s approach, and explore whether our initiatives could converge or interoperate. It seems like there’s strong synergy between ERC-8006’s on-chain modularity and Newton’s off-chain verifiability layer.

Looking forward to continuing the conversation.

2 Likes

Hi Dennis, and thank you!

I’m gonna deep dive into the litepaper – the on-chain policy engine is a rare thing, and I think we and any other policy builder should learn from each other and discuss this topic endlessly, especially if it’s open source.


Regarding privacy (or on-chain verification): any logic that can be coded as a smart contract can be converted to a policy artifact. I just need to find time to create an example using verifiable credentials – the same standard that Privado.id is built on top of. This should give zero friction and work smoothly, like clockwork.


@denniswon Do you have any specific questions about the standard?

If you have them, could you share your estimates of how using ERC-8006 affects the gasUsed of a typical transaction compared to an equivalent inline implementation of the same checks using regular require statements/modifiers?
For GameFi (or the MMORPG scenario mentioned above) or high-frequency trading on DEXes, even a small increase in gasUsed per operation (on the order of a few thousand gas) can significantly impact unit economics/revenue, especially for dApps on L1.

Am I correct in understanding that, in exchange for reducing complexity and risks when updating compliance logic (by outsourcing it to a universal Policy Handler and artifacts), we effectively shift additional gasUsed overhead onto each user transaction?

Do you have any benchmarks or profiling results for the reference implementation (for example, for policies with 3 to 5 artifacts in the DAG) that you could share? And do you consider this overhead acceptable for high-throughput scenarios?
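For a rough sense of the order of magnitude, here is my own back-of-envelope using the EIP-2929 cold-account access cost for external calls – not measurements from the reference implementation, which will also pay for calldata, memory expansion, and the artifacts' own logic:

```python
# EIP-2929 account-access costs for CALL-family opcodes.
COLD_CALL = 2600  # first call to an address within a transaction
WARM_CALL = 100   # subsequent calls to an already-accessed address

def call_overhead(num_artifacts: int) -> int:
    """Rough extra gas for one handler call that fans out to N cold
    artifact contracts, versus fully inline require-style checks."""
    handler = COLD_CALL               # app -> policy handler
    artifacts = num_artifacts * COLD_CALL  # handler -> each artifact
    return handler + artifacts

# For the 3-5 artifact DAG I asked about, this lands around 10k-16k gas:
# call_overhead(3) -> 10400, call_overhead(5) -> 15600
```

So even before calldata costs, the externalized checks seem to cost noticeably more than a few inline requires – which is why real benchmarks would be valuable.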