ERC-7844: Standardized Data Management, In-Place Storage Upgradeability

The core of the idea can be understood from the name –

Consolidated: A CDS contract will typically manage storage for multiple linked logic contracts. A single CDS layer can manage an arbitrary number of storage spaces, but CDS also supports segmentation through the use of multiple CDS layers.
Dynamic: Storage structures can be adapted post-deployment at the contract level.
Storage: The focus is on making storage evolvable and extendable in itself.

Two critical features:
Dynamic storage spaces: Segregated storage areas (defined by a hash structure) that can be created post-deployment through a function invocation. They are populated by a mapping of extendable structs.

Extendable structs: A custom struct type that can have new members appended to it through function invocation.
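To make these two features concrete, here is a rough off-chain model of the idea (not the reference implementation – the names `init_storage_space`, `push_member`, `insert`, and `get` are hypothetical, and `sha3_256` merely stands in for the EVM's `keccak256`): each stored value lives at a slot derived from a `(space, entry, member)` triple, so appending a member never disturbs existing data.

```python
import hashlib

def slot(space_id: int, entry_index: int, member_index: int) -> bytes:
    """Derive a deterministic storage key, analogous to an on-chain
    keccak256 hash over (space_id, entry_index, member_index)."""
    # sha3_256 stands in for keccak256, which Python's hashlib lacks.
    packed = (space_id.to_bytes(32, "big")
              + entry_index.to_bytes(32, "big")
              + member_index.to_bytes(32, "big"))
    return hashlib.sha3_256(packed).digest()

class CDSModel:
    """Toy model of a CDS layer: storage spaces hold a mapping of
    extendable structs, and members can be appended post-'deployment'."""

    def __init__(self):
        self.storage = {}   # slot -> value
        self.members = {}   # space_id -> list of member bit sizes

    def init_storage_space(self, bit_sizes):
        """Create a new segregated storage space at runtime."""
        space_id = len(self.members)
        self.members[space_id] = list(bit_sizes)
        return space_id

    def push_member(self, space_id, bits):
        """Extend the struct in place: no migration of existing entries."""
        self.members[space_id].append(bits)

    def insert(self, space_id, entry_index, member_index, value):
        assert member_index < len(self.members[space_id]), "unknown member"
        self.storage[slot(space_id, entry_index, member_index)] = value

    def get(self, space_id, entry_index, member_index):
        return self.storage.get(slot(space_id, entry_index, member_index), 0)
```

Because slots are derived per member rather than laid out contiguously at compile time, `push_member` followed by `insert` on the new member works without touching any previously written slot.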

The core objective of CDS is to introduce a standardized, full-service approach to storage management, cross-contract interoperability, and upgradeability. With CDS, storage structures can be extended and created in-place, removing the need for any manual storage management and avoiding redeployments. Access and modification patterns are static, which helps to stabilize and simplify complex cross-contract dependencies in large or rapidly-evolving ecosystems – and the consolidation of storage into one (or more, if desired) CDS layers ensures efficient single-hop data access for any linked external contract.

CDS also provides a strong foundation for governance solutions. One of the core complaints that I hear about upgradeability patterns in general is that they make protocols unsafe, since faulty or malicious logic can be introduced at any time. CDS can be managed by self-executing governance systems, which effectively enables full upgradeability (critically, in terms of both logic and storage) without sacrificing any decentralization.

CDS Intuition: data is stored in a single contract, which uses custom storage structure abstractions that enable you to create and extend mapped storage structures in-place. Stored data is segmented logically through a hashing scheme, and access controls ensure that the data layer is only accessible to explicitly permissioned entities. Since data structures can be created or extended in-place, we can efficiently manage the full data requirements for evolving multi-layer smart contract systems with a single CDS layer. Since each layer shares the same data bank, we have standardized access patterns and seamless interoperability, even between logically segmented layers.

Please share your thoughts below!

@wisecameron your methodology is fascinating and tackles one of the more challenging aspects of blockchain systems: achieving true upgradeability without introducing major disruptions. The Consolidated Dynamic Storage (CDS) model, as you’ve outlined, strikes a bold balance between adaptability and complexity, and it opens the door to a much-needed evolution in smart contract design.

The idea of dynamic storage spaces defined by hash structures is particularly compelling. It introduces a modularity that not only future-proofs the contract but also allows developers to adapt to changing requirements without cumbersome state migrations. This is especially critical for long-lived systems that need to scale and evolve in unpredictable ways.

Your concept of extendable structs is equally intriguing. The ability to append new members dynamically brings a level of flexibility rarely seen in existing models. It effectively transforms storage into a living entity, capable of growing alongside the system’s needs. The tradeoff—sacrificing ease of implementation—feels justified for systems where adaptability outweighs simplicity, especially given the potential to abstract away these complexities through reusable patterns and libraries.

However, the unorthodox nature of CDS does highlight areas where the community might need to collaborate further:

  1. Auditing and security: Dynamically evolving storage structures could introduce vulnerabilities, particularly in managing boundaries and ensuring that state modifications are well-controlled.

  2. Standardization and tooling: For CDS to gain widespread adoption, robust developer tools and clear patterns will be essential to manage complexity and mitigate implementation risks.

Your willingness to embrace tradeoffs for versatility demonstrates a forward-thinking approach, and I can see this methodology being especially impactful in environments like DAO frameworks, modular dApps, and systems with high uncertainty about future requirements. Looking forward to seeing this concept evolve and how the community engages with it! :rocket:


Nice one!

This reminds me of the old OpenZeppelin days and their Eternal Storage proxy pattern.

P.S.

Have you seen this ERC? It addresses the same issue, but with a slightly different approach.


Great callout on EIP-6224 – the Dependency Registry definitely shares conceptual overlap with CDS, but I agree that the two systems fundamentally diverge in their approach – and I believe the divergence under the hood is actually quite significant:

• Dependency Registry essentially consolidates the process of linking upgrades between dependent contracts in a traditional proxy-delegate model. That structural similarity is clear – there’s a central control layer that’s interacting with the rest of the system.

• CDS manages all storage itself, and leverages the extendable structs / storage spaces to scale infinitely in-place. Similar to Dependency Registry, it’s a ‘central’ contract that has calls propagated to it externally. Effectively, there is no longer any need to use proxy-delegate for the linked contracts, which are typically pure + globals (although globals can be stored in CDS as well) – their mapped storage is all delegated to the CDS layer.

One cool thing about this is you effectively get implicit delegate access between linked contracts. This access can be secured through basic permission management systems—either at a storage-space level or even down to specific struct members. This plus the ease of redeploying what are effectively pure contracts makes it really easy to integrate new systems with longstanding ones.
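That permissioning idea can be sketched in a few lines (a toy model under assumed names – `grant_write` and the owner-gating are hypothetical, not the spec's interface): writes are gated per (caller, storage space) pair, so each linked logic contract gets implicit "delegate" access only where it has been granted.

```python
class PermissionedCDS:
    """Toy sketch: a CDS layer that gates writes at storage-space
    granularity, modeling msg.sender checks with an explicit argument."""

    def __init__(self, owner: str):
        self.owner = owner
        self.write_access = set()   # {(caller, space_id)}
        self.data = {}              # (space_id, key) -> value

    def grant_write(self, caller: str, space_id: int, sender: str):
        """Only the owner (or, in practice, governance) may grant access."""
        if sender != self.owner:
            raise PermissionError("only owner may grant access")
        self.write_access.add((caller, space_id))

    def insert(self, space_id: int, key: str, value: int, sender: str):
        # The on-chain equivalent would check msg.sender here.
        if (sender, space_id) not in self.write_access:
            raise PermissionError("unauthorized write")
        self.data[(space_id, key)] = value
```

The same check could be made finer-grained by keying the set on `(caller, space_id, member_index)` for per-member permissions.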

As a footnote, CDS effectively favors making capabilities available and puts the onus on developers to use them securely – but critically, it can be fully decentralized with permission management plus a governance system. The centralization of the layer makes that process much more digestible.

Thanks for checking it out and for your reply :). Big fan of your work by the way – it’s definitely some of the most interesting stuff that pops up in my LinkedIn feed. That ZK resources post has been particularly helpful.

Just took a deeper dive into the ERC. Several questions:

  1. Is it correct that the entryIndex from the spec is safeIndex in the reference implementation?
  2. It would be great to see an implementation of the put() function. Also, I think it would be great to remember the names of variables put in order to retrieve them by name. Can mapping be used?
  3. It is quite hard to understand the overall storage layout after initialization and push steps. A low-level diagram would be highly appreciated.
  4. What is the intended use of the system? Is it a standalone contract? Is it a library that manages the storage of a specific smart contract?

Thanks!


Thanks for taking the time to dig deeper!

  1. entryIndex actually refers to the sequential entry index (i.e., userData[0] or userData[1]). safeIndex stores the index of the last fixed-size member inserted into the system, which is needed for bitCount calculations because strings (functionally equivalent to arrays) have dynamic sizing.

  2. Yeah, I think this is a great point regarding naming variables. My big idea for making the system more palatable syntactically has been to build a dedicated VS Code extension that provides alias visualization, but that could be improved. It’s true that the functionality you describe could be implemented with mappings relatively easily, at a moderate gas cost increase. I mainly opted not to include this in the base system because the core implementation is already “hard” compared to competing solutions like proxy-delegate.

  3. On it, great feedback and agreed.

  4. Yeah, the CDS layer is a standalone contract. Essentially, you link logic contracts to your CDS layer and let it handle all of your storage – particularly mapped storage, since it is more cumbersome, although globals are fine too.
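The bitCount point in (1) can be sketched off-chain. Below is a simplified packing model (my own illustration, not the reference implementation): fixed-size members are packed into 256-bit words, and dynamic members such as strings are excluded from the bit count – which is why a safeIndex-style marker of the last fixed-size member is needed.

```python
def member_layout(bit_sizes):
    """Assign each fixed-size member a (word, bit_offset) position,
    packing members into 256-bit EVM words in insertion order."""
    layout, word, offset = [], 0, 0
    for bits in bit_sizes:
        if offset + bits > 256:      # member doesn't fit: start a new word
            word, offset = word + 1, 0
        layout.append((word, offset))
        offset += bits
    return layout
```

For example, `member_layout([8, 256, 128, 128, 64])` places the two 128-bit members together in one word, while each 256-bit member gets its own word. A dynamically-sized string cannot participate in this arithmetic, so it is tracked separately (via stringIndex in the spec's terms).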

The centrality of the CDS layer seems like a big drawback at face value, since it’s a single point of failure AND it’s rather involved in terms of low-level logic and bitwise operations. However, I think it’s actually the ideal structure fundamentally because you can fully prevent non-authorized invocations with basic permission management, and it’s structurally more palatable and efficient than using a complex array of linked storage dependencies like you see in proxy-delegate.

Given that a CDS layer has fully-malleable storage (i.e., you can introduce new storage spaces and custom extendable structs through simple function invocations), a single layer can easily support an arbitrarily-sized linked system. Additionally, the “shared” storage space makes deep integrations with legacy logic trivial, which is a big plus.

The main premise behind CDS is to front-load complexity (which is nullified in practice because I will be open-sourcing a highly-optimized CDS implementation called HoneyBadger in a month or two) in exchange for making storage fully fluid and controllable. The key reasons why it’s a step up from competing solutions like proxy-delegate are because it’s:

A) It’s more practical to extend storage structures with a function invocation rather than a redeployment. I don’t claim it’s impossible to add new struct members to a deployed system when proxy-delegate is leveraged, but it’s certainly more cumbersome and error-prone.
B) Structurally, it manages storage in a single layer, which is significantly less complex than leveraging a collection of proxy contracts or diamonds.
C) Equivalent storage-space access between linked contracts makes close interoperability relatively trivial. It’s somewhat similar to ERC-7208, but I think in a more complete package (at the cost of being harder to implement up-front).

You’re the man for taking the time to read my proposal and reply – those are some great points, and it’s obvious that I need to include better clarifications. I sincerely appreciate it :handshake:.


Updated proposal is now available!

  • Updated Contract
  • Significantly improved hashing structure
  • Storage-level diagrams
  • Uses RFC 2119 terminology
  • memberIndex, entryIndex, safeIndex, stringIndex clarifications
  • Example use cases
  • Clear explanation of what the system practically looks like
  • Permission management added to spec (required)

To-do:

  • string aliasing
  • Put implementation

Seeking critical comments – please try to tear it apart. I think this proposal has a lot of potential and am actively working to optimize it.


How does ERC-7844 differ from ERC-7201 (Namespaced Storage), which already uses namespaced slots? Why introduce a new standard instead of extending ERC-7201?


Great question! There are a few reasons why I believe it makes more sense for 7844 to exist as a standalone proposal.

For one, in-place storage upgrades (via extendable structs / storage spaces) require fully custom storage management, which diverges the 7844 spec significantly from other proposals. So while 7844 and 7201 align in providing namespaced storage, they are very different under the hood. Specifically, 7201 maps namespaces to a storage offset, whereas 7844 uses a deterministic hashing scheme to allocate storage-space metadata and live data within separate logical spaces.

7844’s implementation offers in-place storage upgradeability, letting developers allocate new storage spaces and extend their data structures deterministically, without redeployments.
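The contrast can be shown side by side. ERC-7201 derives a single root slot per namespace via `keccak256(abi.encode(uint256(keccak256(id)) - 1)) & ~bytes32(uint256(0xff))`, with the struct layout after that root fixed at compile time; a 7844-style scheme (simplified here – `cds_slot` is my own illustration, not the spec) hashes every `(space, entry, member)` triple independently. Note `sha3_256` stands in for `keccak256`, so the byte values below differ from on-chain ones.

```python
import hashlib

def h(data: bytes) -> bytes:
    # sha3_256 stands in for keccak256, which Python's stdlib lacks.
    return hashlib.sha3_256(data).digest()

def erc7201_root(namespace: str) -> bytes:
    """ERC-7201: one fixed root slot per namespace; member offsets
    relative to this root are baked in at compile time."""
    inner = int.from_bytes(h(namespace.encode()), "big") - 1
    root = bytearray(h(inner.to_bytes(32, "big")))
    root[31] = 0                      # & ~0xff: clear the low byte
    return bytes(root)

def cds_slot(space_id: int, entry: int, member: int) -> bytes:
    """7844-style (simplified): each (space, entry, member) triple is
    hashed independently, so appending members needs no new layout."""
    return h(space_id.to_bytes(32, "big") + entry.to_bytes(32, "big")
             + member.to_bytes(32, "big"))
```

The practical consequence: under 7201, adding a struct member still means compiling and deploying a new implementation; under the per-triple scheme, a new `member` index is addressable the moment it is registered.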

The best way to look at 7844 is as a full-service storage model that combines the benefits (i.e., access controls, namespacing, standardized access patterns, upgradeability) of many individual proposals into a single context, while introducing in-place storage upgradeability as a key standalone feature that bolsters its value proposition as a full-service storage management solution.

The central concept is to provide a universal storage model that combines the best ideas about storage management under one easily-accessible banner, while abstracting away manual storage management, effectively reducing the risk surface for upgrades and simplifying both development and audits.

@wisecameron

Is this ID used for both permission management and full permissions? I think you should present the table as an access matrix and then map it to an ID, where View corresponds to read-only and Edit to write access.


Great catch – full permissions should have ID 6. In terms of the access matrix, yeah, that might make more sense. Now that I look at it, it would probably be better to have each of them represent a flag, and also include a contract flag. That way, it wouldn’t imply redundant checks for the same operations.
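To illustrate the flag idea (hypothetical flag values and names – this is a sketch of the direction, not the spec): each bit encodes one capability, so a single integer per cell of the access matrix covers every combination without redundant role IDs.

```python
# Hypothetical capability flags: one bit each.
READ, WRITE, MODIFY_LAYOUT, CONTRACT = 1, 2, 4, 8

def has_flag(perms: int, flag: int) -> bool:
    """Check whether a permission mask includes a given capability."""
    return perms & flag == flag

# "Full permissions" becomes a mask rather than a separate ID.
ADMIN = READ | WRITE | MODIFY_LAYOUT
```

A single `perms & flag` check then replaces the chain of ID comparisons that an enumerated role table would require.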

The ERC-7844 contract provides some access control to manage reading and viewing; however, the storage is still viewable via eth_getStorageAt, so there is not much difference from a private variable.


That’s a great point – it doesn’t make sense to include access controls for viewing data.