We’re addressing the issues pointed out in the conversation. We’ve pushed a couple of updates here, and we’re going to push a few more substantial changes to improve decentralization and abstraction.
Recap from conversations held outside of the Forum: 1) it is true that Properties as such place too much emphasis on the data, and 2) storing every data point in the same container is a potential security concern, so we will push an update that separates the interface a bit further. It has been suggested that instead of “Properties” we use “DataPoint” and “DataObject”, to differentiate between the actual tag for the data and the internal functions used for modifying the storage.
If we separate the data into multiple contracts, we mitigate the security concern of centralizing a ton of user data within a single ODC smart contract holding all Properties. It also makes the ODC far more scalable. Conceptually, this is not a big change. We are proposing the following architecture:
Other than changing a few names, the concepts are quite similar:
DataManagers act the same as PropertyManagers: independent smart contracts that implement their required interfaces and delegate some storage. The difference is that the data is not stored within the ODC, but on a DataObject, with access control managed by the ODC.
ODC no longer needs to implement a ton of logic for both managing storage and access control, which means there is no single contract centralizing all the data for all Data Containers.
Data management is no longer part of the ODC; it is now part of each DataObject.
This opens up the possibility of data “mobility” or “portability” across ODC implementations. Meaning: if one ERC-7208 smart contract is gas-optimized for a specific task (e.g. multi-chain) and another implementation is optimized for other uses (e.g. Account Abstraction), then a DataManager and DataObject can be moved, ported, or transferred from one ODC implementation to another, so that the data is now access-managed by the other ODC implementation.
Based on the above diagram, there is no need for an IODCRegistry anymore.
We want to propose a change to the interfaces:
Instead of PropertyManagers, we propose a simple DataManager interface:
interface IDataManager {
    /**
     * @notice Reads stored data
     * @param dobj Identifier of the DataObject
     * @param dp Identifier of the DataPoint
     * @param operation Read operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data
     */
    function read(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external view returns (bytes memory);

    /**
     * @notice Stores data
     * @param dobj Identifier of the DataObject
     * @param dp Identifier of the DataPoint
     * @param operation Write operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data (can be empty)
     * @dev Function should be restricted to allowed DMs only
     */
    function write(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory);
}
This way, DataManagers can read() from or write() to a DataObject address (which also acts as delegated storage). Just as with PropertyManagers, DataManagers can implement other interfaces. For instance, a DataManager can also be an ERC-20 if it is:
contract MyContract is IDataManager, IERC20, ...
In this scenario, MyContract would rely on read() and write() to access the DataObject, and implement the logic of the IERC20 functions (balanceOf(), transfer(), etc.).
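As a rough sketch of this wiring (the operation selectors and state variables below are our own illustration, not part of the proposal; we assume type DataPoint is bytes32; and the interfaces discussed in this thread):

// Minimal sketch: an ERC-20 DataManager delegating its balance storage
// to a DataObject. OP_BALANCE and OP_TRANSFER are illustrative operation
// selectors, not defined by the ERC.
abstract contract MyContract is IDataManager, IERC20 {
    IDataObject internal dataObject; // delegated storage
    DataPoint internal dp;           // DataPoint holding the balances

    bytes4 internal constant OP_BALANCE  = bytes4(keccak256("balanceOf(address)"));
    bytes4 internal constant OP_TRANSFER = bytes4(keccak256("transfer(address,address,uint256)"));

    function balanceOf(address account) external view returns (uint256) {
        // Route the query through the DataObject's read()
        bytes memory result = dataObject.read(dp, OP_BALANCE, abi.encode(account));
        return abi.decode(result, (uint256));
    }

    function transfer(address to, uint256 amount) external returns (bool) {
        // The DataObject verifies (via the Access Manager) that this
        // DataManager is approved to write this DataPoint
        dataObject.write(dp, OP_TRANSFER, abi.encode(msg.sender, to, amount));
        emit Transfer(msg.sender, to, amount);
        return true;
    }

    // Remaining IERC20 and IDataManager members omitted (hence `abstract`)
}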
Multiple DataManagers can make use of the same DataObject for storage. For this to work, it is important to have proper access management.
So, how do we simplify the central Registry previously proposed (interface IODCRegistry), while still handling proper access to data?
Here is a new proposal:
interface IAccessManager {
    /**
     * @notice Verifies if a DataManager is allowed to write a specific DataPoint on a specific DataObject
     * @param dp Identifier of the DataPoint
     * @param dm Address of the DataManager
     * @return Whether write access is allowed
     */
    function isApprovedDataManager(DataPoint dp, address dm) external view returns (bool);

    /**
     * @notice Defines if a DataManager is allowed to write a specific DataPoint
     * @param dp Identifier of the DataPoint
     * @param dm Address of the DataManager
     * @param approved Whether the DataManager should be approved for the DataPoint
     * @dev Function should be restricted to the DataPoint maintainer only
     */
    function allowDataManager(DataPoint dp, address dm, bool approved) external;

    /**
     * @notice Verifies if a DataObject is allowed to add Hooks to the DataPoint
     * @param dp Identifier of the DataPoint
     * @param dobj Address of the DataObject
     * @return Whether the DataObject is allowed to add Hooks
     */
    function isApprovedDataObject(DataPoint dp, address dobj) external view returns (bool);

    /**
     * @notice Defines if a DataObject is allowed to add Hooks to the DataPoint
     * @param dp Identifier of the DataPoint
     * @param dobj Address of the DataObject
     * @param approved Whether the DataObject should be approved for the DataPoint
     * @dev Function should be restricted to the DataPoint maintainer only
     */
    function allowDataObject(DataPoint dp, address dobj, bool approved) external;
}
The first two functions get/set the Access Control allowing a DataManager to modify a DataPoint.
The last two get/set the Access Control allowing a DataObject to modify a DataPoint.
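To illustrate the first pair (a sketch under our own wiring assumptions; nothing here is mandated by the ERC):

// Sketch: granting and checking write access through the Access Manager.
// Assumes the caller of allowDataManager() is the DataPoint maintainer.
library AccessHelper {
    function grantWriteAccess(IAccessManager odc, DataPoint dp, address dm) internal {
        odc.allowDataManager(dp, dm, true);
    }

    // A DataObject could consult this before accepting a write()
    function canWrite(IAccessManager odc, DataPoint dp, address dm) internal view returns (bool) {
        return odc.isApprovedDataManager(dp, dm);
    }
}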
We separate the DataObject interfaces into those handling the raw data and those handling the implementation logic.
interface IDOData {
    /**
     * @notice Reads stored data
     * @param dp Identifier of the DataPoint
     * @param operation Read operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data
     */
    function read(DataPoint dp, bytes4 operation, bytes calldata data) external view returns (bytes memory);

    /**
     * @notice Stores data
     * @param dp Identifier of the DataPoint
     * @param operation Write operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data (can be empty)
     */
    function write(DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory);
}
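As a sketch of what an implementation could do with the bytes4 operation parameter (the selector and storage layout below are our assumptions, and we again assume type DataPoint is bytes32;):

// Sketch: a DataObject fragment dispatching read() on the operation
// selector. OP_BALANCE and the balance storage layout are illustrative,
// not defined by the ERC.
abstract contract BalanceDataObject is IDOData {
    // balances keyed by the raw DataPoint and an account
    mapping(bytes32 => mapping(address => uint256)) internal balances;

    bytes4 internal constant OP_BALANCE = bytes4(keccak256("balanceOf(address)"));

    function read(DataPoint dp, bytes4 operation, bytes calldata data)
        external
        view
        returns (bytes memory)
    {
        if (operation == OP_BALANCE) {
            address account = abi.decode(data, (address));
            return abi.encode(balances[DataPoint.unwrap(dp)][account]);
        }
        revert("Unknown read operation");
    }
}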
As for the implementation logic, we only need one function, since it will be handled on a case-by-case basis:
interface IDOHooks {
    /**
     * @notice Executes a Hook on an ODC ID with the specified data
     * This call affects only the specified DataPoint and ODC ID.
     * @param dp Identifier of the DataPoint
     * @param oid ODC ID the hook operates on
     * @param hookId Identifier of the hook operation to execute
     * @param encodedArgs Hook-specific encoded arguments
     * @dev The DataObject SHOULD NOT change data of other DataPoints. Data for other oids within the same DP can be modified if necessary
     * @dev This call is expected to revert if the hook operation is not allowed for the user
     */
    function hook(DataPoint dp, uint256 oid, bytes4 hookId, bytes calldata encodedArgs) external;
}
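For instance (BEFORE_TRANSFER below is a hypothetical hook identifier we made up for illustration; the ERC does not define any hooks), a caller could trigger a hook like this:

// Illustrative only: notifying a DataObject before a transfer of oid.
bytes4 constant BEFORE_TRANSFER = bytes4(keccak256("beforeTransfer(address,address,uint256)"));

library HookCaller {
    function notifyTransfer(IDOHooks dobj, DataPoint dp, uint256 oid, address from, address to, uint256 amount) internal {
        dobj.hook(dp, oid, BEFORE_TRANSFER, abi.encode(from, to, amount));
    }
}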
Finally, suppose we want “mobility”/“portability” (i.e. the ability to move DataPoints from one implementation of ERC-7208 to another). In that case, the DataObject interface should allow this with a single function call:
interface IDataObject is IDOData, IDOHooks {
    function setOdcImplementation(DataPoint dp, address newImpl) external;
}
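A hedged usage sketch (how the DataObject authorizes the caller of setOdcImplementation() is implementation-specific):

// Sketch: porting a DataPoint's access management to another ODC implementation
library PortabilityHelper {
    function portDataPoint(IDataObject dobj, DataPoint dp, address newOdc) internal {
        dobj.setOdcImplementation(dp, newOdc);
        // From here on, read()/write() access for `dp` is validated
        // against the Access Manager of `newOdc`
    }
}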
How does the new architecture work with the Wrapper?
A valid question from the Telegram channel. With this architecture, the Wrapper works as a DataManager (there can be many instances), and the assets can be stored in AssetVault contracts that extend IDataObject. That is to say, instead of a single smart contract managing all wrapped assets, the implementation may separate them into multiple ownable smart contracts. At the same time, if the implementation supports “mobility”/“portability”, it is easy to move whole DataObjects’ worth of value from one implementation to another, for instance where ImplementationA supports omni-chain features and ImplementationB is optimized for lower gas consumption.
We want to get your feedback and help the ERC-7208 become the Standard for adapting/connecting all standards, enabling interoperability across the ecosystem.
It is the ability to choose implementations. In other words: a DataManager may decide to entrust the management of DataObjects to a compatible ODC Smart Contract.
For instance, a DataManager Admin (i.e. the developer, DAO, or owner of the DataManager Smart Contract) may see a benefit in switching to a different Access Manager implementation: one ODC implementation may support hooks, another may offer omni-chain support, another may have both (making it more gas-costly), and yet another may be gas-optimized for a custom use case.
Portability works between implementations that use compatible DataPoints.
We introduce a DP-Registry interface for implementing contracts that manage access control to DataPoints. The DataManager Maintainer can allocate DataPoints through a Registry instance and delegate management if required. The ODC Implementation verifies access management.
DataManagers should be able to move between implementations, provided they are using compatible DataPoints. DP-Registries are, in a way, a mechanism for bundling DataPoints that follow the same internal structure and are therefore compatible with each other.
For simple implementations, a DP-Registry interface can be implemented within the ODC as a monolithic contract, although we don’t recommend this.
For most use cases, we expect developers will use a handful of DAO-managed or semi-public registries.
Building their own self-managed DP-Registry may make sense for big or complex protocols requiring many DataPoints, especially if they want to add custom features.
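The thread above doesn’t pin down the DP-Registry interface; purely as a hypothetical sketch of the responsibilities described (allocation and delegated management), it could look like:

// Hypothetical sketch only: a minimal DP-Registry along the lines described
// above. None of these signatures are part of the proposed ERC interfaces.
interface IDataPointRegistry {
    /// Allocates a new DataPoint and assigns msg.sender as its maintainer
    function allocate() external returns (DataPoint);

    /// Returns the maintainer entitled to manage access for a DataPoint
    function maintainerOf(DataPoint dp) external view returns (address);

    /// Delegates management of a DataPoint to a new maintainer
    function transferMaintainer(DataPoint dp, address newMaintainer) external;
}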
In our reference implementation, the type is declared as type DataPoint is bytes32;. DataPoints SHOULD know which Registry allocated them, and they MAY store information relevant to compatibility with other DataPoints. There shouldn’t be any further limitation (from the ERC) on other implementation decisions.
Here is a suggestion for a 32-byte structure:
/**
 * Reference Implementation of the internal DataPoint structure:
 * 0xPPPPVVRRIIIIIIIIHHHHHHHHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
 * - Prefix (bytes4)
 * -- PPPP - Type prefix (0x4450) - ASCII representation of the letters "DP"
 * -- VV - Version of the DataPoint specification, currently 0x00
 * -- RR - Reserved byte (should be 0x00 in the current specification)
 * - Registry-local identifier (bytes4)
 * -- IIIIIIII - 32-bit implementation-specific id of the DataPoint
 * - Chain ID (bytes4)
 * -- HHHHHHHH - 32-bit chain identifier
 * - REGISTRY Address (bytes20)
 * -- AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA - Address of the Registry which allocated the DataPoint
 *
 * !!! ERC-7208 COMPATIBILITY REQUIREMENTS !!!
 * - The REGISTRY address SHOULD be located in the last 20 bytes of the DataPoint in ALL DataPoint implementations
 * - The PREFIX 0x44500000 MAY be used. It identifies implementations with the same DataPoint structure
 **/
This suggested implementation of the DataPoint’s internal structure holds 32 bits for the chain identifier, which may be used by an omni-chain ODC implementation to sync data across blockchains.
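A sketch of how this layout could be packed and unpacked (the library name and helper functions are ours; only the field layout comes from the structure above):

// Sketch, assuming `type DataPoint is bytes32;`. Field offsets follow the
// suggested layout: 4-byte prefix | 4-byte registry-local id | 4-byte
// chain id | 20-byte registry address.
library DataPointLib {
    bytes4 internal constant PREFIX = 0x44500000; // "DP" + version 0x00 + reserved 0x00

    function encode(uint32 id, uint32 chainId, address registry)
        internal
        pure
        returns (DataPoint)
    {
        return DataPoint.wrap(bytes32(
            (uint256(uint32(PREFIX)) << 224) | // bits 224..255: prefix
            (uint256(id)             << 192) | // bits 192..223: registry-local id
            (uint256(chainId)        << 160) | // bits 160..191: chain id
            uint256(uint160(registry))         // bits   0..159: registry address
        ));
    }

    // The REGISTRY address sits in the last 20 bytes in ALL implementations
    function registryOf(DataPoint dp) internal pure returns (address) {
        return address(uint160(uint256(DataPoint.unwrap(dp))));
    }
}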
Congrats on the improvements and simplification of your architecture. I really appreciate the new design; it’s much easier to manage, adapt, and export.
I have a couple of questions regarding the implementation of ERC-7208:
If a protocol or developer wants to implement and use ERC-7208, do they essentially only need to focus on the ODC and Data Manager SCs?
Are the DOs (Data Oracles) common for all ODCs on a particular blockchain?
Looking forward to working with this, it looks really promising.
@Lao0ni Thanks for your questions! Super valid points, and it is very important to clarify a few things here:
Short answer: it depends on the use case. If you want to use this standard, you can choose between:
No dev requirement: as a user of any Data Manager exposing any interface, you won’t know the difference.
Minimal dev requirement: integrate by just developing a Data Manager implementing your business logic, re-using someone else’s ODC implementation, someone else’s Registry for allocating Data Points, and finally someone else’s Data Objects for managing your Data Points.
Some dev requirement: implement your own Registry, either because your use case requires a separate space for storing Data Points or because you want a different Data Point internal structure.
A bit more effort: implement your own ODC smart contract or custom Data Object internal logic; for example, you may want a cross-chain storage-sharing mechanism for storing assets, or an on-chain gating mechanism embedded into the low-level storage management.
Incidentally, this new architecture means that the ODC doesn’t have to be an ERC-721, which in turn means ERC-7208 no longer relies on it. If you want an ODC implementation where each data container is an NFT, you can have one, but the standard should not force you to do that.
We expect that in most instances, the motivation to use ERC-7208 will come from a use case that either:
Requires interoperability of assets: e.g. if you want to develop a mechanism for trading ERC-1155 tokens through ERC-20 interfaces, you can use a DataObject for storing the mapping of balances, and then expose as many Data Manager interfaces as you want.
Requires adapting one standard to another: e.g. if you are collecting ERC-721 NFTs and you want to make them rentable (ERC-4907), you would store the NFT in a DataObject and expose it through a DataManager implementing the ERC-4907 interface (see the sketch after this list).
Requires modifying the logic of a particular asset: if you hold a certain RWA and the regulatory framework in your jurisdiction has changed, you may need to update the smart contract managing that asset. But if you are not the owner of that contract, you won’t be able to update it. The solution is to abstract the asset from the logic that governs it, and this is where ERC-7208 shines.
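As a sketch of the second use case (the contract name and operation selectors are our own illustration; IERC4907’s functions are as defined in that standard):

// Sketch: a DataManager exposing ERC-4907 rental logic over an NFT whose
// rental state lives in a DataObject. OP_GET_USER / OP_SET_USER are
// illustrative operation selectors, not part of either ERC.
abstract contract RentableAdapter is IDataManager, IERC4907 {
    IDataObject internal dataObject;
    DataPoint internal dp;

    bytes4 internal constant OP_GET_USER = bytes4(keccak256("userOf(uint256)"));
    bytes4 internal constant OP_SET_USER = bytes4(keccak256("setUser(uint256,address,uint64)"));

    function userOf(uint256 tokenId) external view returns (address) {
        return abi.decode(dataObject.read(dp, OP_GET_USER, abi.encode(tokenId)), (address));
    }

    function setUser(uint256 tokenId, address user, uint64 expires) external {
        // Access control (e.g. only the token owner) omitted for brevity
        dataObject.write(dp, OP_SET_USER, abi.encode(tokenId, user, expires));
        emit UpdateUser(tokenId, user, expires);
    }
}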
We also expect that, as the Ethereum ecosystem grows, so does the need for On-chain Adapters to address incompatibilities. The other day a close friend of mine was traveling abroad, and he forgot his adapter to charge his phone. Good thing universal adapters are cheap, right? Well, ERC-7208s are the same thing: they enable interoperability between on-chain contracts following different standards.
DOs are Data Objects, not Oracles. We took the Property concept (now Data Point) and moved it to a separate contract (the Data Object) to make it more scalable, decentralizing the logic needed for handling this storage. Since the low-level data-management logic no longer sits on the ODC Implementation but is part of the Data Object, we no longer need Restrictions: they can be implemented by the Data Object. Oracles as you know them can be implemented as Data Managers.
About the particular blockchain: it depends on the implementation. Neither Data Points nor anything that uses them is required to have a ChainID or Domain Separator. However, it is recommended, as we are providing omnichain support in our reference implementation.
GM, gm… I wanted to share an overview diagram for a use case proposed by a community member on the Telegram channel.
This architecture enables transparent trading of fractionalized assets across blockchains by integrating with LayerZero. On chain A, assets are wrapped through a Wrapper-Fractionalizer DM contract. The assets are owned (“stored”) by an AssetVault, managed by a Vault Data Object. On chain B, the Data Manager issues the fractions that can be traded. An ODC Omnichain Proxy implementation of the ODC smart contract handles synchronization and message passing across chains.
A slight correction to the previous diagram, as pointed out by a community member: the green and purple dotted vertical lines on the left of the diagram should go through the ODC Omnichain Proxy and then through the ODC implementation on Chain B, representing the ODC’s validation of access control rights before accessing the OmnichainFungibleFractionsDO. In other words, cross-chain DataObjects like this can be developed, but they must abide by the access management imposed by the ODC Implementation (the read()/write() functions).
Another good point brought up by community members:
What if the DataManager needs to send a native token and store some values on the DataPoint?
The question refers to the write() functions on the ODC implementation and the DataObject. write() may need to be payable for some use cases, in which case we should update the ERC to reflect this behavior:
write() MAY be payable in the case of DataObjects and the ODC Implementation.
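If adopted, the change would simply add the payable modifier to the write() declarations shown earlier; for example, on the DataObject:

// Possible revision (sketch): write() declared payable so a DataManager can
// forward native tokens alongside the data being stored
function write(DataPoint dp, bytes4 operation, bytes calldata data) external payable returns (bytes memory);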
After a discussion on the community chat, we will remove the allowDataObject and isApprovedDataObject functions from the ODC interface. The rationale is that those two functions were used for access management control from the ODC to the DataPoint. However, DataPoint management should happen on the DataObject through the DataManager Admin. The DataManager is the one requesting DataPoints from the Registry; therefore, the responsibility of managing DPs is on the DataManager. We don’t want the ERC to force all ODC implementations to deal with DataPoint access management. Anyone can propose an ODC implementation with extra controls, but this shouldn’t be required by the ERC.
A proposal has been made for changing the name of a major component:
Rename the ODC to Index or DataIndex
The rationale is that, after the decentralization of Properties into DataPoints, the ODC structure is no longer a “data storage unit”, but a data index of the Data Objects that manage the actual storage of Data Points. Therefore, in the current state, the smart contracts acting as “data containers” are not the ODC Implementation but the DataObjects. The current ODC Implementation acts as an ACL intermediary layer between the Data Points and the Data Manager; it is only a “Data Container” in the sense that it is a point of contact for Data Managers. Since Data Managers are “aware” of Data Points, the storage is not abstracted from them, further supporting the argument in favor of renaming IODC to IODI or something similar.
The interfaces would be:
DP → indexed low-level data storage (bytes32)
DP Registry → defines a space of data point compatibility/access management
DI / ODI → indexes information and approvals (former ODC Implementation)
DM → interfaces with user, implements business logic
DO → logic implementing data management
As you are aware, we are close to pushing the ERC to the “In Review” state. A change like this has the potential to impact the whole ERC, so we want to open the question to the community before following through with the changes.
In preparation for moving the ERC to “In Review”, we just pushed an example implementation here.
The implementation uses the abstraction of storage to drive interoperability between ERC-1155 and multiple ERC-20 contracts.
This is an educational example and has not been audited; therefore, we do not recommend using these contracts in any production environment.
Apologies for jumping in here without reading all the previous comments, but I wanted to check if there are any opportunities to borrow terminology from existing object-oriented design patterns. A lot of the concepts introduced in this proposal feel vaguely similar to ones used in the Java world in particular.