EIP Draft: Multi-chain Governance

Usul was built to be another tool in a set of composable tools. With the SekerDAO app we targeted bridging only two chains, for scalability purposes (vote on the cheaper chain and bridge the execution to the more expensive one where you need state updates). We also only targeted Gnosis Chain, because there wasn’t a canonical solution that would make bridging to every chain easy, which is where Nomad comes in.

…But by composing Zodiac modules together, I believe we can achieve the multi-chain governance you are suggesting.

I think it requires adding just one more module to our ecosystem (or a voting strategy contract): something like a simple “aggregator” module that waits for each chain to signal consensus that a FOR vote has passed. After collecting all of the chains’ approvals, it would unlock a multisend transaction to various bridge contracts, engaging the receiving endpoints with calldata to enact state changes on each chain.

  1. Deploy Gnosis Safes on every chain that will participate in governance.
  2. Deploy Usul modules on every chain that needs to vote. Give this module full control of the safe.
  3. Deploy Bridge Module endpoints on every chain that needs to receive the outcome of a cross-chain vote. Give this module full access to the safe. The bridge sender must be the aggregator module, to prevent rogue proposals from injecting calls into the safes.
  4. Proposals are submitted on each chain to send a bridge tx to one Gnosis Safe that simply says: “yes, consensus was reached by the voters on this chain to signal a FOR vote for proposal hash 0x…”.
  5. These transactions hit the aggregator contract, and once all of the configured chains have signaled approval to the aggregator contract, it will then unlock the multisend transaction to bridge back to every chain that needs state updates.
  6. The bridged data could just be a message stating that an aggregate proposal has passed in Usul, where each call can then be made, or the actual data could be sent directly to the Safe.
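The aggregator flow in steps 1–6 can be sketched off-chain. Here is a minimal Python model; the `Aggregator` class and its `signal_approval` / `is_unlocked` names are illustrative assumptions, not from any Zodiac contract:

```python
# Illustrative sketch of the "aggregator" flow: each chain signals a FOR
# vote for a proposal hash; once every configured chain has signaled,
# the multisend back to the execution chains is unlocked.

class Aggregator:
    def __init__(self, required_chain_ids):
        # Chains whose approval is required before unlocking execution.
        self.required = set(required_chain_ids)
        self.approvals = {}  # proposal_hash -> set of chain ids that approved

    def signal_approval(self, chain_id, proposal_hash):
        if chain_id not in self.required:
            raise ValueError("unknown chain")  # reject rogue bridge senders
        self.approvals.setdefault(proposal_hash, set()).add(chain_id)

    def is_unlocked(self, proposal_hash):
        # True once every configured chain has signaled a FOR vote.
        return self.approvals.get(proposal_hash, set()) == self.required

agg = Aggregator([1, 100, 137])
agg.signal_approval(1, "0xabc")
agg.signal_approval(100, "0xabc")
assert not agg.is_unlocked("0xabc")
agg.signal_approval(137, "0xabc")
assert agg.is_unlocked("0xabc")
```

The on-chain version would additionally verify that each signal arrives via the configured bridge endpoint for that chain, which is the "bridge sender must be the aggregator module" constraint in step 3.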

I would need to spend a bit more time thinking about whether I missed anything (particularly the trust assumptions introduced with multi-chain governance), or whether this can be done without introducing an aggregator module (I’m pretty sure this could also be done with a Usul voting strategy that acts as the aggregator, which would just be one simple contract that needs to be audited).

Some suggestions for your method…

This seems to require every voter to supply the proposal data for every vote. I wonder if there is a cost tradeoff between having every voter do the hashing calculation and having one proposer store the data on-chain and create an identifier that everyone votes with.

You should be able to set the quorum correctly on each chain and simply send a “signal” to the Root by calling chainPassedProposal(), saving some data. This might actually be necessary if you use OZ or Comp code directly, since they read totalSupply() from the token contract, which might reflect only the bridged token and not the actual total supply across all chains.

Thanks again for taking the time to think about how this can be done in multiple ways, ideally we could find a standard that doesn’t require using one implementation, which is one of the main motivations for Zodiac. You don’t need to use Usul as the governance module and could use any set of contracts that interface with an “Avatar”.

2 Likes

I completely agree! The need for a standard is what is driving this proposal. A common spec will encourage an ecosystem of governance interfaces, analytics, aggregators, and whatever else people can dream up. Tally and other DAO aggregators will be able to scale with us into our multi-chain future. Let’s keep this conversation going.

Based on what you said, it sounds like the Gnosis Safe has a lot of modules that will make implementing the spec pretty straightforward. Having many reusable parts allows developers to move much more quickly. However: what is the “whole” that these parts form? If we view it like a vehicle, then we see that Gnosis has a great engine and nice wheels but what is the driving experience? How are the engine and wheels controlled? This is what we need to determine. This is the “aggregator” module you mention.

Finding the Abstraction

The specification needs to capture the right level of abstraction; it needs to be narrow in scope such that it is useful but also composable.

To me there are two important features to multi-chain governance:

1. Proposals include state changes across multiple chains
2. Token holders across multiple L2s and chains can vote on the proposals

Given that we don’t fully agree on how to tackle voting, let’s put voting to the side for the moment and focus on the first point: how can we standardize multi-chain proposals? This is where we have the most commonality right now. No matter how you vote, code needs to be agreed upon and executed on multiple chains.

Multi-Chain Proposals

At a bare minimum we should have a common interface to introspect multi-chain state change proposals. A user of the spec must be able to:

  1. Easily find proposals
  2. View the state changes
  3. Know whether a proposal has been executed

All signalling should be done through events and on-chain data (this is an EIP after all).

Find Proposals

Proposal data should not be stored on-chain. There could be many state changes in a proposal, which would make storage prohibitively expensive. Instead, we can identify proposals using a content hash: the contents of the proposal are hashed to form a unique and verifiable identifier. Let’s call this identifier the proposal hash.

Depending on the implementation, proposal data includes state change data as well as consensus data. However, in a multi-chain system, execution and consensus may occur on different chains, so we should separate that data. Let’s introduce another content hash for the state change data, called the state change hash.

We now have:

stateChangeHash = hash( stateChangeData )
proposalHash = hash( stateChangeHash, consensusData )
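As a sketch of this two-level hashing, here is a Python model. Note the assumptions: `hashlib.sha3_256` stands in for the EVM’s keccak-256 (they are different functions), and the byte strings are placeholders for real ABI-encoded data:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    # Stand-in hash; on-chain this would be keccak-256 over ABI-encoded data.
    return hashlib.sha3_256(b"".join(parts)).digest()

state_change_data = b"<abi-encoded Call[] goes here>"
consensus_data = b"<abi-encoded consensus parameters>"

# stateChangeHash commits only to the execution payload...
state_change_hash = h(state_change_data)
# ...while proposalHash additionally commits to the consensus data.
proposal_hash = h(state_change_hash, consensus_data)

# The same state change can be paired with different consensus data,
# yielding distinct proposal hashes.
assert proposal_hash != h(state_change_hash, b"other consensus data")
```

The layering is the point: a chain that only executes needs to verify just the state change hash, while the consensus machinery commits to the full proposal hash.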

To make this data available off-chain we’re going to need two events.

Let’s emit the first event from a contract we will call the StateChangeOrigin:

interface StateChangeOrigin {
    event StateChangeCreated( stateChangeHash, ...stateChangeData );
}

This event includes the computed stateChangeHash as an indexed topic to make proposal discovery easier.

The second event captures the consensus data. By its nature, consensus must occur in a single place, so let’s call the second contract the ConsensusRoot:

interface ConsensusRoot {
    event ProposalCreated( proposalHash, stateChangeHash, ...consensusData);
}

The ProposalCreated event emits the proposal hash as an indexed topic as well.

By listening for events from a group of StateChangeOrigin and ConsensusRoot contracts a viewer will be able to put together the whole picture of the proposal. These contracts may or may not live on the same chain, or they could even be the same contract!

View State Changes

Users must be able to see what the state changes are. We need to standardize the data format for multi-chain calls. What do we need to know for a call? At a minimum, we need to know:

  • The chainId on which the call is occurring
  • The caller that performs the call
  • The target of the call
  • The calldata for the call

We can define the “state change data” as being an array of structs of the above:

struct Call {
    uint chainId;
    address caller;
    address target;
    bytes callData;
}

interface StateChangeOrigin {
    event StateChangeCreated( bytes32 indexed stateChangeHash, Call[] calls );
}

Now users can see what changes a proposal is going to make. For now, let’s hand-wave the details of stateChangeHash = hash(calls).
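Even hand-waved, it may help to see one possible shape of hash(calls). This Python sketch uses a simplified fixed-width layout and sha3-256 as a stand-in for keccak-256; a real spec would pin down the ABI encoding instead:

```python
import hashlib

def encode_call(chain_id: int, caller: str, target: str, calldata: bytes) -> bytes:
    # Simplified fixed-layout encoding; the spec would use ABI encoding.
    return (
        chain_id.to_bytes(32, "big")
        + bytes.fromhex(caller[2:]).rjust(32, b"\x00")   # left-pad address to a word
        + bytes.fromhex(target[2:]).rjust(32, b"\x00")
        + calldata
    )

def state_change_hash(calls) -> bytes:
    # hash(calls): concatenate the encoded calls and hash once.
    encoded = b"".join(encode_call(*c) for c in calls)
    return hashlib.sha3_256(encoded).digest()  # keccak-256 on-chain

calls = [
    (1,   "0x" + "11" * 20, "0x" + "22" * 20, b"\x01\x02"),
    (137, "0x" + "33" * 20, "0x" + "44" * 20, b"\x03\x04"),
]
# Reordering calls changes the hash, so call order is part of the proposal.
assert state_change_hash(calls) != state_change_hash(list(reversed(calls)))
```

One detail a real encoding must handle (and this sketch does not) is length-prefixing the variable-length calldata, so that two different call lists can never encode to the same byte string.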

Know the Proposal State

Users need to know whether a proposal was successfully executed. This data should be available both on-chain and off-chain through events.

Given that a proposal can be executed on multiple chains, we’ll need to track execution per caller.

The Caller contract is the one that executes the proposal, so it must provide an event and on-chain accessor:

interface Caller {
    event Executed(bytes32 indexed proposalHash);

    function wasExecuted(bytes32 proposalHash) external view returns (bool);
}

The Executed event must be emitted when the execution occurs. The wasExecuted function allows on-chain contracts to determine if a proposal passed, and behave accordingly.

Summary

Let’s bring the above all together:

struct Call {
    uint chainId;
    address caller;
    address target;
    bytes callData;
}

interface StateChangeOrigin {
    event StateChangeCreated(
        bytes32 indexed stateChangeHash,
        Call[] calls
    );
}

interface ConsensusRoot {
    event ProposalCreated(
        bytes32 indexed proposalHash,
        bytes32 indexed stateChangeHash,
        bytes consensusData
    );
}

interface Caller {
    event Executed(bytes32 indexed proposalHash);

    function wasExecuted(bytes32 proposalHash) external view returns (bool);
}

These interfaces will allow a third party to:

  • Find proposals (via indexing)
  • View the state changes (by interpreting encoded event data)
  • Know whether a proposal has been executed (by looking at multi-chain callers)

Open Questions

  • Is this enough for a multi-chain proposal MVP? Does this spec need more? Is it too much?
  • What details are missing?
    • Add value and delegateCall to the Call struct?
    • Add bytes extraData to the StateChangeCreated event for extensibility?
  • Is this spec flexible enough to support multiple voting EIPs?

Note on Gas

You can ballpark the gas usage using evm.codes. The most expensive part will be the hashing of calldata, so let’s calculate the hashing costs:

  • Estimate 6 words per Call struct
  • Estimate 6 calls per Proposal
  • Estimate each proposal has 4 words of consensus data

Total bytes per proposal: (6*6+4) * 32 = 1280

Plug that into the SHA3 opcode and it costs 393 gas (worst-case). A cold SLOAD is 2100, so it’s cheaper than loading from storage.
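The arithmetic behind that number, using the Yellow Paper costs (KECCAK256: 30 gas plus 6 per 32-byte word; memory expansion: 3w + ⌊w²/512⌋ for w words):

```python
words_per_call = 6
calls_per_proposal = 6
consensus_words = 4

total_words = words_per_call * calls_per_proposal + consensus_words  # 40 words
total_bytes = total_words * 32
assert total_bytes == 1280

# KECCAK256 opcode: 30 static + 6 per 32-byte word of input.
keccak_gas = 30 + 6 * total_words  # 270

# Worst case also pays to expand memory to hold all 40 words:
# C_mem(w) = 3*w + floor(w^2 / 512)
memory_gas = 3 * total_words + total_words**2 // 512  # 120 + 3 = 123

assert keccak_gas + memory_gas == 393  # matches the 393 gas worst case
```

So the 393 figure is the hash cost plus worst-case memory expansion, and it indeed comes in well under one cold SLOAD (2100 gas).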

Summary

This pared-down version of the EIP keeps the proposal standard, but is flexible enough for implementations to tackle consensus their own way.

What do you all think of this? Is this more narrow scope a better starting point?

1 Like

Thank you @Brendan for kicking this off, and Auryn and Nathan for your contribution thus far.

Just wanted to signal our interest from 0x protocol on the topic. We have deployed the protocol on 7 different EVM-compatible blockchains and are planning to set up an on-chain binding “embassies + central government” governance system, controlled by ZRX holders.
We implemented a community treasury with on-chain binding voting on L1, and that might evolve into ‘local treasuries’ too.

The GovernorBranch/GovernorRoot pattern seems promising at first glance. Intuitively, I tend to prefer leaving voting strategies and voting power calculations to each protocol/system, and landing on a standard for cross-chain state updates.

We’ll be watching this discussion closely! :pray:

3 Likes

Hi, Raf from Tally here. I was talking about this spec with Brendan offline a few weeks ago. I’m finally catching up on this thread. I like the idea of splitting the two problems – cross-chain voting and cross-chain execution – into separate specs. It seems easier to agree on a path forward and to solve them that way.

Lots of DAOs are looking at how to solve this problem. I’ve also been following the discussions of how Uniswap DAO plans to solve it.

In our experience at Tally, storing stuff on-chain is expensive, but it’s a huge benefit for interoperability and UX. Maybe a happy medium would be to include an optional URI field that points to the un-hashed data. The URLs can obviously become unavailable, but that at least makes UX easier when everyone is cooperating. It also makes it more obvious when someone isn’t cooperating by withholding data.

Naming this event ProposalCreated causes a collision with the OZ Governor event of the same name. Do we want to reuse the name? That makes it impossible to create a contract that implements both the OZ Governor interface and this cross-chain execution interface. I’d suggest using a different name.
Alternatively, we could overload the existing ProposalCreated event by putting the body of cross-chain event in the description. That’s a pretty ugly hack, but it would give us backwards compatibility without needing to upgrade all the existing contracts.

I’d also suggest including canonical start and end times for the voting period in this event. Otherwise, how will the off-chain votes know when to start and end the vote? We should use wall clock times, not block heights, because we can’t assume that other chains know the block heights on the root chain.

Even if we do have start and end times, voters can still do timing attacks by moving tokens between chains because the vote won’t start at exactly the same time on all chains. I’m not sure how big of a deal that is. I’m ignoring it for now.

What’s the correct behavior for a proposal that passed, but whose execution reverts due to an error? We could only allow execution once, but the error may go away in the future. I would suggest letting anyone keep trying to execute the proposal, but only for X amount of time. I don’t think we need to change the interface to support that possible requirement. It’s internal to the execution logic on each chain.

Overall, I like the direction that this spec is going. I’ll post more thoughts as I come up with them.

2 Likes

Proposal Data URI

This is interesting to me! My assumption was that including the events would be enough, as the proposal data would be emitted. Do you feel that events are insufficient or impractical?

We’ve also found that events aren’t the most convenient, as they require an archive node to access. We’ve been using subgraphs for data, but the Graph network doesn’t support all chains.

If we added a proposalDataUri as you suggested, then the proposal data could be sourced from anywhere. For example, the data could be stored on:

  • IPFS
  • Web 2 host
  • The Governor Branch contract itself (proposal origin)
  • A separate chain used for data availability

To support all of the above, it makes me think:

  • The spec will need to define a JSON schema (just like ERC721 metadata)
  • We could additionally specify that the uri can be application/octet-stream with the standard ABI encoded data; that way contracts can store the data themselves (if gas is cheap :wink: )
  • Is it possible to use a separate chain for data availability? Can a URI point to a function on a chain?

I like the proposal data URI idea! Having a JSON schema and abi-encoded application/octet-stream would go a long way in terms of flexibility (and we can always still have the event).

Timestamps

This is an interesting one…in the voting spec above I outline how “epochs” can prevent double voting (h/t to @frangio for the idea!). The “start epoch” of the vote would be recorded; not the wall clock start time.

It seems to me that start and end times are actually a property of voting, so I hesitate to include them in the proposal spec. However, perhaps we can broaden the meaning: the “start time” could instead be the more abstract “created at” timestamp of the proposal. It would be useful for proposal display, regardless of voting. The ‘end time’ of a proposal may not be applicable to all implementations. Thoughts?


@auryn I’ve heard that the Gnosis Chain is going to be used as a kind of “governance chain”. Is there substance to that? What do you think of the data availability aspect?

Hey folks!! I’ve been following this conversation closely and have so much to chime in with!!

For those who don’t know me, I’m the Protocol Lead at Nomad, and have been working closely with @auryn and Nathan on the aforementioned Zodiac module.

Me me me :sparkles:

Execution vs. Voting

First things first: I strongly agree with the idea of separating concerns between proposal execution and voting. These are two different design spaces which both need to be solved for. I recommend we start with cross-chain proposal execution.

Reasoning

  1. For a cross-chain protocol, voting doesn’t necessarily need to be cross-chain, whereas proposal execution almost certainly does. A lot of large existing DAOs have well-working solutions for voting on a single chain (e.g. GovernorBravo), which they may not want to migrate from any time soon; many of these same DAOs have very broken experiences for cross-chain proposal execution, which is becoming more pressing to solve.
  2. Multichain voting is a much larger design space, whereas cross-chain proposal execution is a tighter design space to start with. I have participated in a lot of discussions with teams about multichain voting already, and the needs can vary widely per-team. Over time I am definitely excited to participate in how this design space evolves!

GovernanceRouter.sol

Next things next: I was glad to see some of the design conclusions @Brendan came to in his post about Multi-Chain Proposals, as many of them tightly mirror the Nomad GovernanceRouter.sol contract, a contract for cross-chain proposal execution which I designed over a year ago!! (Anyone who checks it out, feel free to provide feedback!)

Similarities

(1) A struct of calls to be passed across chains:

    struct Call {
        bytes32 to;
        bytes data;
    }

Two questions about the proposed struct:

  • address caller: Why include this field? Message-passing protocols (that I know of) include the address of the caller on the origin chain so that it can be authenticated. In my mind it would be an attack vector for this to be configurable by end users, unless I misunderstand.
  • chainId: I felt it was best to batch an array of Calls on a per-chain basis. The array of Calls for each destination chain is hashed and sent to that chain, to be executed in one atomic batch. In that world, a chainId wouldn’t need to be part of an individual Call struct; it’s associated with an array of Calls. I think that atomic execution of all of the Calls on one chain is quite desirable to maintain.

(2) An event emitted when a state change is executed, mapping to event Executed above:

event BatchExecuted(bytes32 indexed batchHash);

(3) A queryable function inboundCallBatches which tells the caller the status of a batch, similar to the function wasExecuted above.

    // call hash -> call status
    mapping(bytes32 => BatchStatus) public inboundCallBatches;

    // The status of a batch of governance calls
    enum BatchStatus {
        Unknown,    // 0 - batch not yet delivered. may not exist.
        Pending,    // 1 - batch delivered, but not executed
        Complete    // 2 - batch executed
    }

Note: in my experience executing cross-chain proposals, returning the status enum - not a bool - has been very helpful in practice, so that off-chain actors know when it is possible to execute a batch. See this script, which checks whether a batch has been received, then executes it.

Differences

(1) I opted not to emit an event similar to StateChangeCreated, because Nomad emits an event for every message sent, but I would be super open to revisiting that decision.


(2) An event emitted when the cross-chain message containing the state change is delivered:

    event BatchReceived(bytes32 indexed batchHash);

Again, it’s helpful to know about that additional intermediary step for a Batch, before the Batch is Executed.
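A small off-chain model of why the three-state enum plus the BatchReceived step is useful: a relayer can distinguish “not yet delivered” from “deliverable” before attempting execution. The `Remote` class and method names here are illustrative, not Nomad’s actual API:

```python
from enum import Enum

class BatchStatus(Enum):
    UNKNOWN = 0   # batch not yet delivered; may not exist
    PENDING = 1   # batch delivered, but not executed
    COMPLETE = 2  # batch executed

class Remote:
    def __init__(self):
        self.inbound = {}  # batch hash -> BatchStatus

    def status(self, batch_hash):
        # Mirrors the inboundCallBatches public mapping.
        return self.inbound.get(batch_hash, BatchStatus.UNKNOWN)

    def receive(self, batch_hash):
        # Transport layer delivers the batch ("BatchReceived" step).
        self.inbound[batch_hash] = BatchStatus.PENDING

    def execute(self, batch_hash):
        # Off-chain actors poll status() and only execute PENDING batches.
        if self.status(batch_hash) != BatchStatus.PENDING:
            raise RuntimeError("batch not executable")
        self.inbound[batch_hash] = BatchStatus.COMPLETE

r = Remote()
assert r.status("0xbeef") is BatchStatus.UNKNOWN  # bool couldn't express this
r.receive("0xbeef")
r.execute("0xbeef")
assert r.status("0xbeef") is BatchStatus.COMPLETE
```

With a plain bool, UNKNOWN and PENDING collapse into `false`, so a relayer cannot tell whether an execution attempt would succeed.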

3 Likes

Ah good point, thanks for clarifying. I agree that event logs are sufficient. I didn’t realize that your proposed StateChangeCreated and ProposalCreated include plaintext data before hashing. That should be enough for anyone to reconstruct the proposal.

Yes, I agree that we want to separate cross-chain execution from cross-chain voting. The cross-chain voting spec should have its own event with whatever metadata the branches need. We can figure out the right fields, such as epoch vs created_at, when we implement or create a spec for voting.

Awesome. Thanks for sharing! Seeing your working implementation helps me a lot to understand the details, especially around keeping track of branch execution with BatchStatus and BatchReceived. That bookkeeping state seems super-useful.

A StateChangeCreated event might be helpful to abstract away the bridge. Indexers might not want to have to keep track of events on bridges, and different bridges might not even emit the same events.

1 Like

I’m so glad to hear that!! This implementation has been putting in work, as Nomad executes cross-chain proposals with it regularly (most commonly to deploy & enroll a new chain).

I totally agree it would be very nice to have an event indexers could pick up that indicates initiating cross-chain call execution :slight_smile:

Do you think that event should be emitted once for each destination chain, or once with a bundle of every destination chain? Personally, I’d say once per destination. Perhaps a higher-level Proposal event could include a bundle.

As a side note, I’d propose exploring some more general naming:

    event CrossChainCall(uint32 destination, bytes32 callHash, Call[] calls);

Hey Anna! Nice to meet you. I did a deep-dive into Optics before Nomad emerged. I like the tech.

You’ve raised a lot of great points, in particular:

  • Batch semantics. Group calls by caller
  • Batch status: a function on the executor lets us know the status of the batch

Both of these make a ton of sense to me.

However, you mention that you don’t see a need for the address caller. After looking at the GovernanceRouter code, I think I see something very interesting!

Imperative vs Declarative

Here is what I’m looking at in particular:

function executeGovernanceActions(
    GovernanceMessage.Call[] calldata _localCalls,
    uint32[] calldata _domains,
    GovernanceMessage.Call[][] calldata _remoteCalls
) external onlyGovernorOrRecoveryManager;

While you haven’t explicitly declared the caller and the chainId for the array of calls, they are implicitly encoded in the _domains array and by who is listening on the other end. IIRC the domain is an xApp connection, so you are defining the receiver of each remote call batch. (afaik there is no multicasting yet?)

In a sense, Governance Router proposals are imperative in that the proposal is telling the router how to execute the proposal. A third party looking at the proposal would have to know the Nomad transport semantics to know who is calling who on what chain.

By including the caller and chainId, the cross-chain proposal becomes declarative; it doesn’t care how the bridging is accomplished, but it knows what the outcome will be. Contract X on chain Y will call function Z. If there are multiple batch executors on one chain, then each knows what it must do.

Is there a way we can marry these two? How can we keep the proposals transport-layer agnostic?

Ahh, I see what you mean now. address caller was meant to indicate the contract on the destination chain that will receive the cross-chain message containing the calls, and then execute them. So there is value in including the address of the caller in the emitted event, so an indexer can “know” where to look for these events - however, there are a couple of things to note about the caller role.

Specialized Caller

Regardless of the transport layer being used, the caller must be a special contract which implements a function that makes it capable of receiving cross-chain messages from that transport layer. In Nomad, that function is called handle; in other transport layers there are other names (there is no cross-chain message-passing standard, yet!)

The caller does the following:

  1. handle incoming cross-chain messages from some transport layer
  2. perform access validation on the message (e.g. check it’s coming from a permitted source)
  3. decode the Calls within the message
  4. execute the Calls

To perform this role to the fullest, the caller would be the contract in control of permissioned roles on a protocol. In most cases, those permissions tend to aggregate to a single contract. For example: GovernorBravo usually holds all the permissions for a given protocol, and often custodies treasury funds too.
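The four steps above, modeled off-chain in Python (the `CrossChainCaller` name, the (domain, address) controller tuple, and the call shape are illustrative assumptions, not any specific transport layer’s API):

```python
# Sketch of the four steps a "caller" contract performs.
# Step 1 (receiving from the transport layer) is modeled as handle()
# being invoked; steps 2-4 are validation, decoding, and execution.

class CrossChainCaller:
    def __init__(self, controller_domain, controller_addr):
        # Only messages from this (domain, address) tuple are accepted.
        self.controller = (controller_domain, controller_addr)
        self.executed = []

    def handle(self, origin, sender, calls):
        # 2. perform access validation on the message source
        if (origin, sender) != self.controller:
            raise PermissionError("unauthorized sender")
        # 3-4. decode and execute each call (here: just record them)
        for target, calldata in calls:
            self.executed.append((target, calldata))

caller = CrossChainCaller(1, "0xRootGovernor")
caller.handle(1, "0xRootGovernor", [("0xToken", b"mint()")])
assert caller.executed == [("0xToken", b"mint()")]
```

The key property is that authorization lives in the receiver: a message from the right address on the wrong domain (or vice versa) is rejected.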

Multiple Callers?

I’d estimate that, because

  1. the caller must be a special contract, and
  2. the caller holds privileged permissions on the protocol,

the vast majority of DAOs will opt for one caller contract per chain.

That being said, it doesn’t really matter - we can emit the following event once per (destination, caller) tuple:

    event CrossChainCall(uint32 destination, address caller, bytes32 callHash, Call[] calls);

Knowing that, usually, there will be just one per chain.

(Incidentally - at Nomad, we think of the (chainId, address) tuple as an address’ “identifier” in the cross-chain world. Just an address is no longer sufficient.)

Naming

Personally, to me the name caller is a bit confusing because there are so many potential callers in the process.

  • the EOA that sends the transaction that initiates the cross-chain call on the sending chain
  • the contract that actually calls the transport layer to initiate the cross-chain call on the sending chain
  • the EOA that sends the transaction that executes the calls on the destination chain
  • the contract that actually executes the calls on the destination chain

All of these could be fairly called a caller. What do y’all think?

2 Likes

I agree that caller is too generic, and that it will need specialized logic in order to receive an authorized batch of calls. Being less abstract will make the spec much clearer. Call it Branch? Remote? Open to ideas.

It feels like the conversation is starting to gain focus, so I want to take a step back and reframe what we’re talking about.

Recap

The proposal lifecycle can be boiled down into three steps:

  1. Proposal is created
  2. Proposal is voted on
  3. If passed, proposal is executed

This thread has touched on all three of these steps, but we have the most common ground in the third: proposal execution. Most protocols will need to coordinate and execute state changes across multiple chains. That’s a given.

In fact, this is exactly what Toby has written up in the Uniswap Universal Governance Module. They want to remote-control contracts.

The Uniswap RFC says that they are evaluating different vendors. In all likelihood these vendors have different interfaces: the resulting module will be proprietary.

It seems like we all agree that cross-chain execution is the biggest and most common pain point: so how about we start there? I would love to be able to swap different bridges: for example start with a native bridge then swap out for Nomad or another solution (or straight to Nomad :wink: )

You used a word in the Governance Router that I thought captured it well: remote. We’re talking about “Remote Execution”.

Standard for Remote Execution

It seems that we all want a standard for Remote Execution. This is really ground zero for a multi-chain system: executing cross-chain calls. The standard should be comprehensive enough to be useful, but small enough to be easy to implement.

What should the goals of the standard be? Perhaps:

  • Make it easy to trace protocol execution across multiple chains, regardless of transport layer.
  • Make it easy to swap out transport layer

Tracing would be easy to do, as we really just need to standardize events, like we’ve been talking about.

Being able to swap out the remote execution puts more constraints on the implementation, but would be incredibly useful.

Using your language, imagine we had two contracts:

  • Router: sends execution batches to remotes
  • Remote: receives batches and executes them

The spec could define that:

  • remotes are keyed on (chainId, address)
  • events are emitted to help with tracing
  • a Router function sends batches
  • a Remote function checks batch status
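A rough off-chain sketch of the Router side of that outline, showing the (chainId, address) keying and a pluggable transport layer (all names here are hypothetical, not from the spec):

```python
# Sketch of the Router side: remotes keyed on (chainId, address), a send
# function per batch, and an event record for off-chain tracing.
# The transport is injected, so swapping bridges doesn't change the Router.

class Router:
    def __init__(self, transport):
        self.transport = transport       # pluggable transport layer (any bridge)
        self.events = []                 # stand-in for emitted trace events

    def send_batch(self, chain_id, remote_addr, batch_hash, calls):
        key = (chain_id, remote_addr)    # remotes keyed on (chainId, address)
        self.transport(key, batch_hash, calls)
        # Standardized trace event, regardless of which bridge carried it.
        self.events.append(("CrossChainCall", chain_id, remote_addr, batch_hash))

sent = []
# A trivial "bridge": just record what was handed to the transport layer.
router = Router(lambda key, batch_hash, calls: sent.append((key, batch_hash)))
router.send_batch(137, "0xRemote", "0xbatch", [("0xTarget", b"\x00")])
assert sent == [((137, "0xRemote"), "0xbatch")]
```

Because tracing events are emitted by the Router rather than by the bridge, an indexer sees the same event shape whether the transport is a native bridge, Nomad, or something else.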

Thoughts? I feel like we’re starting to gain some clarity here!

1 Like

Hey, not sure if this has come up yet, but have you considered using ChainAgnostic’s standards track? GitHub - ChainAgnostic/CAIPs: Chain Agnostic Improvement Proposals

I didn’t know about CAIPs! But what we’re talking about is pretty EVM-specific; I think it fits well here.

1 Like

Thanks everyone for narrowing the scope. Focusing on a standard for remote execution makes a lot of sense.

The risk now seems to be that such a standard could apply to ‘arbitrary’ remote execution, while we all started with the governance use case in mind.

Unless you guys think it won’t affect the design space that much, it might be useful to restrain this standard to governance applications to start. That can help address questions like:

  • Multiple callers/remotes/branches? Frankly, I think we don’t need that in the case of governance, as only one designated ‘branch’ contract should be able to receive, validate, and execute proposals. Agree with @anna-carroll here.
  • Naming being the most difficult problem in programming (esp. protocol programming), I actually quite liked the GovernorRoot/GovernorBranch proposal at the beginning of the EIP - however, I understand it could mix in proposal creation and voting, something we’re not interested in. Not really fond of caller.

My suggestion might be completely off. If it’s easy enough to abstract that to arbitrary remote execution, then let’s go for it!

1 Like

I think we can have our cake and eat it too! That being said, I agree @mintcloud. Let’s analyze it with our governance hats on so that we can nail that use case. Once we’re happy with it we can put our protocol hats on and see how well it works for other use cases.

Something I realized over the weekend was that the sender of the message can’t necessarily guarantee who the recipient is. The Router could include the Remote’s chainId and address in the message, but anyone can decide to execute that message. It’s not up to the Router. And for bridge implementations like Nomad, the recipient is implicit in the domain code.

Instead, it’s the Remote that must be aware of the Router. The remote must validate the message sender. This is evidenced by both the Nomad Governance Router and the OpenZeppelin Cross-Chain Aware contract implementation.

The Nomad GovernanceRouter has a handle function that receives remote calls. Note the onlyGovernorRouter modifier:

    function handle(
        uint32 _origin,
        uint32, // _nonce (unused)
        bytes32 _sender,
        bytes memory _message
    ) external override onlyReplica onlyGovernorRouter(_origin, _sender);

The new OpenZeppelin cross-chain contracts are implemented as being “cross-chain aware”, in the sense that they are aware of a cross-chain sender, and can authorize accordingly:

    function _crossChainSender() internal view virtual override onlyCrossChain returns (address) {
        return _sender;
    }

    function processMessageFromRoot(
        uint256,
        address rootMessageSender,
        bytes calldata data
    ) external override nonReentrant {
        _sender = rootMessageSender;
        Address.functionDelegateCall(address(this), data, "cross-chain execution failed");
        _sender = DEFAULT_SENDER;
    }

The OZ cross-chain plumbing is a little more low-level, but it illustrates how they are providing easy access to the cross-chain origin so that the receiver can authorize the call.

Finally, I do want to mention the Curve Gauges. While we’re thinking about this in terms of governance, it’s very clear that protocols will need this for their interactions as well. The Curve sidechain Gauges are a great example of this. They’ve cut and pasted gauge code for each of Arbitrum, Polygon, xDai, and others, then replaced a small piece of the internals to send a message across the relevant bridge. If they had had a remote execution abstraction, they could have reused much more code.

So with that being said, I would tweak the Remote Exec outline above to instead say that the Remote should be aware of the Router, and have accessors for the router / chainId.

Recipient Validates Sender

Yes, the message receiving contract should be “aware” of the address of the message sending contract.

Note the same functionality in the Zodiac Nomad module - it calls the authorized sender the “Controller”:

  /// Address of the remote controller which is authorized
  /// to initiate execTransactions on the module from a remote domain.
  address public controller;
  /// Domain of the controller which is authorized to send messages to the module.
  /// Domains are unique identifiers within Nomad for a domain (chain, L1, L2, sidechain, rollup, etc).
  uint32 public controllerDomain;

When the Zodiac module receives messages from Nomad, it validates that the message comes from the Controller:

    function handle(
        uint32 _origin,
        uint32, // _nonce (unused)
        bytes32 _sender,
        bytes memory _message
    ) external onlyValid(msg.sender, _origin, _sender) {

where onlyValid in turn calls

    require(isController(_senderAddr, _origin), "Unauthorized controller");

Cross-Chain Owner

You can think of the Controller as the cross-chain version of an Owner. Instead of using onlyOwner, which checks that

    msg.sender == owner

we instead have to check that

    origin == controllerDomain &&
    sender == controller

Again, the (domain, address) tuple is the unique identifier in the cross-chain world; an address alone is no longer sufficient.
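Put together, a cross-chain analogue of onlyOwner could be sketched like this (hypothetical modifier; names follow the Zodiac module’s storage above, and the bytes32-to-address cast is one common convention):

    // Sketch: the cross-chain analogue of onlyOwner, keyed on the
    // (domain, address) tuple rather than an address alone.
    modifier onlyController(uint32 _origin, bytes32 _sender) {
        // bytes32 -> address: take the low 20 bytes
        address senderAddr = address(uint160(uint256(_sender)));
        require(
            _origin == controllerDomain && senderAddr == controller,
            "Unauthorized controller"
        );
        _;
    }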

Naming, Again

I want to note that Router is a naming convention that @prestwich and I started using for cross-chain application contracts which are isomorphic in nature. That is, these cross-chain applications (xApps) have the same code on every chain; they must contain the logic for both sending and receiving messages.

Nomad’s Bridge and core Governance xApps use the Router pattern. They contain both message sending & message receiving logic, and the code is deployed the same everywhere.

The Zodiac Nomad module does not follow this pattern. It only implements receiving logic.

The contract you’ve described as the “Router” only implements sending logic - as such, I’d gently discourage using the term Router for that.


Hello,
I’m just joining in, and I’m having difficulty following your terminology. In particular, I don’t understand which contracts are on which chains. I’m also not sure I understand the:

My understanding is that when doing a cross-chain operation, such as executing a governance call, there is:

  • A caller (C), on chain #1
  • A receiver (R), on chain #2.

Am I correct in understanding that

  1. the router is on chain #1; it is called by the caller (C), and it triggers “something”
  2. the remote is on chain #2; it receives cross-chain messages, calls the receiver (R), and includes mechanisms so that R can figure out who C is?

In that sense, Router-Remote would form a bridge.

Or maybe I got it wrong, and the Router and Remote are both on chain #1, with the Remote being the bridge entry-point and the Router being a discovery mechanism.

For the record, I believe that the main issue in this space comes from the different bridges having such different interfaces. IMO, the “low-level” bridges should be very simple, with very few features (they should not care about replay tickets, for example), and all the features should be built in userspace.

Here is an example of such a bridge, which can be specialized for full-duplex communication between the Polygon root chain (mainnet) and the child chain

If such bridges were standard and widely available, we could build routers that allow chainId lookup and remote execution on any chain (for which a bridge is known).
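For example (purely a sketch; `BridgeRouter` and its registry are hypothetical), such a router could map chainIds to known bridge entry-points:

    interface IBridge {
        function sendMessage(address target, bytes memory data) external;
    }

    // Hypothetical router: given a registry of low-level bridges keyed by
    // chainId, it forwards an execution request to the right one.
    // (How bridges get registered/governed is omitted from this sketch.)
    contract BridgeRouter {
        // destination chainId => bridge entry-point on this chain
        mapping(uint256 => IBridge) public bridges;

        function execRemote(uint256 chainId, address target, bytes calldata data) external {
            IBridge bridge = bridges[chainId];
            require(address(bridge) != address(0), "no bridge for chainId");
            bridge.sendMessage(target, data);
        }
    }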


Yes, exactly! You nailed it.

Looking at your code, the IBridge contract would be roughly analogous to the “Router”:

    interface IBridge {
        function sendMessage(address target, bytes memory data) external;
    }

It’s just a matter of defining additional responsibilities for the receiver. By defining the receiver behaviour we can unify an interface across Nomad, the Polygon Bridge, and others.
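One possible shape for that unified receiver side (hypothetical; this is just to illustrate the direction, not the final spec) is a common interface that every bridge adapter calls into, passing the origin (domain, address) tuple so the target can authorize against it:

    // Hypothetical unified receiver interface: each bridge adapter would
    // deliver messages through this entry point, and the receiving
    // contract authorizes based on the origin (domain, sender) tuple.
    interface ICrossChainReceiver {
        function receiveMessage(
            uint32 originDomain,
            bytes32 originSender,
            bytes calldata data
        ) external;
    }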

I’m going to whip up a new EIP draft that captures what we’ve discussed so far, then create a new topic so that we can start a fresh conversation.


The draft of the EIP for Cross-Chain Execution has now been opened as a PR. Once an EIP number is assigned I’ll create a fresh discussion thread for us to continue fine-tuning the spec.

I’ve added everyone in this thread as a contributor, but if you don’t wish to be included, please DM me.

New Thread: EIP-5164: Cross-Chain Execution
