Modification of EIP process to account for security treatments

What’s the point of having a Security Considerations section in the EIP at all, then?

1 Like

I’ve said all of this before in other threads, so please forgive me for repeating myself.

I object to including security considerations surfaced after a proposal becomes final within the body of the proposal itself. To be clear, I do believe that these security considerations should be published somewhere, just not within the EIP.

My objection stems from the question of who has the authority to determine if a security disclosure is worthy of publishing. Within the EIP framework that exists today, there are two choices: authors and EIP Editors.

EIP Editors are not expected to have any technical knowledge about the proposals they oversee. Given the wide range of topics we see, it would be basically impossible to maintain any meaningful depth of expertise. Further, it’s important that the EIP process maintain credible neutrality. We don’t want to be put in a position where we could appear to make a decision for ulterior purposes, like personal gain. Choosing to (not) publish a vulnerability puts us in that position.

Authors, on the other hand, should be technical experts on their proposals, but should they be the ultimate authority on what is or isn’t a vulnerability? Once a proposal goes to final, I like to think of it as belonging to the community (before final, it belongs to the author(s)). A contrived example here is ERC-223: the proposal as written cannot differentiate between an EOA and a counterfactual contract (i.e. one created with CREATE2) that hasn’t been deployed yet; one might argue that this should be mentioned in the Security Considerations section, since it can lead to loss of tokens. While that example is relatively harmless, I can certainly envision a scenario where an author is incentivized to prevent publication of a vulnerability to protect their financial interests.
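To make the ERC-223 example concrete: a code-size check (what `extcodesize` does on-chain) returns zero both for an EOA and for a CREATE2 address whose contract has not been deployed yet, so the two are indistinguishable at transfer time. A minimal Python sketch of this ambiguity, with an invented toy world state and hypothetical addresses purely for illustration:

```python
# Toy model of Ethereum world state: address -> deployed bytecode.
world_state = {
    "0xContractA": b"\x60\x80",  # a deployed contract with some code
}

def is_contract(addr: str) -> bool:
    """Mimics an on-chain extcodesize check: non-empty code => contract."""
    return len(world_state.get(addr, b"")) > 0

eoa = "0xAliceEOA"                   # externally owned account: never has code
counterfactual = "0xCreate2Pending"  # valid CREATE2 address, not yet deployed

# Before deployment, the check cannot tell the two apart:
print(is_contract(eoa))             # False
print(is_contract(counterfactual))  # False -- same answer as the EOA

# Once code is deployed at the counterfactual address, the answer flips:
world_state[counterfactual] = b"\x60\x80"
print(is_contract(counterfactual))  # True
```

A token standard that gates behavior on “is the recipient a contract?” therefore misclassifies any counterfactual contract until the moment it is deployed.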

So if authors and EIP Editors are poor choices, what can we do? I see two options:

  • Publish every security disclosure without vetting, or
  • Publish no security disclosures.

Personally, I prefer the latter option.

2 Likes

I completely understand your point, and to address it I have point 2 in my proposal:

2. Write an Ethereum Security Guideline - a set of rules that an application-level standard/program must adhere to. Violation of any of the Ethereum Security Guideline principles must be considered a red flag and indicated in the Security Considerations section of the proposal upon disclosure.

So, the EIP editors will not have to decide whether to publish a vulnerability or not. Instead, EIP editors will review “appeals to indicate a violation of the security guideline in an EIP” and judge whether each is valid or not, which is a much simpler task.

This is how a vulnerability disclosure will be done:

There is a security guideline that has a set of rules.

EIP-X: Security Guideline
Every piece of secure software must:

1. XXX
2. YYY
3. ZZZ

Someone claims that EIP-123123 violates rule 1. They build a contract, deploy it on a testnet, and simulate a scenario under which we expect the contract to do XXX, but the transaction history shows the contract doing something else. If someone can artificially assemble such a precedent to demonstrate how EIP-123123 violates a rule of the security guideline, then we can add it to the Security Considerations section.
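The demonstration workflow above can be sketched as an executable check. Assuming a hypothetical rule 1 of the form “a transfer must not silently strand funds at an address that cannot handle them” (the rule text, addresses, and toy ledger here are invented for illustration, echoing the well-known ERC-20 stuck-token issue), a violation report could ship with a reproducible test like this:

```python
# Hypothetical guideline rule 1 ("XXX"): a token transfer must not
# silently strand funds at an address that cannot handle them.
# Toy ledger demonstrating how a violation is reproduced as a test.

balances = {"alice": 100}
contracts_without_token_support = {"0xNaiveContract"}  # cannot move tokens out

def erc20_style_transfer(sender: str, to: str, amount: int) -> bool:
    """ERC-20-style transfer: credits any address, no recipient check."""
    if balances.get(sender, 0) < amount:
        return False
    balances[sender] -= amount
    balances[to] = balances.get(to, 0) + amount
    return True  # succeeds even if `to` can never spend the tokens

# Reproducing the violation: the transfer "succeeds", yet the funds
# are now permanently stuck in a contract that cannot handle them.
stuck_address = "0xNaiveContract"
assert erc20_style_transfer("alice", stuck_address, 40)
assert balances[stuck_address] == 40
print("rule violated:", stuck_address in contracts_without_token_support)
```

The point is that the verdict comes from running the scenario against the rule, not from an editor’s personal judgment.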

So basically it’s not the EIP editors who decide whether something is a vulnerability or not; it’s the rules in the security guideline EIP. If we can agree on this set of rules once, we can use it afterwards. And dedicated security experts can work on the development of the security guideline EIP. This is what I propose.

This raises the question of “what is the goal?”. We should keep in mind that we are dealing with financial software here. The cost of a mistake can be huge. It’s not like you get errors in your browser console and you can simply ignore most of them and be fine. If someone makes a single mistake in our area then it means someone else will lose funds.

There is a good illustration of what we are discussing here, a script that calculates the amount of “lost” ERC-20 tokens: ERC-20 Losses Calculator

In 2017, $16K was lost due to a known security issue in the ERC-20 standard.
In 2018, it was $1,000,000.
In 2023, it is more than $90,000,000, and the amount is growing exponentially because nobody cares and we keep letting our users lose money.

So I would say that security must be a priority. I don’t see any problem with having the Security Considerations section of an EIP list all security disclosures, precisely because they need to be placed right in front of implementers’ eyes. If it helps prevent the exploitation of known-to-be-insecure implementations and saves our users from losing $90,000,000, then it’s a reasonable decision.

I maintain that there is no way to write a list of guidelines that can be evaluated objectively. If there was, we would’ve automated it and solved computer security once and for all.

2 Likes

It’s not possible to write a “formal specification for every possible issue in computer software”, I agree. But it is totally possible to write a guideline describing ten main principles of secure software development that, if violated, will inevitably result in lost funds for end users.

I’m not saying “we can solve all security problems with this proposal”.

I’m saying “we can prevent the most obvious issues with this proposal and save a lot of end users funds”.

Software security is not something new and revolutionary. It’s a well-developed area with a few common, well-known “standards” of what to do and what to avoid.

Software security is also not something abstract and inconsistent, like predicting the next market move. It has some strict basic rules that can be described.

If you google “secure software development principles” and review a few pages, you will find that they all describe the same things.

Option 2.1: let CVEs do the security disclosures.

Could this be done as a new ERC type called “Security” which can then be potentially backlinked to other EIPs on the website?

This is similar to the approach @Dexaran has taken with ethereum/EIPs#7915.

Publishing a new EIP listing a proposal’s flaws is more compatible with our process, for sure, but still runs afoul of my core objection: I don’t want Editors deciding whether or not to publish a security vulnerability.

So far, the best options to me are:

  • We make a wiki, or
  • We defer to CVEs.

I’m more concerned about what we should do with ERCs upon vulnerability disclosures. Right now I’m talking about application-level standards.

There can be only two viable options if we want to prevent financial damage to Ethereum users:

  • Mark a standard as “insecure” without modifying the specification/reference implementation. Recommend using other standards in production.
  • Fix the discovered vulnerability in the original ERC.

If we decide to outsource vulnerability disclosures somewhere and declare “we don’t have to deal with vulnerabilities ourselves, let vulnerable ERCs stay unchanged” then it will inevitably result in financial damage to the end users due to KNOWN vulnerabilities. This is not a goal to pursue in my opinion.

I think this is a great direction, but a pure wiki editable by anyone seems very risky for security purposes.

Something like Discourse Post Voting could be a good option. Create a new category in this forum called Security Disclosures, in that category there should be a topic for every ERC, and all topics should have voting enabled. Each post can contain a security disclosure following a specific format. CVEs can be optionally linked in each post.

It should also be possible to retract an ERC if it’s found to be inherently or irreparably insecure. For this we could have a new Retracted EIP status, or the Withdrawn status could be reused. Alternatively, only the reference implementation of an ERC may be retracted.

1 Like

This still runs into my primary objection: who decides if a proposal is inherently or irreparably insecure?

1 Like

I don’t know the degree to which you’re joking here, but a buddy of mine has been working on a cool prototype that integrates ActivityPub into CVE reporting for the IETF SCITT group. In this guy’s proposed model, CVE reporting pipelines would create “posts” (events) in the fediverse for all that follow the appropriate accounts per codebase to be notified about and interact with/comment on. Imagine if we were having this conversation in the comments thread of an automated CVE post, hehe.

Not joking at all. And this workflow if it ships makes my argument a bit stronger.

I am a bit concerned about the low barrier to posting a CVE, which is mostly busywork if not done through tooling like GitHub. But it would be a signal that whoever posts the security notification thinks it’s worth the effort of following the CVE process. And I am assuming there is interest and desire in the CVE community to keep the signal high so it doesn’t devolve into NNTP levels of spam.

That is what I am hoping for by offloading the decision of “what is a security issue” to another team: one with a vested interest in keeping their “this is a security threat” signal high. (And then the ordinals CVE hit…)

1 Like

Ugh it seems a tidal wave of generative-AI spam is already headed for CVE pipelines and FOSS maintainers alike:

I opened an issue on EIPIP about one possible way of publishing documents that update finalized documents modeled on how IETF RFCs do it:

It’s not a perfect solution but I think it might be enough for everyone to get what they want

Maybe… just maybe the answer isn’t to give authority to specific authors, but rather to give authority to groups and roles. This way, the people who fill those roles and groups can continue to provide authoritative answers to “who determines it’s a security issue” rather simply.

I wonder if any other groups have tried this before and it worked for them?

2 Likes

As I understand it, at least one editor actively doesn’t want the editors to have that role. IMHO that’s unfortunate, but apparent.

So the challenges are defining a group that would collectively take on the role, and keeping that group viable and trusted. This is hard. We’re talking about money, as @Dexaran noted, so there are incentives to behave badly as well as imperatives to make a real effort to get this right. (At least a moral one not to have people losing badly, and a self-interested one to make sure this ecosystem is trusted enough to succeed as a platform).

I contribute to a security specification for Solidity smart contracts that might be something of a model for a set of criteria to measure against, making the decision less about one person’s personal motivations and more objective. @SamWilsn is right that if this were easy to solve we would have automated the whole thing years ago - and we haven’t because it isn’t. Under all the turtles, we rely on consensus-building and honest response to challenges as a proxy for truth, the Ethereum way.

But some of it - an increasing amount - is automatable. There is also a lot that people will agree on, even where they have disagreements of varying intensity. Having a mechanism that captures at least the bulk of that, with a more opinionated stance than “DYOR :person_shrugging:” as @bumblefudge has proposed, seems valuable. It still needs some humans to take some responsibility, and without understanding how we get that, I well understand @SamWilsn’s reluctance…

1 Like

Are EIPs/ERCs considered the same as a pseudo-legal deed (read: covenant without endorsement)? Here are some SEPARATE suggestions drawn from what huddles of lawyers have come up with over the centuries:

  1. For each and every EIP/ERC, have an append-only Corrigenda section. This reflects that early specs might not have had enough eyeballs, or that some quirky EVM implementation might have unanticipated outcomes (cf. the historical Intel hardware floating-point bug).

  2. Include within the Security section an instrument for an opt-in register of contracts to be notified in case there are future Deeds of Variation… this does shift away from the presumption that the EIP is treated as an atomic document.

  3. Have a separate opt-out registry (including originating authors) of contracts/people/orgs to be consulted if changes are needed. This is the most flexible, as it could be either a revised later EIP or just some side notes on implementation/operations. However, it creates longer-term technical debt, though it could also be useful for forks to evaluate/compare and then revert before the final call.

ECH appears to be a volunteer activity (correct me if wrong), so hiring experts would be a major change of ethos. On the other hand, with the recent split of EIPs/ERCs, there may be space for Application Specific Knowledge (ASK) domains in law, accounting/auditing, and secOps/cyberSec. Think of it as a functional constituency, or a roster of volunteers available to call upon. However, it does bring up another thorny question of remuneration (having experts like lawyers on contingency is not cheap). 2 & 3 may be doable within GitHub with a bit of one-off scripting.

1 Like