Proposed gathering of the Security community

I am as well. One of my projects is aiming to use zkSNARKs to handle transactions of a sensitive token.

However, this conference will not focus on software design ideas, only on how we ensure developers write secure smart contract code.

Wiki and more organization happening here:

Hi all! My name is Yondon and I work on smart contract dev at Livepeer. We just deployed the first version of our smart contract system to mainnet a few weeks ago (https://github.com/livepeer/protocol) with plans to make improvements over time.

In addition to the topics already mentioned by others in this thread, I’m also interested in discussing how security requirements and audit processes can be scoped to take into account the stage of a system. For example, what mental framework do others use to determine the minimum level of security required to deploy an alpha version of a system in order to start testing assumptions about human behavior? Many of the contracts developers deploy are mechanisms that assume some behavior in response to their rules, behavior that will not be observed in practice until the contracts are live, and the system may be upgraded often in the future. Given that this is likely to become a more common case, can we transform the relationship with security auditors so that they become ongoing partners rather than providers of a one-time stamp of approval? These questions are motivated by a broader one: what best practices can enable smart contract developers to get some of the benefits of iterative development while still upholding a high level of security?

Definitely interested in attending this gathering and happy to contribute talks/content wherever appropriate!

This is a great point of conversation about expectations!

For me, I believe you should be testing out those mechanisms as well as you can through simulations, user behavior studies, and testnet trial periods (backed by real ETH, if incentivization is desired) before putting that code on the mainnet. You could do this before or during a security audit.
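As a toy illustration of the kind of pre-mainnet simulation described above, here is a minimal Monte Carlo sketch of testing a behavioral assumption in a staking-style mechanism. Everything here (the function name, the parameters, the payoff model) is invented for the example, not taken from any real protocol:

```python
import random

def simulate_participation(rounds, n_users, honest_prob, reward, penalty, seed=0):
    """Toy Monte Carlo of a mechanism's behavioral assumption.

    Each round, every user independently behaves honestly with
    probability `honest_prob`; honest users earn `reward`, dishonest
    users pay `penalty`. Returns the average payoff per user per round,
    a crude check of whether incentives point the way the design
    assumes. (Hypothetical sketch; all names are invented.)
    """
    rng = random.Random(seed)  # seeded for reproducible runs
    total = 0
    for _ in range(rounds):
        for _ in range(n_users):
            if rng.random() < honest_prob:
                total += reward
            else:
                total -= penalty
    return total / (rounds * n_users)
```

Running this across a grid of `honest_prob` values is a cheap way to see where expected payoffs flip sign before any real value is at stake.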

Upon a successful audit, you can move this alpha test to the mainnet, where you should gradually increase your risk as you gain confidence in the proper operation of your contracts. You can achieve this through “envelope expansion” that you personally shoulder the risk for. This means that the amount of valuable assets your framework handles should be limited to a comfort level based on a risk assessment, and increased only as you become comfortable with its proper operation. It should grow beyond your personal capability to shoulder the risk only once you have achieved that level of comfort in its proper design throughout the range of capabilities you’ve designed for. I call this the “public bug bounty” portion of your contract system (isn’t this what all smart contracts are right now?).
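The “envelope expansion” idea can be sketched as a simple value cap that the owner raises only as confidence grows. This is an illustrative Python model of what such on-chain logic might look like; the class and method names (`CappedVault`, `raise_cap`) are invented for the example:

```python
class CappedVault:
    """Toy model of an 'envelope expansion' deposit cap.

    The owner starts with a small cap they could personally cover,
    and raises it only after gaining confidence in the contract.
    (Illustrative sketch only; all names here are invented.)
    """

    def __init__(self, owner, initial_cap):
        self.owner = owner
        self.cap = initial_cap
        self.total_deposited = 0
        self.balances = {}

    def deposit(self, sender, amount):
        # Reject deposits that would push total value past the envelope.
        if self.total_deposited + amount > self.cap:
            raise ValueError("deposit would exceed current risk envelope")
        self.balances[sender] = self.balances.get(sender, 0) + amount
        self.total_deposited += amount

    def raise_cap(self, sender, new_cap):
        # Only the owner may expand the envelope, and only upward.
        if sender != self.owner:
            raise PermissionError("only owner may change the cap")
        if new_cap < self.cap:
            raise ValueError("cap can only be expanded, not shrunk")
        self.cap = new_cap
```

The key property is that total value at risk can never exceed a limit the developer has explicitly chosen to stand behind at that stage.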

This is where I think insurance would be handy, as a means for shouldering that risk and providing an assurance that you are capable of repaying your users in case an unintended event occurs. Over time, insurance will naturally limit the envelope as I was describing.

Upgradability should not be a shortcut for this process: you still have to do the majority of your effectiveness and design work up front, with contract upgrades reserved for a normal improvement cycle, when a vulnerability is found later or new functionality is to be added. You should have a plan of action for this: how often you will check for new vulnerabilities, and how you will handle them if discovered. Auditing should definitely be part of this upgrade process as well, with new features receiving a full audit (small changes should at least be peer reviewed, and vulnerability patches should receive enhanced attention).

We should get much more into this and discuss what a maintenance/support plan looks like. This might include something like instructions for white hats who discover vulnerabilities (and maybe even a continuous bounty program for privately communicating vulnerabilities). These should also be disclosed to the wider group of developers and auditors for assessment of existing contracts.

I definitely think this is worth discussing further. The concern here is that there must be sufficient separation between the auditor and the developer, or the relationship will not be effective. There is nuance we can work with here, but one thing I’ve mentioned previously is checkpoint events along the development cycle, where auditors are invited to review implementation plans before the design process goes too far, to avoid costly mistakes early on. This is similar to the waterfall or V model of software engineering, which (unlike agile) focuses on getting things right earlier, before the expensive layers of testing and audit happen (which in turn mitigate even more expensive “events”).

I think this is one of the main questions we want to answer at our gathering: how do we ensure more consistent and robust security?

Hello. What do I need to do to get my name listed on the participant list?

Your name will be added. Welcome. Do you have specific interests?

> Upon a successful audit, you can move this alpha test to the mainnet, where you should gradually increase your risk as you gain confidence in the proper operation of your contracts. You can achieve this through “envelope expansion” that you personally shoulder the risk for. This means that the amount of valuable assets your framework handles should be limited to a comfort level based on a risk assessment, and increased only as you become comfortable with its proper operation.

I definitely agree that leveraging simulations, user behavior studies, and testnet trial periods is crucial before an alpha release hits mainnet. I’m curious what you imagine envelope expansion would look like in practice, or whether you have any examples of well executed expansion processes in mind.

> Upgradability should not be a shortcut for this process: you still have to do the majority of your effectiveness and design work up front, with contract upgrades reserved for a normal improvement cycle, when a vulnerability is found later or new functionality is to be added.

I agree that upgradable contracts should not encourage developers to skip a thorough design process before implementation and deployment. That said, I do think centralized upgrade mechanisms could be another way to naturally limit envelopes. During an alpha, centralized upgrade mechanisms enable more efficient bug fixes and improvements, but they also drastically change the trust model of the system. As a result, users would perhaps be more hesitant to let the system handle many assets early on. This is like a “training wheels” mode for the system during an alpha (this is what we are currently doing for Livepeer); once the system reaches a more mature stage, upgrades might be executed via a more decentralized process.
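The “training wheels” progression described above could be modeled as an upgrade gate whose authorization rule is swapped out once the system matures. This is a hypothetical Python sketch, not Livepeer’s actual mechanism; all names (`UpgradeGate`, `decentralize`, the quorum check) are invented for illustration:

```python
class UpgradeGate:
    """Toy model of a 'training wheels' upgrade mechanism.

    During alpha, a single admin can upgrade the implementation
    quickly; once the system matures, authorization is handed
    over to a voting check instead. (Illustrative sketch only;
    names are invented for this example.)
    """

    def __init__(self, admin, implementation):
        self.admin = admin
        self.implementation = implementation
        # Starts in centralized "training wheels" mode.
        self.authorize = lambda sender, votes: sender == self.admin

    def upgrade(self, sender, new_impl, votes=None):
        if not self.authorize(sender, votes):
            raise PermissionError("upgrade not authorized")
        self.implementation = new_impl

    def decentralize(self, sender, quorum):
        # One-way switch: hand control to token-holder voting.
        if sender != self.admin:
            raise PermissionError("only admin may decentralize")
        self.authorize = lambda _sender, votes: (votes or 0) >= quorum
        self.admin = None
```

The point of the sketch is the trust-model transition: the same `upgrade` entry point exists throughout, but who can invoke it changes irreversibly when `decentralize` fires.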

> This is similar to the waterfall or V model of software engineering, which (unlike agile) focuses on getting things right earlier, before the expensive layers of testing and audit happen (which in turn mitigate even more expensive “events”).

While this model has the nice property of avoiding expensive “events”, I also see the downside of executing expensive layers of testing and auditing up front, only to realize that certain assumptions were wrong and that the deployed mechanism, while logically correct, does not achieve anything valuable or meaningful.

Although the agile model of development might not work for smart contract development given that it represents a much different paradigm than other forms of software engineering, I’d be interested to see if we as a community can nonetheless borrow some of those ideas around validation of assumptions and combine them with the more robust security practices that might be found in the waterfall model of development. Perhaps the idea of checkpoint events you mentioned could be a starting point.

I think of upgradability in the same vein as governance:

This could be a migration over to DAO control of the upgrade process through user voting. The developer would be an important signal in this vote, but ultimately control is left to participants.


The idea is to limit risk at each stage of experimentation. What is the specific goal of each stage, how is it successful, and what future stage does it enable? How can I demonstrate this in the least risky and most useful way possible? How much actual money needs to be in the system for it to be proven successful? Similar to:

A point I would make is that there should be a more structured way of identifying whether the mechanism will work with much smaller stakes (i.e. before an audit, or on a smaller codebase). This also feeds into the upgrade process, as you work on your platform and determine that the mechanism at play is not sufficient to achieve the goals you have set. My ultimate point is to have a better plan up front for validating assumptions in smaller steps, rather than with the full-on application dealing with the largest amounts of value. You are hinting at this as well in this quote:

These are very much the right tradeoffs to be considering, in my opinion. How do I create a plan of action that is flexible enough to respond to changing conditions as I learn more about my application, yet robust enough to respond effectively to those very same changing conditions under the unique demands of the Ethereum network?


The reason I’m asking more questions is that I think the answer is unique to each application. The real answer, in my opinion, is better planning and documentation, so we can do a better job communicating our intent to others and get excellent feedback from the largest possible number of knowledgeable people.

And also be extremely clear about how much money actually needs to be on the line to prove success; otherwise it will be more money than you anticipate, and more than your risk model will support.

> The idea is to limit risk at each stage of experimentation. What is the specific goal of each stage, how is it successful, and what future stage does it enable? How can I demonstrate this in the least risky and most useful way possible? How much actual money needs to be in the system for it to be proven successful?

Agree that these are all crucial questions to ask. In my opinion, more concrete examples of what this type of envelope expansion looks like in practice, with specific applications, would also make helpful educational reference material for developers. As you mentioned, the right process will likely look different from case to case, but just having a set of reference points demonstrating what secure staged experimentation might look like could teach people a lot, even if the approach you ultimately choose differs substantially. Perhaps one of the short presentations at this gathering could focus on walking through the design of these stages for one or more examples, with the subsequent discussion focused on extracting key techniques and common themes?

That is a great suggestion.

@RexShinka, @rpavlovs, and I are working on development guidelines next week with SecurEth. We intend to develop a structured walkthrough, very similar to what you have described, as a good example of what this process looks like and as a learning experience. We will definitely be presenting and asking for community feedback on these guidelines at the September event, so it is good to hear this sentiment echoed.

Common themes and key techniques are something we need to discuss during the event and codify for our community in an extremely accessible way.

Hi guys,

I’m Johann from parseclabs.org. We are working on Plasma chains for gaming. I’m researching the limitations of smart contract execution on Plasma, and would love to share my findings on the security and safety implications at the event.

Personally would be very interested in that! Working on several Plasma variations and interested in the general case of smart contract execution.

If smart contracts can be executed on the Plasma chain in much the same way as on the main chain (e.g. without any explicit tie except the native tokens linked to the Plasma token), would that be useful enough to a subset of the network to allow more use-cases?

> would that allow something useful enough to a subset of the network to allow more use-cases?

The simplest set of use-cases I see are Satoshi-dice-like applications that do conditional routing of funds, or that group and funnel payments, without taking custody.

Our interest at Parsec Labs is to enable games where funds can be held in custody, and the fair execution of game rules can be verified on chain.

The holy grail would be Plasma bridges that validate sub-chains, as described in the original Plasma paper.

For now I can say that the latter two are probably not possible without additional constructions, like a PoS set of notaries or similar, due to the data availability problem.

Hi, David from bloctrax here. I would love to join the event as a participant. I wrote this blog post about our philosophy of a smart contract security audit, in case anyone is interested and hasn’t seen it yet: https://medium.com/@bloctrax/philosophy-of-a-smart-contract-security-audit-1e111efd28cb

Good morning -
Patrick MacKay from Runtime Verification, Inc. We provide formal verification (protocol verification and code verification) of smart contracts. Our founder and CEO, @grosu, has already expressed our interest in attending the event; I’d like to confirm my intention to attend. I look forward to seeing everyone in early September. Do we have any specifics on the date and venue? I’d like to book my travel. Thanks in advance for your help. - Patrick

Hey Patrick, see this post for the venue and other information. It is confirmed.

https://ethereum-magicians.org/t/wiki-gathering-of-security-community

We are still working on the schedule.

We have created a smart contract security focused Discourse to talk further about related topics.

Please direct all further conversation on this topic (the security unconference before ETH Berlin) to this thread in the SecurEth Discourse: https://discourse.secureth.org/t/ethberlin-security-unconference-agenda-post-topic-here/

More general discussions of security topics should be directed to that Discourse as well.

I humbly suggest that the community doesn’t need yet another Discourse. Security is important, but there’s no need to create another forum with a different set of administrators. Thanks!
