This is a great point of conversation about expectations!
For me, I believe you should test those mechanisms as thoroughly as you can through simulations, user behavior studies, and testnet trial periods (backed by real ETH, if incentivization is desired) before putting that code on mainnet. You can do this before or during a security audit.
Upon a successful audit, you can move this alpha test to mainnet, where you should gradually increase your risk as you gain confidence in the proper operation of your contracts. You can achieve this through “envelope expansion”, where you personally shoulder the risk: the amount of valuable assets handled through your framework should be limited to a comfort level based on a risk assessment, and increased only as you become comfortable with its proper operation. It should grow beyond your personal capacity to shoulder the risk only once you have achieved that level of comfort in its proper design across the full range of capabilities you’ve designed for. I call this the “public bug bounty” portion of your contract system (isn’t this what all smart contracts are right now?).
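To make the idea concrete, here is a minimal sketch of an envelope-expansion policy, written in Python rather than as an actual contract. The class name, thresholds, and growth rule are all illustrative assumptions, not a prescribed design; the point is only that total value at risk is capped, and the cap grows after incident-free operation plus a review.

```python
# Hypothetical sketch of "envelope expansion": cap the total value a
# contract system handles, and raise the cap only after incident-free
# operation and a review build confidence. All names and numbers here
# are illustrative assumptions.

class ValueEnvelope:
    def __init__(self, initial_cap_eth: float, growth_factor: float = 2.0):
        self.cap_eth = initial_cap_eth      # current risk ceiling
        self.growth_factor = growth_factor  # how fast the envelope expands
        self.locked_eth = 0.0               # value currently at risk

    def can_accept(self, deposit_eth: float) -> bool:
        """Reject deposits that would push total value past the envelope."""
        return self.locked_eth + deposit_eth <= self.cap_eth

    def deposit(self, amount_eth: float) -> None:
        if not self.can_accept(amount_eth):
            raise ValueError("deposit exceeds current risk envelope")
        self.locked_eth += amount_eth

    def expand(self, incident_free_days: int, review_passed: bool) -> None:
        """Raise the cap only after a clean operating period and a review."""
        if incident_free_days >= 30 and review_passed:
            self.cap_eth *= self.growth_factor

env = ValueEnvelope(initial_cap_eth=10.0)
env.deposit(8.0)
assert not env.can_accept(5.0)  # 8 + 5 > 10: outside the envelope
env.expand(incident_free_days=45, review_passed=True)
assert env.can_accept(5.0)      # cap has doubled to 20 ETH
```

In a real deployment the cap would live on-chain (and insurance coverage, as discussed below, would bound it externally), but the shape of the policy is the same: the ceiling only moves after evidence of proper operation.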
This is where I think insurance would be handy: it provides a means of shouldering that risk and an assurance that you are capable of repaying your users if an unintended event occurs. Over time, insurance will naturally limit the envelope in the way I was describing.
Upgradability should not be a shortcut for this process; you still have to do the majority of your design and effectiveness work up front, with contract upgrades reserved for a normal improvement cycle, for when a vulnerability is found later, or for when new functionality is added. You should have a plan of action for this: how often you will check for new vulnerabilities and how you will handle them if discovered. Auditing should definitely be part of this upgrade process as well, with new features receiving a full audit (small changes should at least be peer reviewed, and vulnerability patches should receive enhanced attention).
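The review tiers above can be written down as an explicit policy, which makes them easier to agree on and enforce. A small sketch, with hypothetical category and level names (the strict default for unrecognized changes is my assumption, not something from the text):

```python
# Hypothetical upgrade-review policy: map the kind of contract change to
# the minimum scrutiny it should receive before deployment. Category and
# level names are illustrative, not a standard.

REVIEW_POLICY = {
    "new_feature": "full_audit",        # new functionality gets a full audit
    "vulnerability_patch": "enhanced",  # patches get enhanced attention
    "small_change": "peer_review",      # minor changes at least peer reviewed
}

def required_review(change_type: str) -> str:
    """Return the minimum review level for a proposed contract change."""
    # Assumption: anything not explicitly classified defaults to the
    # strictest level rather than slipping through unreviewed.
    return REVIEW_POLICY.get(change_type, "full_audit")

assert required_review("small_change") == "peer_review"
assert required_review("refactor") == "full_audit"
```

Writing the policy down this way also gives auditors something concrete to check the team against during the maintenance cycle.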
We should get much more into this and discuss what a maintenance/support plan looks like. This might include instructions for white hats who discover vulnerabilities (and maybe even a continuous bounty program for privately communicating them). Discovered vulnerabilities should also be disclosed to the wider group of developers and auditors so that existing contracts can be assessed.
I definitely think this is worth discussing further. The concern here is that there must be sufficient separation between the auditor and the developer, or the relationship will not be effective. There is nuance here that we can work within, but one thing I’ve mentioned previously is checkpoint events along the development cycle, where auditors are invited to review implementation plans before the design process goes too far, to avoid making costly mistakes early on. This is similar to the waterfall or V model of software engineering, which (unlike agile) focuses on getting things right earlier, before the expensive layers of testing and auditing happen (which in turn mitigate even more expensive “events”).
I think this is one of the main questions we want to answer at our gathering: how do we ensure more consistent and robust security?