Testing is a thankless task for developers. It’s a job that real testers like to do.
Nick Sawinyh @sneg55 09:06
More work and coordination for miners.
It’s not a big deal actually for most of mining pools.
Peter Salanki @salanki 13:00
+1 on Nick's comment. Updating node software for mining pools is very painless. Takes 5 minutes for me
Greg Colvin @gcolvin 14:34
Then we are down to the mob, which I don’t care about, and the lack of testing resources.
+1 Lack of testing resources.
Top developers and engineers know to own their code. They understand that the engineering includes testability: writing the test code itself and keeping tests reliable, efficient, maintainable, extensible, etc.
Yes, other eyes and engineers are required for producing robust code, but the culture needs to center on ownership rather than on other people/testers finding the bugs or security issues.
There are engineers who come from such cultures, but usually they are very comfortable in their big-company jobs.
I’m not writing this to argue but to clarify your points further. +1 to a lack of engineers on fronts like testing, security, performance…
Over the years I’ve seen many patterns on the relationship of development and testing. But when I have worked with professional test engineers I’ve found that they are simply much better at it than I am, as you would expect.
When I have been sole owner and tester of my code I can do fine. I spent three years writing one system that manifested only one bug in the next seven years in production. But I’m more productive when I can concentrate my efforts where I am most skillful, and let test engineers concentrate their efforts where they are most skillful.
This is independent of big vs. small company. The best test engineer I ever worked with was at a tiny startup. But it’s true that no big company I worked at failed to have professional test engineers on staff. Lots of them.
What’s been hardest here is maintaining ownership in an open-source environment that lets anybody on earth make PRs.
So one of the questions at 1x Berlin related to the preferred cadence of 1.x network upgrades; 6 months vs. 4 months was the point where consensus broke down. As an advocate of a 6-month cadence, the big advantage I see for client developers is that it keeps the clients focused on one upgrade at a time. With a 4- or 3-month cadence there will start to be some overlap between upgrades before they are “done.”
Here’s a chart I put together with some cheesy placeholder names for future upgrades that illustrates the overlap.
- “EIPs” is the deadline for EIPs to be considered.
- “Clients” is the client development soft deadline.
- “Mainnet” marks the respective hard fork dates.
These landmarks presume we keep the same Istanbul landmarks for future network upgrades.
6 month Cadence
4 month Cadence
With a 4 month cadence we would need to lock in EIPs before the previous upgrade has launched successfully. And the hard fork window falls in the same time period where devs would need to work on client implementations for the next version. If something goes wrong with the pending upgrade, the next upgrade will be severely impacted as well.
Because of this overlap between a mainnet launch and the next upgrade, I think anything less than 6 months is a higher-risk cadence than we should be looking to take on.
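The overlap arithmetic can be checked directly. As a sketch (the 5-month EIP lead time is my assumption drawn from the Istanbul-style timeline, not a fixed rule):

```python
# If EIPs lock in EIP_LEAD_MONTHS before each mainnet date, then a cadence
# shorter than that lead time forces the next upgrade's EIP deadline to land
# *before* the previous upgrade's mainnet launch.
EIP_LEAD_MONTHS = 5  # assumed: EIPs lock in 5 months before mainnet

def gap_months(cadence_months: int) -> int:
    # Months between the previous mainnet launch and the next upgrade's
    # EIP deadline. Negative means the deadline comes first (overlap).
    return cadence_months - EIP_LEAD_MONTHS

print(gap_months(6))  # 1  -> EIPs lock in one month after the previous launch
print(gap_months(4))  # -1 -> EIPs must lock in before the previous launch
```

So at 6 months there is a one-month buffer after launch, while at 4 months the EIP decision has to happen while the previous upgrade is still unproven.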
Makes sense. Though I would say that most of this is based on our “conventional” way of doing things, like:
- Assumption that people working on the EIPs are the same people who are developing major clients - this does not need to be the case. Working Groups are the attempt to change that.
- EIPs are often under-researched when they hit the roadmap, and there is large uncertainty about their implementability (effectively, the EIP process is used as a research process). If EIPs are higher quality and properly researched and prototyped, the risk of them being thrown out and causing knock-on effects is lower
- Test preparation is a fairly centralised process
Challenging these assumptions is something that would have benefits regardless of whether the cadence is increased, and therefore it is something to strive for.
@shemnon thank you this is great! I still have to figure out how to pull this into a summary so we can gather everything — thanks for doing “homework”.
@AlexeyAkhunov I think we can move to a more continuous / faster cycle, it will take time for people & processes to adjust rather than time to get work done!
Essentially “kanban” where EIPs are pushed through to ready and then bundled into releases.
To formalize what I said on AllCoreDevs on 26 April, on @AlexeyAkhunov’s point 1: if a fork goes poorly, as Constantinople did (with both testnet and mainnet problems), the devs who would implement changes for the next network upgrade are the same people fixing the problematic deployments of the previous one. If things go well, you are correct that they will not be the same people working on them.
If we get to a point where things are going smoothly at 6 months (both Istanbul and Asiago), we can re-evaluate for 4 months (with Brie and Cheddar, or Cheddar and Derby, overlapping). But neither Byzantium nor Constantinople was what I would consider smooth. Alexey’s points 2 and 3 are the indicators we will need to see before considering a 4 month cycle.
And to be clear, these cheesy names are placeholders until we get more formal ones. But when talking about particular network upgrades in the future we need some mnemonic that implies monotonic order.
A couple of things on my mind for Asiago -
- Create a new “EIPs Proposed” checkpoint two to three months after kickoff, i.e. 6-7 months before mainnet.
- Change the existing “EIPs” checkpoint to “EIP Ready,” 4 months after kickoff and 5 months before launch.
The new checkpoint will be when we close the door to new EIPs for the upgrade: the cutoff moves to 6-7 months before the upgrade rather than 5. This then gives time to discuss and hash over the proposed EIPs across two to four AllCoreDevs calls. Anything that gets kicked out of the previous upgrade (as EIP-1283 was) would automatically be considered as proposed.
EIPs don’t need to be well formed, they just need to be communicated as proposed with some reasonable bounds. Like “My fee market proposal as per my talk at Conference Y” or “State Fees steps W, X, and Z.”
At the existing 5-months-to-go mark, the proposed EIPs would need to be “ready” - i.e. someone would have come on ACD to champion the EIP, ACD would have discussed it on a call or in FEM (or some other appropriate forum), and the EIP would be in a condition where it could go into Last Call, Accepted, or Final. The idea being it is ready for client implementors to implement without concern for it being incomplete.
The 3 month (client implementations soft deadline) and 2 month (Testnet launch) checkpoints would remain the same.
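Putting the checkpoints above together, a minimal sketch of the resulting schedule (the month offsets and checkpoint names are taken from this discussion; the helper itself and month-granularity rounding are my assumptions):

```python
from datetime import date

# Hypothetical helper: derive checkpoint dates for an upgrade from its
# mainnet target month, using the offsets discussed above (7/5/3/2 months).
def checkpoints(mainnet: date) -> dict:
    def months_before(d: date, months: int) -> date:
        # Step back whole months; dates are pinned to the 1st since the
        # schedule here is month-granular, not day-granular.
        total = d.year * 12 + (d.month - 1) - months
        return date(total // 12, total % 12 + 1, 1)

    return {
        "EIPs Proposed": months_before(mainnet, 7),   # door closes for new EIPs
        "EIP Ready": months_before(mainnet, 5),       # championed & implementable
        "Client Soft Deadline": months_before(mainnet, 3),
        "Testnet Launch": months_before(mainnet, 2),
        "Mainnet": date(mainnet.year, mainnet.month, 1),
    }

# Example: an upgrade targeting November 2019.
for name, when in checkpoints(date(2019, 11, 1)).items():
    print(f"{name:>20}: {when:%b %Y}")
```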
| Date | Checkpoint |
| --- | --- |
| Nov 2019 | EIP Ready |
| May 2020 | EIP Ready |
| Nov 2020 | EIP Ready |
Future cheesy placeholder names:
‘Danbo’ - Oct '21
‘Edam’ - Apr '22
‘Fetta’ - Oct '22
‘Gouda’ - Apr '23
‘Hoop’ - Oct '23
‘Infossato’ - Apr '24
This will get us 5 years out, which is what, two major bear markets?
I suggest to use Devcon locations past Istanbul:
- (devcon 0: berlin?)
- devcon 1: london
- devcon 2: shanghai
- devcon 3: cancun
- devcon 4: prague
- devcon 5: osaka
After 2.5 (?) years that would mean we would need to have more Devcons or switch naming conventions
There was a joke told at CoreDevsBerlin that we should auction off the naming rights. I’m not convinced that is a bad idea.
a) Names must be flavorful and meaningful. A proposal should include the meaning. e.g. the “lima bean” upgrade would be rejected without some meaning.
b) Names cannot be the names of existing or known future projects or companies. e.g. the “Ethereum Foundation”, “Parity”, and “PegaSys” upgrades would not be valid names.
c) The name should be relatively benign and non-controversial. I would give examples of objectionable names but I’m pretty sure we all can think of some.
Why auction? The funds could be used for paying for AllCoreDevs meeting venue fees and catering/paying for 1-6 meals during these meetings.
As a suggestion to reduce confusion, we should use protocol version numbers with “Unnamed” or “TBD” if we don’t yet have a name, and keep the version number even once we know the names.
v8 - Istanbul
v9 - TBD
v10 - TBD
If you severely restrict what can be a name, then who would sponsor a generic random name? I’m not convinced it is a useful idea.
That’s fine but the longer people refer to them, the more likely they are going to be sticky. Who wants sticky cheese?
I was actually going to file one to get started and not leave submissions/reviews to the deadline (as with Istanbul).
I would want to avoid stuff like the “Sony PlayStation 5” upgrade or a hate-speech-derived fork name.
You will notice the alphabetical progression, that’s more important than the cheesy theme for the placeholders. We could call it the “a” upgrade as a placeholder, but the state rent proposal already has the letter space taken.
I was hoping we could wait until the kickoff checkpoint for the next meta so we are only juggling one mutable network upgrade document at a time. It also (currently) lines up with the client implementation checkpoint so stuff can easily move from one list to the next.
I’m thinking that perhaps we should just use numbers for future network upgrades, instead of names. That should paint the bike shed once and for all. This aligns with EIP-1846, which wants to move the reference tests away from names and towards numbered upgrades.
So ‘Asiago’ would formally be ETH-009-00 and informally ‘version 9’, ‘Brie’ would be ETH-010-00 and informally ‘version 10’, and so on.
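The identifier shape implied by the ETH-009-00 example can be sketched in a few lines (the zero-padding widths and the `patch` field are my reading of that one example, not a finalized convention):

```python
# Hypothetical formatter for the numeric upgrade identifier floated above:
# ETH-<version>-<patch>, with the version zero-padded to three digits and
# the patch to two.
def upgrade_id(version: int, patch: int = 0) -> str:
    return f"ETH-{version:03d}-{patch:02d}"

print(upgrade_id(9))   # ETH-009-00
print(upgrade_id(10))  # ETH-010-00
```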
Cross-linking this for context based on today’s AllCoreDevs call:
- Istanbul testnet upgrades on Sept 4th vs. Aug 14th
- Istanbul may be split in two upgrades
- We need to agree on what makes sense for the next fork after that
Proposed this for ACD #68.