EIP-ProgPoW: a Programmatic Proof-of-Work

Is the fnv1a change related to this discussion? https://gitter.im/ethereum-mining/ethminer?at=5b1a1af1144c8c6fea7d40ca

Functions like ROTL32, clz, and popcount should be specified together with the cases that are undefined behavior in C.

Please clarify the Keccak hash function params.

You stated that the bitrate is 448, so the output size is 176 (!) because 800 - 2*176 = 448. But the actual output is truncated to 64 bits. There is no padding (I had to read that from the implementation!). So the name should look like keccak_f800_176_64_nopadding().

Maybe just name this weirdo keccak_progpow?

Edit: in another place 256 bits are taken from the Keccak state (while by the spec the output has only 176 bits). Is this allowed by the Keccak spec?

The implementation of keccak_f800 takes a header hash and interprets it as an array of 8 32-bit words. This will give different results on big-endian architectures. See https://github.com/chfast/ethash/pull/79.

The EIP states:

If the program only changed every DAG epoch (roughly 5 days) certain miners could have time to develop hand-optimized versions of the random sequence, giving them an undue advantage.

I’m curious what kind of optimization would only be possible by hand here. Do you have any examples?

As I mentioned on AllCoreDevs #62, I feel there is a change to the ProgPoW EIP that must be made before deployment. This stems from a concern voiced by Vitalik on AllCoreDevs #60. I forwarded my concerns to Hudson so he could forward them to the team conducting the audit, but since there has been discussion of deploying ProgPoW without a finished audit, this issue and possible remediations should be discussed outside the audit.

In short, at the transition block, 1/3 (or less) of the Ethash hash power could be used to stall the progression of the chain. One of two things could be changed to fix it.

Since ProgPoW produces 50% (or fewer) hashes per device than Ethash, total difficulty would rise at a slower rate, 50% or less. To mount this attack, the byzantine Ethash hashers would focus their efforts on publishing new pre-fork blocks with higher and higher difficulties. Because the byzantine actors produce hashes at twice the rate, the pre-fork block could reach a higher total difficulty with less effort, and honest miners might then re-org to the block just prior to the fork. Emissions are irregular, so this may not hold things off forever, but in essence a 33% device pool could mount a 50% hashrate attack aimed at stalling the chain. This differs from a normal 50% attack in that one generation of blocks has its difficulty measured differently than another, and the prior generation can be manipulated.

I see two alternate mitigations.

  1. A “difficulty multiplier” can be applied to the ProgPoW blocks when calculating total difficulty. Either 2x, to account for the twice-as-hard memory access, or a (much?) larger multiplier to give heavy weight to ProgPoW blocks.
  2. A finality gadget, or on-chain checkpoint. This would be like the beacon chain finality gadget, except it would be driven by a multi-sig contract. However, that raises governance issues as to who signs the contract and what hash is chosen. With the beacon chain it is economic interest driving the selection.

Now I may not have my head fully wrapped around uncles and the modified GHOST implementation that was once in the spec, which is why I want people more versed in the mining process to weigh in.

Nice idea. So essentially, a group of miners would keep ‘polishing’ an old block, to get people to keep reorging to that one?
I don’t think that’s sustainable.

Let’s assume they have 100% hashpower. After 14s, they find a good enough block. After 14s more, they find another good enough block, that may or may not be better. They can’t add these two difficulties together, which is why a chain that actually progresses (and adds block difficulties from N blocks) will always beat one that stands still and just polishes a block.

However, a separate concern is:

  1. At fork-time, let’s assume 25% drops off (asics).
  2. Also, the difficulty at fork-time will be too high, and needs to adjust. Before adjustment, mining ether will yield less ROI, and the hashpower may be better spent somewhere else. So let’s assume that another 25% drops off due to bad ROI.

This leaves us with 50% of the hashpower, trying to mine on a chain where the “tuning” between hashpower and difficulty is off by 4x (2x for the drop-off, 2x because ProgPoW is harder to mine). This leads to an even longer period before the imbalance settles.

Remember – miners do not compete directly with each other, so other miners dropping off does not help the ones remaining (in the short term); they compete with the difficulty threshold.

I haven’t checked how long it will take before the difficulty can re-adjust.

One way to solve this would be to divide the difficulty by two at the fork block, and only for that block. This would instead lead to a short period where it’s more lucrative to mine ether, which would have the opposite effect (a good one), and would not drive miners away from the chain at the fork block.

Also, the “period of imbalance” will adjust faster if the imbalance is in favour of the miners than if it is in the other direction (faster as in wall-time, not number of blocks).

It may not be sustainable, but the optics of a rough fork would be a net negative.

What they are looking for is not another block good enough by the old difficulty, but a block “twice as good”, so it will take 28s on average to find. And then a four-times-as-good block, and then an eight-times-as-good block. You don’t add the difficulties of the new old-height blocks together; you keep looking for ones that exceed the TD of the forked ProgPoW chain. If it’s not past the re-org horizon, clients should take that one as canonical.

The risks of the “hairy fork” come in with the miners: if they build their blocks off of these new higher-TD blocks, then we get multiple competing heads.

@OhGodAGirl has discussed this issue several times. It’s about 3 hours. And the impact on the ice age is that it moves 2 weeks closer per halving.

A solution I presented at AllCoreDevs Berlin is the one-time difficulty adjustment. This won’t address the slower growth of total difficulty under ProgPoW, but the slower TD growth isn’t a problem if we can avoid the “hairy fork.”

Here’s the AllCoreDevs Berlin presentation I did on falling hashrate and my slides

And it’s ~3 hours for a 50% drop-off, ~6 hours for a 75% drop-off.

log(0.50)/log(1-2/2048) * 15 seconds ≈ 3 hours
log(0.33)/log(1-2/2048) * 15 seconds ≈ 4.7 hours
log(0.25)/log(1-2/2048) * 15 seconds ≈ 6 hours

@holiman - after driving home I think you’re right, it’s not as bad as I imagined. While the attacking group has a higher hashrate polishing the pre-fork block, they only get to keep the best block found. Over time they will have better and better blocks, but the new chain gets to keep all the work it has found; it doesn’t have to discard old work the way the attacking block does.

So 33% of old hash couldn’t stall the fork, but it could significantly slow it down. The slowdown would be worse if the new chain had to “burn down” its target difficulty, especially if we let the difficulty burn down naturally. So a one-time difficulty cut of 50% at the fork block would allow the new chain to grow at the same blocks-per-minute rate as before, if the same hardware were pointed at the chain.

A one time difficulty adjustment also would reduce the impact to normal users, regardless of the potential stalling efforts of old hash power.

Yes, exactly! (that’s what I meant by “can’t add these two difficulties together”). And I didn’t mean it to sound like that “separate concern” was my idea, I know it’s been floated around before, but wasn’t sure whom to attribute.

Actually, @shemnon, this is a misconception. Difficulty does not work like that. What you’re talking about is what could be called “the true difficulty”, which comes from the combination of a block’s hash and its nonce.

What Ethereum uses as difficulty is a function of the parent’s difficulty and the time. Under the hood, we then check that the true difficulty is above the threshold.

So basically the total difficulty is the sum of all “threshold difficulties” (not the sum of all “true difficulties”). So it’s not possible to “polish a block” indefinitely. The difficulty for block N is already given by block N-1, but if you re-mine block N-1 with an earlier timestamp, you can get a higher difficulty for block N (but only higher by a certain amount).

You only get credit for the threshold? No wonder my toy blockchains rarely had tied blocks.

In theory you could start far enough back to juice the difficulty increase. But that almost instantly pushes it past the 50% ongoing-hash threshold, even if the new blocks grow slower. And since you can have at most one block per second, it does put an ultimate cap on it, no matter how far back you go.

So yea, not an issue. This analysis would have been better two months ago.

Why is a controversial change being shoved through without awareness of the community? Why does @OhGodAGirl have so much sway over the governance process?

“Without awareness of the community?” That is an assertion the evidence does not support.

There were two twitter polls, a carbon vote, a tennagraph vote (both coin and gas weighted), and a miner vote. All of which came back in varying degrees of yes to strongly yes. Is there some community that should be polled for sentiment that those would not have reached?

“and a miner vote” Wait, the people who stand to benefit from rigging out their competition are in favor of it? Ya don’t say!

“Is there some community that should be polled for sentiment that those would not have reached?” Yes, the vast majority of investors who don’t bother to take part in your tiny, non-representative, socially engineered votes specifically designed to create a false sense of support for something no one wants. It’s a controversial change being pushed through by a hijacked governance process. If this goes through it means that Ethereum has been officially hijacked.

She, like many other authors of EIP proposals, created a proposal with a reference implementation and went through the process of gaining support by talking about the issue on All Core Dev calls. The client developers all decided there was sufficient technical benefit to implement the proposal, so they integrated the change. There was significant benchmarking and testing done over the past year since the proposal was created, and an audit is currently in process, the end result of which will hopefully show the algorithm is technically sound and meets its intended goals (to the best abilities of the auditors).

Nowhere in this was there a deviation from the governance process, which means no “hijacking” occurred. Please note that while it is often helpful to the developers to gauge the sentiment of the community in this process to inform their decision to implement, it is ultimately their own decision what work should be integrated. ProgPoW seems to have a sufficient amount of community support to have made it to this stage. At the end of the day, the full nodes govern the rules of the network; the developers only give them the tools to do so.

You may vehemently disagree with that conclusion, and you are free to voice that disagreement. Ultimately, a decision will be made if/when a fork should be proposed, which I believe will only contain ProgPoW. I would actually be most in favor of a soft fork approach (with a threshold of over 90%), as that allows the community to expressly show its final opinion through the number of full node clients who enable this change, which is the most “democratic” option we have available to us in a decentralized system with no identity layer. If it worked for Bitcoin, it should work for us.

@fubuloubu
wow your post was so amazing, it motivated me to reply:

She, like many other authors of EIP proposals

The author of EIP 1057 is a close business partner of Calvin Ayre and Craig Wright. No need to point to her bitcointalk trust page and many other pages; it all fits.
(there are plenty of links in this other thread, I won’t post them again. DYOR)
https://ethereum-magicians.org/t/progpow-audit-delay-issue/3309

talking about the issue on All Core Dev calls

Together with two anonymous people (Mr. Def and Mr. Else). The naivety of the core devs in not even checking that, let alone questioning any of the narratives brought forward, will remain a lesson in how not to do it.

The client developers all decided there was sufficient technical benefits

That says a lot about their understanding of both ASICs and mining economics. The hardware audit will show (actually, several independent audits have shown already) that ProgPoW’s promised “asic resistance”, lately framed as “closing the efficiency gap”, does not exist.

There was significant benchmarking and testing done over the past year

All of it nice Nvidia marketing material. They didn’t even bother to change their bar charts to make them look more “community like”. It was enough to make the core devs believe, so well done!

and an audit is currently in process, the end result of which will hopefully show the algorithm is technically sound and meets it’s intended goals

Of course not. We can expect the software audit to mostly look at the algorithm as a cryptographic algorithm, as we have seen with the four RandomX audits.
The PoW-part of the algorithm is a hardware assessment. The hardware audit will show that the promised benefits (“1.2x instead of 2x”) do not exist.
The one effect ProgPoW has is from the PoW change itself, which is very disruptive and benefits the large farm of the EIP 1057 author, whose contracts with Nvidia we don’t know. A PoW change is like an ICO; it’s fitting that the people behind ProgPoW have a deep ICO history.

which means no “hijacking” occured.

It was hijacked from the beginning, and persists until today. The motivation behind EIP-1057 is entirely different from what is stated in the EIP text. Welcome to the real world.

ProgPoW seems to have sufficient amount of community support

What was actually measured in these votes - hashrate, distinct human beings, mining pools, capital?
Since the votes came out largely in favor of ProgPoW, what does this mean about centralization? Wouldn’t a large majority, in some cases 100%, say something about the state of centralization?

On the day this proposal was made (2018-05-03), Ethash hashrate was 265.97 TH at a profitability of 6.22 US cents/MH/day. Happy old days!
Today, Ethash hashrate is 178.83 TH at a profitability of 1.51 US cents/MH/day. (numbers from bitinfocharts.com)
There probably were never more than a few TH of ASICs on Ethash, and Ethash ASICs haven’t been on sale for a year. I did the math to walk through mining economics in the other thread.

So if ASICs played a small role in Ethash 15 months ago when EIP 1057 was launched, and are not economical to sell since then, why is there continued pressure to switch to ProgPoW urgently?
You cannot think about this hard enough.

Will ProgPoW accelerate centralization?

I would actually be most in favor of a soft fork approach (with a threshold of over 90%), as that allows the community to expressly show its final opinion through the number of full node clients who enable this change, which is the most “democratic” option we have available to us in a decentralized system with no identity layer

It would be far healthier for the Ethereum ecosystem to uncover and investigate the background story of ProgPoW:

  • Why is the “anti-asic” effect less than promised, if supposedly so many “experts” from Nvidia and AMD were involved? Is it an error in judgment, or is there some other story going on?
  • Who are Mr. Def and Mr. Else? If they are Nvidia employees and Nvidia was trying to exclude Bitmain, Samsung and others, what else does Nvidia plan?
  • Are Mr. Def and Mr. Else engineers, or marketing people?
  • Is it acceptable that fully anonymous people participate in major Ethereum decision making processes?
  • If the authors of ProgPoW are anonymous, what does this mean in terms of copyright or patent claims?
  • Why is the EIP-1057 author working with Calvin Ayre and Craig Wright, and what does this mean for ProgPoW?
  • How many TH Ethash does Squire/Core Scientific control today?
  • What are the contracts between Nvidia and the company of the EIP 1057 author, as well as Squire (Calvin Ayre/Craig Wright company)?
  • Is it possible that Nvidia sells chips at a discount to the EIP 1057 author in return for excluding competitors?
  • Does ProgPoW help with decentralization, or help with centralization?

I think the attack from Nvidia and partners (Core Scientific, Squire) is sophisticated, the largest corporate attack on a coin ever.
The Ethereum Foundation deserves support in that they managed to at least put order to the process by bringing in independent auditors for both software and hardware.
Too bad no one can audit the contracts of the company of the EIP 1057 author…

The EIP-1057 author is already trying to discredit the hardware auditor (“I do have some concerns that someone who has not built crossbars for GPUs will be doing a hardware audit on the ability to build a crossbar in an ASIC - but given the lack of choice due to CoI this seems fitting.”), but hey, she is a hard worker.

Looking forward to the audits! Hope Least Authority and Bob Rao crush some of that dark corporate stuff.

I hope I am wrong, but based on Bob Rao’s credentials, I’m not expecting a lot from the audit. He is a silicon manufacturing and process engineer. Mr. Bob Rao doesn’t seem to have any GPU, SIMD, or crypto ASIC design experience. Without some deep expertise in these areas, the results of the audit would be suspect.

To support my observations, I point to the following information from some web searches:

The only patent I could find in Mr. Rao’s name is DIE WITH INTEGRATED MICROPHONE DEVICE USING THROUGH-SILICON VIAS from 2014, which is unrelated to GPUs, SIMD processing cores, or crypto ASIC design.

Here’s his bio from an article in EE Times from 2002:
He has progressed to become an Intel Fellow and the director of analytical and microsystems technologies at Intel’s technology and manufacturing group. He is responsible for directing the development of advanced analytical tools and methods for microprocessor performance characterization, silicon debug and yield enhancement. Rao also directs Intel’s microsystems and MEMS research and development activities.

I don’t see how the results would be suspect, but they may be underwhelming. Based on his experience, I am sure Bob is smart enough to work through things from a first-principles perspective and see if anything smells fishy. Sure, he might miss some nuanced thing that only those with deep expertise would know, but all things considered the audit should be of sufficient quality to tell us if anything is obviously wrong about the implementation that conflicts with the design goals, which is the point.

I am sure if the audit comes out clean, we will hear all sorts of grief about this, but I would take the results seriously instead of disparaging his credentials. I think it was very difficult finding an experienced person without any sort of CoI in this matter, so this may indeed be the best we get.