EIP-1559: Fee market change for ETH 1.0 chain

This is already defined: the parent_block of the fork block is assumed to have a base_fee of 1,000,000,000, and the first 1559 block will then have a base fee calculated off of this.

It is worth noting that I believe at the moment every client implements this incorrectly. :confounded:

Hi everyone. What if we used the basefee as a means to incentivize miners toward future hard-fork activity?

Fee volatility comes from the fee bidding war, and a progressively adjusting basefee seems to do the job of damping it well.

The focus is on where the basefee goes. Burning it seemed like a good way to keep the incentives of multiple parties since ‘nobody’ gets it.

But what if this basefee could accrue in a ‘bank’, with the amount used to reward miners (and validators) for future protocol upgrades? The accrued basefee could be distributed evenly to the miners once the hard fork is done as the general Ethereum community intended.

This could serve as an incentive to align and organize miners whenever there is a hard fork.

If there is any MEV opportunity available, it’s going to happen anyway. So 1559 doesn’t fix this; it only slows it down.

The way I see it, 1559 isn’t meant to punish anyone or hurt anyone’s opportunity to, say, profit. If the basefee, which is meant not to be a source of deviation, does its job of keeping fee volatility within a predictable range, it could have even more utility if it had other uses.

The protocol can mint coins freely, so if we need to incentivize some particular behavior in the future we can just mint to achieve that. It is generally better to keep each mechanism design separate and not co-dependent when possible.

Thanks for bringing this up, I think it should get a lot more attention in light of the many efforts in the community to lower the barriers to entry for users.

Hi everybody,
Sorry, I didn’t read the 373 pages of comments before posting, but I thought this was important to discuss (apologies if someone already mentioned it before).

1. I know you are familiar with Tim Roughgarden’s report, but here’s another paper with simulation studies (one of the four authors is from the Ethereum organization).

From a fast reading of the paper, they emphasize:
A) The impact of the pool eviction policy. (I believe this contradicts their first assumption, that a non-chosen TX is treated as just a new arrival for the next block. What about analyzing which users will wait, which will increase their fee cap, and which will donate secretly to miners? That last one is a black market; I don’t know whether you have discussed the probability of its existence. Even on bitcointalk, which is pure PoW, I found someone asking about the donation details.)

B) The uniform distribution of fee caps in a given interval. You know better than I do whether this is plausible, or whether users will tend to imitate each other to get a better chance by “going with the flow and doing what everybody does.”


“The previous convergence results critically depend on the provided thresholds. If the step-size exceeds these bounds, then the base fee adjustment rule may lead to chaotic updates. As mentioned above, these bounds depend on the number of arrivals, λ, and in the range of valuations, w. If λ increases or w decreases, i.e., if the system becomes more congested or if the valuations become more concentrated around a specific value, then the thresholds go down and a given step-size may not be enough to guarantee convergence. In fact, as we will show, for any step-size, there exists a (reasonably large) λ and a (reasonably small) w so that the dynamics become chaotic.”
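
To get a feel for that claim, here is a toy simulation I threw together (my own sketch, not the paper’s code; every parameter below is made up): users arrive with uniformly distributed valuations, blocks fill with everyone willing to pay the current base fee up to a 2x cap, and the base fee follows the EIP-1559-style multiplicative update with step-size eta.

    import random

    def simulate(eta=0.125, lam=1000, v0=10.0, w=1.0, target=100, blocks=60, seed=1):
        random.seed(seed)
        base_fee = v0
        history = []
        for _ in range(blocks):
            # lam users arrive each block; valuations uniform on [v0, v0 + w]
            valuations = [v0 + w * random.random() for _ in range(lam)]
            demand = sum(v >= base_fee for v in valuations)
            gas_used = min(demand, 2 * target)  # block size capped at 2x target
            base_fee *= 1 + eta * (gas_used - target) / target
            history.append((gas_used, base_fee))
        return history

    # With lam large relative to the target and w small (valuations concentrated),
    # the base fee overshoots every block, and blocks flip between near-empty and
    # near-full instead of converging; shrinking eta restores convergence here.
    for gas_used, fee in simulate()[-6:]:
        print(f"gas_used={gas_used:3d}  base_fee={fee:.3f}")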

For more remarks, and also Tim Roughgarden’s replies to what has been said about deflation, check

Summary of EIP-1559, base fee dynamics, what to look for when EIP-1559 launches, and interesting question & answer session by @timbeiko @barnabe @MicahZoltu

I am reading EIP-1559 and have come here to argue about this point, which the EIP dismisses as absurd:

It’s absurd to suggest that the cost incurred by the network from accepting one more transaction into a block actually is 10x more when the cost per gas is 10 nanoeth compared to when the cost per gas is 1 nanoeth; in both cases, it’s a difference between 8 million gas and 8.02 million gas.

The price does not represent the cost incurred by the network, so this argument is irrelevant to the EIP discussion.

no one significantly gains from the fact that there is no “slack” mechanism that allows one block to be bigger and the next block to be smaller to meet block-by-block differences in demand

Miners gain.

The above two points do not contribute to the argument and they are invalid, so I recommend they be removed.


The EIP text mixes prescriptive and descriptive language. Please incorporate RFC 2119 and use uppercase keywords everywhere, as required therein. See EIP-721 as an example of using RFC 2119.

Specifically, a sentence like “Miners should still prefer higher tip transactions over those with a lower tip, purely from a selfish mining perspective” is ambiguous in a specification document. Please consider uppercasing SHOULD and defining it as above.


I recommend removing the words “Ethereum’s long term monetary policy” from the EIP. Maybe use “Ethereum’s long term token quantity policy” or something else.

The Yellow Paper does not recognize Ether as “money” nor do statements made hitherto by the Ethereum Foundation. (There are mentions of “money” that is “on” Ethereum [Mainnet], and that is a different thing.)

Because “money” has a specific meaning, because that specific meaning has relevance for regulations in many jurisdictions, and because of estoppel, please remove that incorrect/unnecessary part from the EIP.

I believe that is the point of the quoted text: to make it clear that at least some portion of the gas price doesn’t have to do with operational costs, but instead with competition for limited space. This could perhaps be reworded to make that point clearer, though.

I think this point is worth keeping, but I agree it could use a rewording for clarity. I believe the point the quoted text is trying to make is that the Ethereum ecosystem and its users do not benefit from a lack of block elasticity. Miners (service providers) do benefit from inefficiency here, but our goal is to build the best system for our users, not our service providers.

I don’t think that RFC-2119 SHOULD is appropriate here. The quoted section is from the Security Considerations and it is not citing a recommendation, but instead is using the word should colloquially as a way of saying “we expect miners are rational, and the rational thing to do is prefer a higher premium”. (side note: we should fix the word “tip” in there, as it has been renamed to “premium”).

PR to fix tip wording: Renames `tip` to `gas premium` in non-normative content. by MicahZoltu · Pull Request #3657 · ethereum/EIPs · GitHub

PR to fix money wording: Remove request for regulators to shut down Ethereum by fulldecent · Pull Request #3658 · ethereum/EIPs · GitHub

@editor-Ajian is correct to point out that the basefee is essentially a tax on transactions. But the tax can improve welfare if it is a Pigouvian tax designed to correct for negative externalities of larger blocks. Also, one may argue that miners’ supply of block space is perfectly inelastic as long as they are compensated for uncle risk, and that the risk is linear in gas used in the block. Hence, even if the level of the tax is higher than what is necessary to correct the externalities, it can transfer MEV from miners to the protocol (i.e. ETH holders) without causing much distortion or hurting the users of the protocol.

The fundamental issue with EIP-1559 is not that it introduces a tax, it is who the tax is levied on. Under the current format, the users who use the protocol after it has been congested pay higher basefees. As pointed out by @STAGHA, this is akin to making night drivers pay a higher toll because the road was congested during the day. This mistargeted taxation is unfair, causes congestion, and intensifies gas auctions. It fails to address the negative externality that a Pigouvian tax is supposed to internalize. In fact, I am a bit surprised that this was pointed out more than two years ago but has not led to a wider discussion.

When a block is congested, users of that block should be the ones paying the fee. Let me reproduce below my post on ethresear.ch that describes the problems of EIP-1559 in more detail and suggests a preliminary model for an optimal fee design.

The Problem

EIP-1559 will introduce a protocol fee on Ethereum transactions and allow the block size to be dynamically adjusted in response to congestion. Charging a protocol fee when the chain is congested is an efficient way to shift MEV from miners to ETH holders without hurting the users. Also, a flexible block size makes the allocation of block space more efficient. However, under the current fee structure, the wrong people can end up paying for congestion.

Suppose there are two blocks, and we target an average block size of one transaction per block. There are two users, Alice and Bob. Normally, Alice sends one transaction in Block 1, and Bob sends one transaction in Block 2.

Now, suppose Alice receives a shock and wants to send two transactions in Block 1. EIP-1559 allows her to do so; as long as she pays enough to compensate for the increased uncle risk, the miner will include both of her transactions in Block 1. This is great for Alice. However, because Block 1 was larger than the target size, the base fee is increased in Block 2. This means that Bob either has to pay the higher base fee or wait a block to send his transaction. Bob ends up paying for the congestion that Alice caused.
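
To put rough numbers on this, here is a minimal sketch using mainnet-style parameters rather than the one-transaction toy example (the constants are illustrative):

    GAS_TARGET = 15_000_000
    DENOMINATOR = 8  # BASE_FEE_MAX_CHANGE_DENOMINATOR

    def next_base_fee(base_fee: int, parent_gas_used: int) -> int:
        # Simplified one-line form of the EIP-1559 update (ignores the max(..., 1) floor)
        return base_fee + base_fee * (parent_gas_used - GAS_TARGET) // GAS_TARGET // DENOMINATOR

    block_1_base_fee = 100_000_000_000  # 100 gwei
    # Alice fills Block 1 to the 2x cap; Bob transacts in Block 2 and pays 12.5% more.
    print(next_base_fee(block_1_base_fee, 30_000_000))  # 112500000000, i.e. 112.5 gwei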

In general, when users congest a block, it is users of the subsequent blocks that pay for the congestion. This is undesirable for a couple of reasons.

1. It is unfair.
It is not fair that Bob should pay to allow Alice to send an extra transaction.

2. It increases congestion.
Because Alice does not care whether Bob pays more, she will congest her block whenever she has the slightest need to do so. In economics jargon, congesting a block exerts a negative externality on the users of future blocks. Because users do not pay for congesting their block, they will congest it too much relative to what is socially optimal.

3. It intensifies gas auctions.
Let us change our example and assume that Alice and Bob are competing to include their transactions in Block 1, which is expected to be congested. If Bob loses, he will not only have his transaction delayed, he will also pay a higher base fee in Block 2. So outbidding Alice becomes even more important. The same will hold for Alice, and as a result, the gas auction will become more intense. Users will pay larger tips to miners to avoid paying higher base fees to the protocol.

A Solution

I propose that when a block is congested, the users of that same block pay for the congestion. We can implement this by charging the miner a fee based on how much gas is used in his block. For example, a miner who uses $x$ gas in his block might be required to pay $f(x)=kx^2$ gwei where $k>0$ is some constant. The marginal cost of including one additional gas is $2kx$ gwei, so the miner will include all transactions that pay him at least $2kx+\epsilon$ gwei per gas until he reaches the block limit, where $\epsilon$ is compensation for uncle risk.

We can think of $2kx$ as a Pigouvian tax on block space. When the demand for block space is high, $x$ will be large (block size will be large), $kx^2$ will be large (the protocol fee will be large), and $2kx$ will be high (users will have to pay more to be included). Hence block space will adjust to demand, and we can calibrate the fee function $f(x)$ to target the average block size that we want.
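
As a concrete sketch of the mechanism (the constant $k$ and all names below are illustrative, not a calibrated proposal):

    K = 1e-6  # gwei per gas^2 -- a made-up calibration constant

    def miner_fee(gas_used: int) -> float:
        """Total fee (gwei) the miner pays the protocol for a block using gas_used gas."""
        return K * gas_used ** 2

    def marginal_price(gas_used: int) -> float:
        """Marginal cost 2kx: the minimum gwei-per-gas a tx must pay to be worth including."""
        return 2 * K * gas_used

    # A rational miner keeps adding transactions while they pay more than
    # 2kx + epsilon per gas, so block size expands and contracts with demand.
    print(marginal_price(15_000_000))  # 30.0 gwei per gas at a 15M-gas block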

Technical Asides
It should not matter in theory whether we charge the miner or the users. But charging the miner may be easier to implement.

To choose $f(x)$, we could use a supply and demand model of block space where the social cost of centralization risk is an increasing function of the average block size. We would solve for $f(x)$ that makes the agents internalize the social cost, while ensuring that users do not bear too much of the tax burden.


Agreed. Also, see this comment from @mdalembert, which explains the outcome of combining your concern with gas tokens:


Interesting. Perhaps this is one of the reasons gas tokens are being removed in the London upgrade.

Since this is in Last Call and finalization is imminent, it would be useful to ensure the spec does not contain any TODOs. These TODOs seem to have nothing to do with 1559 itself, so I suggest removing them.

The current 1559 spec still has this in validate_block():

    # TODO: verify account balances match block's account balances (via state root comparison)
    # TODO: validate the rest of the block

And the test cases section still only consists of TODO.

The TODOs in the Python code are meant to indicate where a real client implementation would perform those actions. They were never intended to be filled in, as they are out of scope of this specification, but it is important to give readers context on how the changes described here integrate with a larger codebase. I’m open to other ideas on how to represent that.

I agree the test cases section should be removed.

Summary
The large oscillations in block fullness (see Coindesk’s chart at bottom) were said in a Coindesk article to be caused by the new EIP-1559 base fee calculation. The mechanism is very much like a difficulty adjustment, with the difficulty target replaced by the base fee and the solvetime replaced by block fullness (both have a target they are trying to achieve). The EIP-1559 algorithm is the best possible form for difficulty algorithms, except for the very low setting of BASE_FEE_MAX_CHANGE_DENOMINATOR = 8. This would be pretty bad in a difficulty algorithm because it results in similar oscillations (it’s like a simple moving average using only the past 16 blocks; see the Karbo oscillations in the next comment). It probably needs to be increased a lot: 30 would be a fast but tolerable difficulty algorithm, and 100 to 500 would be a lot better. The 2000 that ETH uses for its similarly functioning DA is probably way too large. The base fee calculation has a strong parallel to difficulty algorithms, but it’s hard to say how strong. Those paying fees are like miners on small coins who can come or go, amplifying oscillations. The distribution here isn’t exponential, but the algorithm being used is robust across different distributions.

Details

The EIP-1559 code (see below) is called “WTEMA” by difficulty-algorithm enthusiasts. It’s the same as ETH’s difficulty algorithm, but less staccato, mathematically inverted (for the better), and using N=8 blocks instead of N=2000.

Here’s the simple math of EIP-1559, ignoring rounding error:

GF = base gas fee
F = block fullness in parent block as fraction of parent's target fullness
GF = parent_GF * (1 - 1/8 + F/8)

GF goes up as F increases above 1, so in difficulty algorithms, the parallel is that we replace GF with target (aka 2^256/difficulty) and F is parent_solvetime / target_solvetime.

Except for N=8 being really low, this is close to the best possible algorithm. If N=8 is substantially larger than |F-1|, it’s equal to all of the following, due to e^x ≈ 1+x for small x, where x = (F-1)/8:

  • 1st order IIR filter. See [1] for this and next 2
  • Electronic RC filter (8-blocks) on how long it will take the error signal in the previous output sample to be applied on the same scale as the input signal.
  • Kalman filter if the target value is a scalar random walk.
  • Ideal EMA (exponential moving average) when applied to the exponential distribution of solvetimes in difficulty algorithms[3]. F=previous solvetime per target solvetime. GF = target = 2^256 / difficulty.
  • BCH’s new difficulty algorithm ASERT. (Lundeberg & Toomin, same result as above Imperial College paper)

You can use the e^x ≈ 1+x approximation to confirm it’s equal to the following.

Ideal DA EMA / ASERT in "relative" form:
F = previous solvetime per target solvetime. 
GF = target = 2^256 / difficulty.
GF = parent_GF * e^((F-1)/8)  
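
A quick numerical check of that equivalence (my own sketch, using the symbols above):

    import math

    def wtema(parent_gf, fullness, n=8):
        """EIP-1559 rule: GF = parent_GF * (1 - 1/N + F/N)."""
        return parent_gf * (1 - 1 / n + fullness / n)

    def asert(parent_gf, fullness, n=8):
        """Ideal EMA / ASERT rule: GF = parent_GF * e^((F - 1) / N)."""
        return parent_gf * math.exp((fullness - 1) / n)

    # With N = 8 the two agree to first order (e^x ~ 1 + x for x = (F-1)/8):
    for f in (0.0, 0.5, 1.0, 1.5, 2.0):
        print(f, round(wtema(100.0, f), 3), round(asert(100.0, f), 3))
    # The gap is largest at the extremes F = 0 and F = 2, where |x| = 0.125.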

In difficulty algorithms, N=8 is insanely low and causes these kinds of oscillations.

F does not look like it has the same units as blocks, which is what I interpreted the 8 to mean. To correct this serious problem (especially for the RC and difficulty-algorithm parallels, which are based on time), F can be interpreted as the “number of blocks” needed to get “one target-block’s worth of fees”.

EMA / ASERT are ideal for difficulty algorithms, but solvetimes there follow an exponential distribution. This seems to matter only for keeping the algorithm accurate at low N. The present situation is probably not an exponential distribution, so it may not work as well at low N; but for higher N, this algorithm is probably the best simple option that can be found.

EIP-1559 code:

    if parent_gas_used == parent_gas_target:
        expected_base_fee_per_gas = parent_base_fee_per_gas
    elif parent_gas_used > parent_gas_target:
        gas_used_delta = parent_gas_used - parent_gas_target
        base_fee_per_gas_delta = max(parent_base_fee_per_gas * gas_used_delta // parent_gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR, 1)
        expected_base_fee_per_gas = parent_base_fee_per_gas + base_fee_per_gas_delta
    else:
        gas_used_delta = parent_gas_target - parent_gas_used
        base_fee_per_gas_delta = parent_base_fee_per_gas * gas_used_delta // parent_gas_target // BASE_FEE_MAX_CHANGE_DENOMINATOR
        expected_base_fee_per_gas = parent_base_fee_per_gas - base_fee_per_gas_delta

From the Coindesk article: [chart of block fullness oscillating between near-empty and near-full blocks omitted]


Karbo coin’s difficulty had oscillations very much like the current block-fullness situation, as a result of using an almost identical averaging period (aka “mean lifetime”): N=16-block averaging in a simple moving average has the same “mean lifetime” as EIP-1559’s EMA algorithm using N=8.
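
A back-of-the-envelope check of that equivalence (my own arithmetic, using mean age of the data as a proxy for mean lifetime):

    # An N-block simple moving average weights ages 0..N-1 equally:
    sma_mean_age = (16 - 1) / 2        # 7.5 blocks for N = 16
    # An EMA putting weight 1/N on the newest block has mean age N - 1:
    ema_mean_age = 8 - 1               # 7 blocks for N = 8
    print(sma_mean_age, ema_mean_age)  # both ~7-8 blocks: the same time scale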

While I’m not opposed to changing the divisor, keep in mind that the primary goal (IMO) is user experience, which means strong guarantees that we don’t have many consecutive full blocks (because then people end up with stuck/pending transactions), and a secondary goal of minimizing the base fee change per block.

For a first attempt, I’m actually very happy with 8 because we see multiple full blocks in a row very rarely so far (though this may change as people switch to 1559 transactions), and 12.5% increase per block isn’t too high. We may be able to raise the denominator a bit without increasing the chances of many consecutive full blocks too significantly, but we need to be careful about breaking the primary target of “low probability of consecutive full blocks”.

Also, I don’t think we should do much judging until end users are widely using 1559 transactions. With most users still on legacy transactions, we should expect a much higher rate of oscillation than if all users were using 1559 transactions.

Oscillations are a lot less efficient: for a given total amount of gas processed, the peak capacity required is higher when throughput swings up and down. Average base fee per block and average miner or staker bandwidth (computation and communication) requirements are (much?) higher due to the oscillations. Anyone needing to process and relay blocks ASAP (&lt;1 second) has to have 2x the bandwidth for one or two blocks and then let the higher bandwidth sit idle for two or three blocks. Higher bandwidth requirements should cause higher gas prices, not just delayed txs.

The parallel between this and difficulty algorithms is strong. This system seems more susceptible to oscillations because users can not only delay txs (like miners jumping from one altcoin to the next), but must also come back at some point to send their txs (they don’t have an alternate coin to “mine”).

I spent a long time working on the difficulty oscillation problem in altcoins (see my GitHub issues). The awful Karbo difficulty chart above with N=8 was my first algorithm. The solution was to make the averaging time very long, like N&gt;100, and to remove any caps. Maybe there is a reason you can’t have blocks that are 10x bigger (or more costly); in difficulty, the parallel would be a huge miner jumping on for “cheap” blocks. But they are not cheap, because the cost is a fair, long-term average. Removing the cap isn’t as important as increasing the “averaging” window length, but the cap prevents the base fee from rising as fast as it should when there’s higher demand that the algorithm can’t see. The average block size (or computation gas) should be as easy to target and achieve as the average solvetime in difficulty algorithms.

Software might be developed to competitively exploit low fees. It’s a zero-sum game, except that if everyone uses the best software, no one will get lower fees than anyone else and it will reduce the oscillations (giving higher efficiency through efficient competition). The net effect is as if EIP-1559 had used a longer averaging window.

Since value is burned, the heaviest PoW chain is not the one with the greatest sum of difficulties, but the greatest sum of:

difficulty * (1 + base_fees / reward)

If this isn’t adjusted for, security is reduced (at least relative to what it could be).
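
A minimal sketch of what the adjusted rule could look like (the helper below is hypothetical):

    def block_weight(difficulty: int, base_fees: float, reward: float) -> float:
        # Weight each block by difficulty scaled up by the value burned
        # relative to the block reward, per the formula above.
        return difficulty * (1 + base_fees / reward)

    # Chain selection would then compare sums of block_weight across
    # competing chains instead of sums of raw difficulty.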

Those willing to wait for their txs to go through should get a lower base fee in exchange for waiting longer. EIP-1559 seems to try to assist that goal. Here’s an idea that builds on EIP-1559. Assign 1/4 of the desired average gas per block (avgGPB) to 889 out of every 1000 blocks, and allow every 10th, 100th, and 1000th block to have gas-per-block targets of 10/4, 100/4, and 1000/4 times avgGPB. I don’t know whether motivating 250x the normal gas every 1000th block like this is OK. Each of the 4 tranches has an independent base-fee-per-gas calculation using the current formula, which is

BF = prior_BF * (1 - 1/8 + F/8)

but based only on their particular tranche’s parent block’s F = “fullness fraction”:

F = tranche_parent_block_gas / tranche_gas_target.

tranche_gas_target = avgGPB * tranche / 4

tranche = 1, 10, 100, 1000.

A much better but more complicated way is to share total base fees equally across the tranches instead of sharing total gas equally (over their respective intervals). It would greatly lower the really high tranche_gas_target constants above by making them variables. I haven’t worked out how to do it.
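
Here is a minimal sketch of the simpler equal-gas version described above (all names and numbers are illustrative):

    AVG_GPB = 15_000_000  # desired average gas per block (illustrative)
    N = 8                 # same divisor as the current base fee formula

    def tranche_of(height: int) -> int:
        # Every 1000th block gets the largest target, then every 100th, then every 10th.
        for t in (1000, 100, 10):
            if height % t == 0:
                return t
        return 1

    def tranche_gas_target(tranche: int) -> int:
        # Targets of 1/4, 10/4, 100/4, and 1000/4 times avgGPB, per the proposal.
        return AVG_GPB * tranche // 4

    def next_tranche_base_fee(prior_bf: float, tranche_parent_gas: int, tranche: int) -> float:
        # Each tranche runs the EIP-1559 update against its own parent block's fullness.
        f = tranche_parent_gas / tranche_gas_target(tranche)
        return prior_bf * (1 - 1 / N + f / N)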

I said oscillations are inefficient because they require a higher bandwidth to “stop and go”. In other words, the average bandwidth required of the nodes (if they have to get the block out ASAP) is higher for a given amount of gas “per 100 blocks” if the bandwidth (gas per block) is changing up and down. But having the slower-tranche txs available ahead of time allows the bandwidth to be spread out, i.e. the bytes can be propagated and validated ahead of time.