EIP-1559: Fee market change for ETH 1.0 chain

Hey – this is the first rule in software engineering: you do not solve problems that do not exist. The opposite is called overengineering.

You release first, and then iterate.


Well, one needs evidence for the word SOME. Evidence can only be created by a vote. Maybe it is a LOT. Without a vote, the argument is just aggression (“there are many of us and few of you”).

Miners are the only guys who actually did vote. The opposing party is using a very typical escalation-of-violence argument (“if you do not agree, we will punish you”).

In addition to being aggressive (I think Micah’s blog says “we need to aggressively …”), the argument is weak because miners can fork the network at any time and hire engineers to maintain it.

Maintaining geth is not rocket science, let’s be frank. It is a highly skilled software engineering job.

The problem with authoritarian systems based on power is that they are always better than democracy in the short term but always fail in the long term.

It is good the miners had this vote; now we know who thinks what.

Maybe the core team could carry out an anonymous vote to voice their opinions?

Anonymity is obviously a must, because many people are employees, and their opinion may go against their employer’s.

Do we have proof that every single geth contributor supports the EIP? Why not let them express their opinions freely?

What ETH needs is a governance system where all parties are democratically represented and can vote. Otherwise, there is no conflict resolution.


Hey @barnabe, it should certainly be possible to modify some of those simulation notebooks in order to answer the questions I was asking. Unfortunately as-is they only seem to suggest that adopting the canonical EIP-1559 bidding strategy gives the user certain advantages over sticking to legacy transactions under the assumption that we do switch to EIP-1559 (and even that doesn’t seem 100% clear from your results since one can see legacy transactions using the low-cost gas oracle get a slightly lower price than EIP-1559-aware users, despite them being forced to pay for the base fee like anyone else).

In order to answer the question of whether EIP-1559 is going to improve things overall, one would have to repeat the simulation without the imposition of a base fee, and then compare the results based on some more or less objective cost/benefit metric – and there is already quite some potential for disagreement in finding the right metric, because users on a tight budget (like me :wink: ) are likely to place more weight on their average transaction cost, while novice users with a low risk tolerance are more likely to prefer deterministic transaction confirmation times even if that comes at the cost of higher gas fees.

In addition it would be beneficial to simulate alternative scenarios to EIP-1559 (like the banking and moving-average redistribution of base fees other people have been requesting, which I agree would have the benefit of making the inflationary/deflationary behavior of the currency more deterministic). In particular I have a hunch that even a slight increase in the block gas limits would yield a greater improvement in average gas costs than this proposal, and a reduction in confirmation times that could get close to what this proposal achieves for most users thanks to the increased network throughput, simultaneously improving both metrics I was discussing above instead of trading one for the other.

Yep, that’s very true, but it’s also true that real gas oracles use algorithms substantially more sophisticated than what has been used in the simulations I’ve had the chance to read through so far. In order to obtain a fair comparison of this proposal vs. the status quo it seems necessary to model at least some of the mempool dependency of the current oracles, and I suspect that this would already reduce the stickiness you have observed in your results to some extent (though I agree it’s unlikely to fully disappear, but that might be acceptable if, despite the increased stickiness, users get lower transaction fees overall!).

Another interesting experiment to try out (roughly converse to your modelling of legacy users in an EIP-1559 network) would be to model the present protocol in combination with a less sticky gas oracle consisting of an off-chain implementation of the base fee, taking advantage of mempool information to make base fee adjustments instead of relying on the gas utilization of the previous, double-size block. If that truly leads to a less reflexive gas price estimator, users may have a personal incentive to opt in, avoiding the controversial aspect of imposing a base fee on everyone (however, it would certainly increase the risk, compared to the present proposal, that those users would have to resubmit transactions).

Yeah. I think it would be beneficial to have some quantitative understanding of how optimal the selected averaging parameter actually is. Many people seem to think that this proposal will reduce the frequency of large gas price surges thanks to the doubled block size, and that it will improve the user experience by moving away from first-price auctions as the pricing mechanism. What is not so clear in the various explanations of EIP-1559 is that there is a necessary trade-off between the two promised advantages, which can be quantified by the averaging weight of the base fee. It might be challenging to find the right balance between the two because, once again, more budget-oriented users are likely to benefit from a slower average while more risk-averse users are likely to prefer a faster one. Any idea what sort of metric the proposed value of this parameter was intended to optimize? (Assuming that it is the result of some sort of quantitative analysis.)


These are excellent points, thank you!

So when legacy users do get a smaller price (compared to other 1559 users who appeared at the same time), it is at the cost of a longer delay, which I guess is fine if you don’t have time preferences and just want the transaction to get in, sooner or later. There is also an asymmetry in the behaviour of 1559 users, they are modelled as “take it or leave it”: either the current basefee allows their inclusion or they balk. But they could be modelled as sending their bid anyways, with maxfee lower than the current basefee, in which case I might expect the results would look closer to legacy users (they wait in the pool until basefee reduces and miners include them).
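The “patient” variant described above is easy to sketch in a toy model (hypothetical code, not the notebook’s actual model, and assuming the canonical 1/8 update rate with otherwise empty blocks): a 1559 user who posts a max fee below the current basefee simply waits in the pool until basefee decays under it, much like a low-bidding legacy user.

```python
RATE = 1 / 8  # canonical EIP-1559 update rate

def inclusion_delay(basefee, max_fee, rate=RATE):
    """Blocks until a pending tx with max_fee < basefee becomes includable,
    assuming empty blocks so basefee decays by (1 - rate) each block."""
    blocks = 0
    while basefee > max_fee:
        basefee *= 1 - rate
        blocks += 1
    return blocks

# A patient user bidding 70 gwei against a 100 gwei basefee waits ~3 blocks
# in this idealised decay; a balking user never enters the pool at all.
print(inclusion_delay(basefee=100, max_fee=70))  # → 3
```

In a real simulation the decay would of course be interrupted whenever other demand refills blocks, so this is a lower bound on the wait.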

Yes this is the change that was on my mind. Right now the legacy users live in a 1559 environment, so it’s not a true comparison between the current paradigm and a 1559 future (but that wasn’t necessarily the point of the notebook, I was more interested in this effect of blending legacy and 1559 users in this new 1559 paradigm).

The metrics are hard, partly because there are a lot of factors to control for between the 1559 environment and the legacy environment. Answering “would I be better off in that scenario vs the other one” is probably possible in aggregate (like how I check the difference between expected and realised types of users included in blocks), but not individually (like “is user X better off bidding that way in legacy or the same way in 1559”). There are likely still interesting things to dig through.

Right, but this seems orthogonal to 1559. We assume block gas limits are constraints (weak ones, since miners can tune them up or down) and do our best to have a predictable fee market in these constraints. Maybe you are asking what is the net effect of having 1559 (with or without the banking/redistribution) and moving gas limits. One effect might have to do with the dynamic behaviour of basefee and its update rule. Taking the “control theory” view, having flexibility over the block gas limit gives an extra parameter to tune should basefee be oscillating for instance.

Agreed! In this notebook I was more interested in extracting the “stylised fact” than in faithful reproduction. I didn’t want (nor do I think it is possible) to say “1559 will decrease stickiness by 37%!”, but finding that trade-off between average gas prices and stickiness (which you could get by taking reference from the mempool for the oracles) sounds very possible (or at least isolating the factors at play).

Intuitively I want to believe that it’s not possible to will a basefee into existence without some sort of binding constraint. The fee burn ensures it is not open to manipulation for instance. I see how you could construct basefee from mempool data (basically you’d get a noisy estimation of the demand curve at T so you could infer the “market price” from it) but couldn’t it be easily gamed then? I don’t have an attack in mind but it feels like you are getting too much for free! Interesting…
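For concreteness, here is a minimal sketch of the mempool-based estimator being speculated about (all names hypothetical): treat the pending bids as a noisy demand curve and read off the price that just clears the target block size. As noted, nothing here prevents an attacker from padding the pool with bids they never intend to get confirmed, which is exactly the manipulation worry.

```python
def clearing_price(mempool, target_gas):
    """mempool: list of (gas_price, gas_limit) pending txs.
    Returns the marginal price at which cumulative demand fills target_gas."""
    filled = 0
    # Walk the demand curve from the highest bid downwards.
    for price, gas in sorted(mempool, key=lambda tx: tx[0], reverse=True):
        filled += gas
        if filled >= target_gas:
            return price  # the bid that just clears the target
    return 0  # demand below target: any positive price clears

pool = [(100, 3_000_000), (80, 4_000_000), (60, 5_000_000), (40, 6_000_000)]
print(clearing_price(pool, 10_000_000))  # → 60
```

An on-chain basefee pays only for gas actually consumed in blocks; this off-chain variant trusts unconfirmed bids, which is where the “getting too much for free” feeling comes from.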

We’d have to dig through the historical records, but my intuition is that the 1/8 value of the update-rate parameter was chosen with respect to how long it takes for basefee to 2x or 3x (I know it’s sort of tautological…). But no one believes it is the “correct” parameter, or even the correct update rule. Roughgarden formalised the space of possible update rules and there are many variations to consider. One could even have some sort of control on the update rate itself (~gain scheduling). Empirical data would be useful to justify anything, so the plan seems rather to launch as is and decide on updates down the line. Upstream we’ll be doing some theoretical work on how to think about these tradeoffs, so that it will hopefully become more of a “fit the data” problem.
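The 2x/3x intuition is easy to check back-of-the-envelope under the canonical update rule (target = half the limit, rate = 1/8), assuming a sustained run of maximally full blocks; block times are an assumed ~13.5 s.

```python
import math

UPDATE_RATE = 1 / 8  # each maximally full (2x target) block: basefee *= (1 + rate)

def blocks_to_multiply(factor, rate=UPDATE_RATE):
    """Consecutive maximally full blocks needed for basefee to grow by `factor`."""
    return math.log(factor) / math.log(1 + rate)

print(blocks_to_multiply(2))  # ≈ 5.9 blocks (~80 s at ~13.5 s/block) to double
print(blocks_to_multiply(3))  # ≈ 9.3 blocks to triple
```

So the 1/8 choice roughly means basefee cannot double faster than about a minute and a half of sustained full blocks.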

Usually, people are required to provide evidence to prove a claim as opposed to dismissing a claim for a lack of evidence, but I’d point at the history of Eth so far. The closest you would get to manipulation is pools including their payout txns on blocks for 1 gwei, but that is a far cry from the idea of a consistent pressure caused by filling blocks with junk transactions.

Again, EIP-1559 is not trying to reduce gas prices. That is not a goal of this EIP. There are a number of efforts underway to reduce gas prices (increase supply of block space) but that is orthogonal to this EIP. If your goal is reduced gas costs, I encourage you to support and offer assistance to people building those solutions like rollups and sharding.

EIP-1559 is not trying to prevent large gas price surges. I don’t know who is thinking this, but please send them links to resources that help them better understand the purpose of EIP-1559 and what it solves!


Oh, I hadn’t noticed this asymmetry; it would be interesting to see whether you get substantially different results without it.

Yeah, that’s certainly useful in its own right, it’s good to make sure that people will have an incentive to switch to a new system before it’s put into production. :slight_smile:

I guess it might be possible to game it even though I can’t think of any attack vectors off the top of my head either. The perverse incentive to manipulate it seems somewhat reduced though since people would be able to stop using it if it starts returning garbage values at some point :wink: . Anyway I didn’t intend that as some sort of fail-proof oracle aiming to solve all of our gas estimation problems, but rather as a practical experiment in order to better understand whether the stickiness we observe in current gas price oracles is the result of some essential limitation of the underlying gas pricing mechanism or whether it’s something we could improve by coming up with better estimation algorithms.

I feel like we keep talking past one another on this one… I understand that your final intention here is not to reduce gas prices, but the point I was trying to make in the paragraph you quoted is that it’s probably worth making some sort of quantitative comparison (in terms of whatever usability metric you do care about here, like variability in the transaction inclusion delay) between this proposal and a massively simpler one that simply increases the per-block gas supply by a small amount. If the latter gives you a comparable benefit and also reduces gas prices across the board maybe it’s worth doing just that by decree, since it doesn’t entail nearly as many risks as this proposal and it makes a larger share of the community better off. Right?

That’s fair. But whew, there is so much disinformation in this regard that I doubt I can dispel this myth on my own without the help of someone more influential in this community; more than half of the results I got when I searched the web when I first heard about this proposal pointed me in the wrong direction. Even Tim Roughgarden implies this will mitigate high transaction fees in section 3.2.2 of his article. And I think he’s right – with an appropriately chosen base fee averaging parameter.

If you dislike looking at the trade-off I was referring to as one between your ability to mitigate the variability of transaction fees and the usability benefit you’ve been arguing for (of being able to bid for your marginal utility), you can look at it roughly equivalently as a trade-off between reducing transaction delays (which according to @timbeiko’s “Why 1559?” seems to be an explicit goal) and the same usability benefit you were talking about: If the base fee averaging is so fast that it can go through the roof from one block to the next you’ve lost most of the promised elasticity benefit of double-size blocks: users may end up waiting unpredictable amounts of time even though they had set a cap that seemed just fine based on the demand they had just observed, or they may end up hugely overpaying if they had dared to set the cap according to their true marginal utility…
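The trade-off sketched above can be made concrete with a quick comparison (the 1/2 rate is purely hypothetical, for contrast with the proposed 1/8): under a sustained demand spike that keeps blocks maximally full, a faster averaging rate prices out a just-observed cap within a block or two.

```python
def basefee_after(full_blocks, rate, basefee=100.0):
    """Basefee after `full_blocks` consecutive maximally full (2x target) blocks."""
    for _ in range(full_blocks):
        basefee *= 1 + rate
    return basefee

for rate in (1/8, 1/2):
    print(rate, [round(basefee_after(n, rate)) for n in range(5)])
# rate 1/8: [100, 112, 127, 142, 160] -> a cap near 100 stays viable for several blocks
# rate 1/2: [100, 150, 225, 338, 506] -> the same cap is hopeless almost immediately
```

The slower rate preserves the elasticity benefit of double-size blocks; the faster one reintroduces exactly the unpredictable waits (or overpayment) described above.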

Increasing block size is a temporary solution as usage is likely to climb once gas prices decline, as usability attracts usage, and so even if we have a period of less-than-full blocks (gas prices at marginal opportunity cost for inclusion), it is likely that we would return to “competition for block space” at some point after that. Also, block size increases come with the rather large problem of state growth, which is a major problem with Ethereum right now. It could very easily be argued that a block size decrease would probably make sense for the long term health of Ethereum.

Check out A Tale of Two Pricing Schemes. A User Story | by Micah Zoltu | Coinmonks | Medium (the linked section specifically) on why gas price estimation is an inherently hard problem (it is chaotic). There certainly may be room for improvement, but I don’t believe the problem is fully solvable just via better oracles, as even the best theoretical oracle we could build today would not be perfect due to the number of inputs into the system (all human users of the system at every moment in time).

And that’s great. Right? You will have made the network accessible to even more users. Resulting fees will still be lower (or equivalently since that’s what you seem to care about the most: delays will still be lower at a given price point) since otherwise those new low-marginal-utility users you have attracted won’t have any reason to stick around.

I agree that the cost of state growth is a serious problem that affects most blockchain networks currently in production, but like any cost it can be quantified and folded into the equation I was referring to as “whatever usability metric you do care about here”. The experiment still makes sense if we consider that additional term, right?

Vitalik’s article “Blockchain Resource Pricing” has some interesting discussion of the marginal social cost to the network as a function of the block size, and apparently he concludes that the curve is relatively flat in a neighborhood of the block size values commonly used today. Ironically this assumption lies at the very core of EIP-1559: in the promise that it will achieve better economic efficiency, and in the assumption that it’s acceptable to double the allowed block size overnight (the indirect market incentive introduced to try to keep that from happening is no guarantee that there won’t be substantial state growth – but maybe that’s okay for the time being until Ethereum 2 is fully functional).

So as far as I can tell if you turn out to be right and even a small increase in block size leads to an unmanageable explosion of social cost, this proposal is inevitably flawed, too.

I agree that this is a really hard problem, but my intuition also tells me that if you have been able to construct a more efficient gas price estimator on-chain, one should also be able to replicate that estimator off-chain with at least similar properties. Why not try to verify that experimentally?


There still seems to be a misunderstanding about the purpose of EIP-1559. Its purpose is not to lower gas prices for anyone, nor to speed up transaction inclusion at a given price point. Its purpose is to make it so a user can reliably (within some bounds) get a transaction included, and remove the need for users to participate in a first price auction (which is a notoriously difficult auction type to be good at).

We aren’t trying to attract low marginal utility users with EIP-1559. We are trying to attract users who care about simplicity and usability. Removing the first price auction mechanism allows users with high marginal utility for block space who care a lot about simplicity and usability to utilize the platform.

If the dev team’s plate wasn’t so full I would lobby to lower the current block size. That is to say, in my opinion the block size is already way too high at the moment, and there is no room (in fact negative room) for additional increases to block size.

Every analysis I have seen and done indicates that state growth (over sufficient time periods for averages to work out) does not change with EIP-1559. If you believe there is a flaw in the logic that leads to this conclusion I would definitely like to hear it. So far I haven’t heard any complete/sound arguments that state growth will increase overall with the introduction of EIP-1559.

Many people, including myself, have tried to solve this problem. If you have a better solution than what is out there I strongly recommend either building a tool that utilizes your solution, or proposing the solution to the community (e.g., on ethresear.ch). All of the block price estimation tools out there are experimental attempts to address the gas price estimation issues, and they all fail in one way or another so far.


The most valuable thing that miners will have is the tx order in blocks. It’s extremely valuable for DeFi.
In the current system, users can fight to be #1 in a block by outbidding each other transparently through the mempool (if miners do not trade themselves). I would not be surprised if this became a non-transparent market in which DeFi people bribe miners off-chain for a specific spot in the block, paid on top of the tip_fee, with this “tx_order_fee” forming an off-chain bidding market – essentially recovering the ~30% income decrease introduced by EIP-1559.

EIP-1559 will encourage miners to decrease transparency.

Miners will fill base_fee-0 blocks with their own txs (miner payouts) and indirectly push the tip_fee higher … which is what they could do now as well. However, with 1559 this is their only way to avoid paying any of the base_fee.
The incentive to include other txs in base_fee-0 (and other low-base_fee) blocks is very small (just the tip_fee), the complete opposite of what is happening now.

If miners instead received a piece of the base_fee, things would look different.


Miners will fill base_fee 0 blocks with their own tx’s (miner payouts) and indirectly push the tip_fee higher … which is what they could do now as well. However, with 1559 this is their only way to not pay any of the base_fee.

If the base fee is 0, then that means that demand for block space is low at the moment. That’s when time-insensitive users, including miners, should be pushing txs. Imo this is the system working as intended. In practice, though, a non-ghost chain will probably rarely or never have a zero base fee.

Mining pools will need to pay out at some point; if the costs of the base_fee become too high, they will outweigh the incentive not to mine empty blocks and will encourage pools to drive the base_fee to 0.

This seems unlikely to me, for three reasons:

  1. Losing a lot of tips
  2. Gradual transition to base_fee zero requires prolonged miner collusion
  3. Gas optimized payouts should be able to make this a non-issue (not totally sure about this)

Payouts need to be made at some point. The tip_fee is independent of the base_fee, so driving the base_fee to 0 makes sense depending on the number of payouts that are queued up…

I wasn’t trying to change your mind about the purpose of EIP-1559, nor convince you to attract low-marginal-utility users except possibly as a side effect. I was just trying to show the contradiction in your argument that increasing supply by a finite amount is useless – it will still have a lasting impact on how reliably users can get transactions included (hopefully I got what you want right this time) at any given price point, which might be comparable to the benefit obtained by this proposal according to the same metric. And if it is, isn’t it worth considering that alternative seriously given that it’s a massively simpler proposal which will also incidentally benefit low-marginal-utility users?

Reducing block sizes would hurt a lot of users. If the necessity to reduce block sizes is as urgent as to justify such a compromise then the worst thing that could possibly happen now is a proposal lifting the block limits to 2x the current values…

I don’t think it’s fail-proof, like most market incentives trying to steer people away from doing bad things. I can think of several potential externalities completely out of our control that could lead to violent price movements (not necessarily of ETH/USD itself but even of other assets relative to ETH) which could disturb that feed-back loop by shifting the equilibrium base fee by orders of magnitude.

E.g., consider what could happen if you do declare a nuclear war ( :wink: ) on the miners and they truly form a cartel and retaliate by taking advantage of their majority hashpower in order to do something really bad, causing the price of ETH to plummet due to the markets losing confidence in it. This would momentarily make the base fee really cheap in absolute terms and give people a strong incentive to sell off, suddenly making most blocks double-full and further decreasing the value of the currency, possibly to the extent that the price drop neutralizes the effect of the hard-coded base fee increase, allowing people to continue filling up blocks to 200% for an extended amount of time. At the end of this downwards spiral a new equilibrium price may be reached, at which point blocks will return to 100% full, but there is no guarantee that it will be followed by a period of <100% full blocks, since there is no guarantee that either the base fee or the currency it’s denominated in will ever retrace their way back to their original values, leading to a net state growth larger than the present one.

And no this doesn’t mean that I’m pessimistic nor “bearish” on Ethereum, I’m just trying to understand how this market feed-back mechanism would behave under large unexpected price movements (call it a form of risk management).

Incidentally, state growth isn’t the only reason why I think that the steep dependency you are depicting between block size and social cost would imply a serious flaw in this proposal: as you can see from page 19 of Vitalik’s “Blockchain Resource Pricing”, the key requirement for the present proposal to deliver an economic efficiency improvement is C’’ < D’’ (i.e., the absolute value of the slope of the marginal social cost curve being strictly lower than that of the demand function). If you’re right and there is a catastrophic increase in social cost anywhere close to the current block size, then we are far better off setting a hard cap at whatever level of supply we consider sustainable, rather than relying on market feedback mechanisms to regulate it.


Payouts need to be paid at some point. The tip_fee is independent from the base_fee, so driving it to 0 makes sense depending on the amount of payouts that are queued up…

I think one of us is misunderstanding how the EIP works. My understanding is that the base fee is driven down when prior blocks are unfilled. In order to artificially reduce it, miners would have to exclude transactions which could otherwise have been included. In this way, they lose out on tips that they could have collected.
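To put a number on the cost of that strategy (assuming the canonical 1/8 update rate): basefee only falls multiplicatively, so driving it down requires many consecutive near-empty blocks, each one forgoing tips.

```python
import math

RATE = 1 / 8  # each empty block: basefee *= (1 - RATE)

def empty_blocks_to_divide(factor, rate=RATE):
    """Consecutive empty blocks needed for basefee to shrink by `factor`."""
    return math.log(factor) / -math.log(1 - rate)

print(empty_blocks_to_divide(2))   # ≈ 5.2 empty blocks to halve basefee
print(empty_blocks_to_divide(10))  # ≈ 17.2 empty blocks for a 10x drop
```

So a meaningful basefee drop demands sustained collusion across whoever mines those ~17+ blocks, which is the "prolonged miner collusion" point in the list above.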

E.g consider what could happen if you do declare a nuclear war ( :wink: ) on the miners and they truly form a cartel and retaliate by taking advantage of their majority hashpower in order to do something really bad, causing the price of ETH to plummet due to the markets losing confidence in it. This would momentarily make the base fee really cheap in absolute terms and give people a strong incentive to sell off, suddenly making most blocks double-full and further decreasing the value of the currency, possibly to the extent that the price drop neutralizes the effect of the hard-coded base fee increase, allowing people to continue filling up blocks to 200% for an extended amount of time. At the end of this downwards spiral a new equilibrium price may be reached, at which point blocks will return to 100% full, but there is no guarantee that it will be followed by a period of <100% full blocks, since there is no guarantee that neither the base fee nor the currency it’s denominated in will ever retrace their way back to their original values, leading to a net state growth larger than the present one.

This is like saying, “consider that in a nuclear war, fishery stocks could fall really low”. This is kind of beside the point. If drastic changes occur in the ecosystem, then obviously a lot of assumptions go out the window.

Heh, maybe you don’t consider the rather dramatic scenario I was using as an example to be particularly likely (I don’t know what to think anymore after seeing the mutual threats of escalation around the internet…), but I guess you agree that drastic, irreversible price movements happen all the time in the cryptocurrency world, and under such conditions the rate of state growth would significantly deviate from the one expected for this proposal. I don’t personally consider that a big problem unless @MicahZoltu is right and even a small increase in the rate of state growth would pose an unmanageable problem for the network, in which case we should probably prefer one of the alternatives that can deliver predictable state growth, right?
