I think at this point pretty much anything other than a two-line change in the style of EIP-4488 or 4490 is just too complex, and not going to make it for that reason alone. The whole point is to make a quick-and-dirty solution, because rollups need it fast.
The longer-term solution (or I guess medium-term solution) is to implement the beacon chain sharding spec (which is not that complicated) and only turn on 1-4 shards at first. That gives us some dedicated 2 MB of space per block with its own efficient fee market, and because there are only a few shards, nodes can still fully validate availability and we don’t have to worry about p2p networking.
Assuming 300 bytes is the stipend per transaction, transactions with <= 300 bytes of calldata won’t provide extra room for transactions that use > 300 bytes.
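A minimal sketch of that non-transferable-stipend reading: each tx gets a 300-byte calldata stipend, and only calldata beyond the stipend counts against the shared block budget. The constant names and the budget value here are illustrative, not from any EIP text.

```python
CALLDATA_PER_TX_STIPEND = 300        # bytes of "free" calldata per tx
BLOCK_CALLDATA_BUDGET = 1_048_576    # hypothetical shared block budget (bytes)

def counted_calldata(tx_calldata_len: int) -> int:
    # Calldata at or under the stipend is free; only the excess counts.
    return max(tx_calldata_len - CALLDATA_PER_TX_STIPEND, 0)

def block_calldata_used(tx_calldata_lens) -> int:
    # Unused stipend is NOT pooled: a tx using 100 bytes contributes 0,
    # it does not hand its 200 spare bytes to heavier transactions.
    return sum(counted_calldata(n) for n in tx_calldata_lens)

# A 100-byte tx contributes nothing either way...
assert counted_calldata(100) == 0
# ...so small txs plus one 1000-byte tx count only the 700 excess bytes.
assert block_calldata_used([100, 250, 1000]) == 700
```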
I would say the main issue with this approach is that it would reduce the average-case maximum without reducing the worst-case maximum; since safe parameters have to be set around the worst case, lowering only the average case gives up throughput without making the worst case any safer, so it risks decreasing total scalability.
I am trying to understand why the so-called “multi-dimensional complexity” problem associated with this proposal only applies to block producers. Don’t users and wallets also have a potential issue here, as they need to analyse the two dimensions when setting the tip/fee?
Technically only if the tx has more than 300 bytes, and even then, if they keep setting a low priority fee, their tx would just float around for a few extra blocks until a block that’s <25% full comes along (which happens quite frequently). Use cases that involve txs with a really huge amount of data (primarily rollups, but also contract creations) may eventually require some special logic, but only if block sizes actually start regularly hitting the calldata limit.
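An illustrative sketch (not any real client’s inclusion policy) of why a low-tip, large-calldata tx still lands: the sender just waits until a block comes along that is under ~25% full by gas, which happens often enough that the wait is only a few blocks. The gas limit and threshold here are assumptions for the example.

```python
GAS_LIMIT = 30_000_000       # assumed block gas limit
FULLNESS_THRESHOLD = 0.25    # "block that's <25% full"

def fits_large_calldata_tx(block_gas_used: int) -> bool:
    # The large tx fits comfortably once demand drops below the threshold.
    return block_gas_used < FULLNESS_THRESHOLD * GAS_LIMIT

def blocks_until_inclusion(gas_used_per_block):
    # Count how many blocks pass before one is <25% full.
    for i, used in enumerate(gas_used_per_block):
        if fits_large_calldata_tx(used):
            return i
    return None  # never found a sufficiently empty block

# With demand oscillating around the EIP-1559 target, near-empty blocks
# show up within a handful of slots in this toy trace.
assert blocks_until_inclusion([28_000_000, 16_000_000, 14_000_000, 6_000_000]) == 3
```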