A surprising decline in blob usage suggests the network may have solved the wrong problem with the Fusaka upgrade.

Ethereum activated the Fusaka upgrade on December 3, 2025, increasing the network’s data availability capacity through blob parameter overrides (BPOs) that incrementally expand the blob target and maximum.
Across two subsequent adjustments, the target rose from 6 blobs per block to 10, then 14, with the maximum reaching 21. The goal was to reduce Layer 2 rollup costs by raising throughput of blob data: the compressed transaction bundles that rollups publish to Ethereum for security and finality.
Three months of data collection revealed a gap between capacity and utilization. A MigaLabs analysis of more than 750,000 slots since Fusaka activation showed that the network never reached the 14-blob target.
Median blob usage actually decreased after the first parameter tuning, and blocks containing more than 16 blobs had a higher miss rate, indicating reduced reliability at the edge of the new capacity.
The report’s conclusion is direct: blob parameters should not be increased further until the miss rate at high blob counts normalizes and demand for the headroom already created materializes.
What Fusaka changed and when
Ethereum’s pre-Fusaka baseline, set by EIP-7691, targeted 6 blobs per block (maximum 9). The Fusaka upgrade introduced two sequential blob parameter override adjustments:
The first, activated on December 9, raised the target to 10 and the maximum to 15. The second, activated on January 7, 2026, raised the target to 14 and the maximum to 21.
Neither change required a hard fork; the BPO mechanism lets Ethereum adjust capacity through client configuration rather than protocol-level upgrades.
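To make the mechanism concrete, here is a minimal sketch of how a client might represent such a schedule. The activation epochs below are placeholders, not the real mainnet values, and real clients encode this in their own configuration formats:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlobSchedule:
    activation_epoch: int  # placeholder epochs below, not the real mainnet values
    target: int            # blob count the fee market steers each block toward
    maximum: int           # hard cap on blobs per block

# Fusaka-era parameters per the Ethereum Foundation's announcement.
SCHEDULE = [
    BlobSchedule(activation_epoch=0,       target=6,  maximum=9),   # EIP-7691 baseline
    BlobSchedule(activation_epoch=100_000, target=10, maximum=15),  # BPO1, Dec 9, 2025
    BlobSchedule(activation_epoch=110_000, target=14, maximum=21),  # BPO2, Jan 7, 2026
]

def params_at(epoch: int) -> BlobSchedule:
    """Return the blob parameters in force at a given epoch."""
    active = [s for s in SCHEDULE if s.activation_epoch <= epoch]
    return max(active, key=lambda s: s.activation_epoch)
```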
MigaLabs Analytics, which published reproducible code and methodology, tracked blob usage and network performance throughout this transition.
Despite the expanded capacity, the median number of blobs per block fell from 6 before the first override to 4 afterward. Blocks containing more than 16 blobs remain extremely rare: each blob count above 16 occurred only 165 to 259 times across the entire observation window.
The network is carrying unused headroom.
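As a rough sketch of how that gap shows up in per-block data (the sample values here are toy numbers, not the MigaLabs dataset):

```python
from statistics import median

# One integer per proposed block; a real analysis would load these from a
# beacon-node API or a dataset like MigaLabs' (loading step omitted here).
blobs_per_block = [3, 4, 4, 2, 6, 4, 5, 1, 4, 3]  # toy sample

TARGET = 14
med = median(blobs_per_block)
utilization = sum(blobs_per_block) / (len(blobs_per_block) * TARGET)
print(f"median blobs/block = {med}, utilization vs. target = {utilization:.0%}")
# A median around 4 against a target of 14 is exactly the unused-headroom pattern.
```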
One parameter discrepancy is worth flagging: the report’s timeline text describes the first override as raising the target from 6 to 12, but the Ethereum Foundation’s mainnet announcement and client documentation describe the adjustment as 6 to 10.
This article uses the Ethereum Foundation’s figures: 6/9 at baseline, 10/15 after the first override, and 14/21 after the second. The report’s dataset of observed utilization and miss-rate patterns is nonetheless treated as the empirical backbone.

More blobs, higher miss rates
Network reliability, measured by missed slots (slots where a block fails to propagate or validate in time), shows a clear pattern.
At low blob counts, the baseline miss rate is about 0.5%. Once a block carries more than 16 blobs, the failure rate climbs into the 0.77% to 1.79% range. At the 21-blob maximum introduced by the second override, the miss rate reaches 1.79%, more than three times the baseline.
The analysis breaks this down by blob count from 10 to 21, showing a gradual degradation curve that accelerates beyond the 14-blob target.
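As an illustration of how such a curve can be derived from slot-level data, a minimal sketch follows. The (blob_count, missed) schema is a hypothetical flattening, and MigaLabs’ published methodology may differ:

```python
from collections import defaultdict

def miss_rate_by_blob_count(slots):
    """slots: iterable of (blob_count, missed) pairs -- a hypothetical
    flattening of per-slot data; MigaLabs' actual schema may differ.
    For missed slots, blob_count would have to come from gossiped blobs
    or other out-of-band observation, since no block landed on chain."""
    totals = defaultdict(lambda: [0, 0])  # blob_count -> [missed, seen]
    for blob_count, missed in slots:
        totals[blob_count][0] += int(missed)
        totals[blob_count][1] += 1
    return {count: missed / seen
            for count, (missed, seen) in sorted(totals.items())}

# Toy data mirroring the reported shape: 1 miss in 200 low-blob slots (0.5%)
# and 2 misses in 112 max-capacity slots (~1.79%).
toy = [(4, False)] * 199 + [(4, True)] + [(21, False)] * 110 + [(21, True)] * 2
print(miss_rate_by_blob_count(toy))  # {4: 0.005, 21: 0.0178...}
```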
This performance drop matters because it indicates that network infrastructure, including validator hardware, network bandwidth, and proof verification timing, struggles to process blocks at full capacity.
If demand eventually rises to fill the 14-blob target or push toward the 21-blob maximum, the elevated failure rate could translate into meaningful finality delays or reorg risks. The report frames this as a stability boundary: the network is technically capable of handling high-blob blocks, but doing so consistently and reliably remains an open problem.


Blob economics: why the price floor matters
Fusaka did not only expand capacity. It also changed blob pricing via EIP-7918, which introduced a reserve price floor to keep the blob fee market from collapsing to 1 wei.
Before this change, if execution costs dominated and blob demand stayed low, the blob base fee could trend downward until it virtually disappeared as a price signal. Layer 2 rollups pay blob fees to publish transaction data to Ethereum, and those fees should reflect the computational and network costs blobs impose.
When fees drop to near zero, the economic feedback loop breaks down and rollups consume capacity without paying for it proportionately. This causes the network to lose visibility into actual demand.
EIP-7918’s reserve price floor ties blob fees to execution costs, ensuring that prices remain a meaningful signal even when demand is weak.
This avoids the free-riding problem in which cheap blobs encourage wasteful usage, and it yields clearer data for future capacity decisions: if blob fees keep rising despite added capacity, demand is real; if they sit at the floor, there is headroom.
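At the heart of EIP-7918 is a single comparison between execution costs and blob costs. Below is a simplified sketch of that check using the constants from the EIP; the spec’s full excess-blob-gas update rule does more than this:

```python
GAS_PER_BLOB = 2**17    # 131,072 units of blob gas per blob (EIP-4844)
BLOB_BASE_COST = 2**13  # reserve-price constant introduced by EIP-7918

def floor_is_binding(execution_base_fee: int, blob_base_fee: int) -> bool:
    """When execution costs dominate blob costs by this ratio, EIP-7918's
    modified excess-blob-gas update stops the blob base fee from decaying
    further, so it cannot slide toward 1 wei as a dead price signal."""
    return BLOB_BASE_COST * execution_base_fee > GAS_PER_BLOB * blob_base_fee

# Worked example: at a 1 gwei execution base fee, the implied floor is
# 8192/131072 of that, i.e. 0.0625 gwei per unit of blob gas.
print(floor_is_binding(10**9, 62_500_000))  # exactly at the floor -> False
print(floor_is_binding(10**9, 62_499_999))  # just below the floor -> True
```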
Early data from Hildobby’s Dune dashboard, which tracks Ethereum blobs, shows that blob fees have stabilized since Fusaka, rather than continuing the downward trend seen in previous periods.
Average blobs per block confirm MigaLabs’ finding that utilization did not spike to fill the new capacity. Blocks typically carry fewer blobs than the 14-blob target, and the distribution is heavily skewed toward lower counts.


What the data reveals about effectiveness
Fusaka succeeded in expanding technical capacity and demonstrated that the blob parameter override mechanism works without a contentious hard fork.
The reserve price floor appears to be working as intended, preventing blob fees from becoming economically meaningless. However, utilization lags behind capacity, and reliability deteriorates noticeably at the edge of the new capacity.
The miss-rate curve shows that Ethereum’s current infrastructure comfortably handles the pre-Fusaka baseline and the first override’s 10/15 parameters, but begins to strain beyond 16 blobs.
That creates a risk profile: if Layer 2 activity spikes and regularly pushes blocks toward 21 blobs, the network could face miss rates high enough to compromise finality and reorg resistance.
Demand patterns offer another signal. The drop in median blob usage after the first override, despite the capacity increase, indicates that Layer 2 rollups are not constrained by current blob availability.
Either their transaction volume has not grown enough to require more blobs per block, or they are optimizing compression and batching to fit existing capacity rather than expanding usage.
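A back-of-the-envelope sketch shows why batching and compression can cap blob demand; every input here is an illustrative assumption, and real rollup encodings differ:

```python
import math

# EIP-4844 blobs are 4096 field elements of 32 bytes (128 KB); roughly one
# byte per element is lost to field encoding, leaving ~127 KB usable.
USABLE_BYTES_PER_BLOB = 4096 * 31

def blobs_needed(tx_count: int, avg_tx_bytes: int, compression_ratio: float) -> int:
    """Rough blob count for one rollup batch; every input is an
    illustrative assumption, and real rollup encodings differ."""
    compressed_bytes = tx_count * avg_tx_bytes * compression_ratio
    return math.ceil(compressed_bytes / USABLE_BYTES_PER_BLOB)

# Example: 5,000 transactions at ~150 bytes each, compressed 10x,
# fit in a single blob -- better compression directly lowers blob demand.
print(blobs_needed(5_000, 150, 0.10))  # -> 1
```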
Blobscan, a dedicated blob explorer, shows individual rollups publishing relatively consistent blob counts over time rather than ramping up to exploit the new headroom.
The concern before Fusaka was that limited blob capacity would bottleneck Layer 2 scaling and that rollup costs would keep rising as networks competed for scarce data availability. Fusaka removed the capacity constraint, but the bottleneck appears to lie elsewhere.
Rollups are not filling the available space, which means either that demand has not yet arrived or that other factors, such as sequencer economics, user activity, and cross-rollup fragmentation, are limiting growth more than blob availability is.
What comes next?
Ethereum’s roadmap includes PeerDAS, a more fundamental redesign of data availability sampling that further expands blob capacity while improving decentralization and security properties.
However, the Fusaka results suggest that raw capacity is not the current binding constraint.
The network has room to grow into the 14/21 parameters before another expansion is needed, and the stability curve at high blob counts suggests infrastructure may need to catch up before capacity can increase again.
The miss-rate data provides a clear boundary condition: blocks above 16 blobs still miss at elevated rates, and raising capacity again before that normalizes risks systemic instability surfacing during periods of high demand.
The safer approach is to let utilization grow toward the current target, monitor whether failure rates improve as clients optimize for higher blob loads, and raise parameters only once the network demonstrably handles the edge cases.
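Expressed as a decision rule, that approach might look like the following sketch; the thresholds are illustrative assumptions, not values from the report:

```python
BASELINE_MISS_RATE = 0.005  # ~0.5% observed at low blob counts

def safe_to_raise_params(median_blobs: float, target: int,
                         high_count_miss_rate: float) -> bool:
    """Gate a parameter bump on both conditions the report names:
    demand filling current headroom, and edge-case reliability
    returning near baseline. Thresholds are illustrative assumptions."""
    demand_filled = median_blobs >= 0.8 * target                      # assumed 80% fill
    edge_reliable = high_count_miss_rate <= 1.5 * BASELINE_MISS_RATE  # assumed 1.5x band
    return demand_filled and edge_reliable

# With the reported figures (median ~4 against a target of 14, ~1.79% miss
# rate at 21 blobs), both conditions fail:
print(safe_to_raise_params(4, 14, 0.0179))  # -> False
```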
Fusaka’s effectiveness depends on which metric you choose. It successfully scaled capacity and stabilized blob pricing through the reserve price floor. It did not produce an immediate jump in utilization, nor did it resolve stability issues at full capacity.
The upgrade provides room for future growth, but whether that growth materializes remains an open question the data has yet to answer.



