KZG Commitments and the Future of Data Availability

1. The Architectural Crisis of Decentralized Networks

The trajectory of blockchain development over the last decade has been defined by a relentless struggle against the Scalability Trilemma, a conceptual framework positing that a decentralized network can simultaneously maximize only two of three properties: decentralization, security, and scalability.1 As adoption of networks like Ethereum grew, the limitations of the monolithic architecture—where a single chain handles execution, settlement, consensus, and data availability—became painfully apparent. In this legacy model, every full node must download, verify, and store every transaction. This redundancy, while ensuring robust censorship resistance and security, imposes a severe ceiling on throughput. If block sizes are increased to accommodate global-scale transaction volumes, the hardware requirements for running a node skyrocket, inevitably centralizing the network into the hands of a few data center operators and compromising the very decentralization that gives the network its value.3

The industry’s response has been a paradigm shift toward modularity. The modular blockchain thesis advocates for unbundling the core functions of a blockchain into specialized layers. Execution is offloaded to high-throughput rollups (Layer 2s), which process transactions off-chain and post succinct proofs to the main chain. However, this shift exposed a new, more fundamental bottleneck: Data Availability (DA).2

1.1 Defining the Data Availability Problem

The Data Availability problem is distinct from the problem of long-term data storage. Storage is concerned with retrievability of historical data (e.g., “What was the state of the ledger five years ago?”). Data Availability, by contrast, is a real-time publishing guarantee that answers a critical question: “Has the data required to verify the latest block been published to the network right now?”.5

In the context of Layer 2 rollups, this guarantee is existential. Rollups achieve scalability by performing computation off-chain and submitting a cryptographic summary (a state root) to the Layer 1 (L1).

  • Optimistic Rollups rely on a “challenge period” where honest actors can submit a fraud proof if the state root is invalid. However, to construct a fraud proof, the challenger requires access to the raw transaction data associated with the state transition. If a malicious sequencer publishes a state root but withholds the underlying transaction data, no fraud proof can be generated. The invalid state finalizes, and funds can be stolen.
  • Zero-Knowledge (ZK) Rollups provide validity proofs that ensure the state transition is mathematically correct. However, if the data is withheld, the system can enter a failure state where users know the chain is valid but do not know the current state of their own accounts (the “frozen state” problem), preventing them from generating the Merkle proofs needed to withdraw funds to the L1.7

Thus, the security of any rollup is strictly bounded by the capacity of its underlying Data Availability layer. Prior to recent upgrades, Ethereum rollups were forced to publish transaction batches as calldata, the transaction input field of the Ethereum Virtual Machine (EVM) that was designed for passing function arguments rather than bulk data. This method was inefficient and prohibitively expensive, with over 90% of rollup transaction fees often attributed to the cost of posting data to Ethereum.4 This economic reality created a “data tax” that prevented decentralized applications from achieving the cost structures necessary for mass adoption.

1.2 The Transition from Replication to Sampling

To solve the DA bottleneck, blockchain architects faced a paradox: how can the network verify that a massive block of data (potentially gigabytes in size) is available without every node downloading the entire block? The solution lies in shifting the verification model from replication (everyone downloads everything) to sampling (everyone downloads a tiny piece).

Data Availability Sampling (DAS) allows nodes to verify data availability with high probabilistic certainty by downloading random chunks of the block.9 However, simple sampling is insufficient; a malicious actor could hide a single critical transaction (a “1-of-N” attack), invalidating the entire block while evading detection by random sampling. To mitigate this, the data must be “erasure coded”—a technique that extends the dataset with redundant parity data, such that the entire dataset can be reconstructed from a subset of the chunks.5

If a block is erasure-coded such that any 50% of the chunks can reconstruct the whole, an attacker must withhold at least 50% of the data to make the block unavailable. The probability that a light node’s random samples all miss such a large withholding, and therefore fail to detect it, falls exponentially with each additional sample.
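
The arithmetic behind this guarantee is simple enough to check directly. The sketch below (pure Python, assuming samples are drawn independently with replacement and a 50% reconstruction threshold) computes the probability that a single light node fails to detect a withholding attack after a given number of samples.

```python
import math

def miss_probability(k_samples: int, withheld_fraction: float = 0.5) -> float:
    """Chance that every one of k independent samples lands on an available
    chunk, i.e. the withholding goes unnoticed by this particular light node."""
    return (1.0 - withheld_fraction) ** k_samples

for k in (10, 30, 75):
    p = miss_probability(k)
    print(f"{k:>3} samples -> miss probability = 2^{math.log2(p):.0f}")
```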

The implementation of DAS requires a cryptographic commitment scheme to ensure that the erasure coding is performed correctly—that the extended data is mathematically consistent with the original data. While Merkle trees have served as the industry standard for commitments, they are inefficient for this specific purpose due to logarithmic proof sizes and the complexity of proving erasure coding correctness. The industry has thus coalesced around a more advanced primitive: Kate-Zaverucha-Goldberg (KZG) commitments.12

2. Mathematical Foundations of KZG Commitments

KZG commitments, introduced by Aniket Kate, Gregory M. Zaverucha, and Ian Goldberg in 2010, are a class of polynomial commitment schemes that offer properties uniquely suited to the constraints of blockchain scalability: constant-sized proofs and homomorphic properties.12 Understanding KZG requires a deep dive into the underlying algebra of elliptic curves and polynomials.

2.1 Polynomial Representation of Data

At the core of the KZG scheme is the representation of data as a polynomial. Any vector of data $D = (d_0, d_1, \dots, d_{n-1})$ can be transformed into a polynomial $P(x)$ of degree $n-1$. This is typically achieved using Lagrange interpolation, constructing the polynomial that passes through a chosen set of points. For efficiency, the evaluation points are usually the powers of a primitive root of unity $\omega$ in a finite field, such that $P(\omega^i) = d_i$.13

Once the data is encoded as a polynomial $P(x)$, the problem of proving data integrity becomes a problem of proving properties about the polynomial. The “magic” of KZG is that it allows a prover to commit to this entire polynomial using a single elliptic curve point (typically 48 bytes), regardless of the polynomial’s degree (i.e., the size of the dataset).
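
As a concrete illustration, the sketch below encodes eight data chunks as evaluations of a polynomial over the toy prime field $p = 337$, in which 85 is a primitive 8th root of unity. The field, the data values, and the naive Lagrange evaluation are illustrative choices only; production systems use the BLS12-381 scalar field and FFT-based interpolation.

```python
# Toy prime field: p = 337, in which 85 is a primitive 8th root of unity
# (85^8 = 1 mod 337), so eight chunks map to evaluations at powers of omega.
P = 337
OMEGA = 85

def lagrange_eval(xs, ys, x, p=P):
    """Evaluate the unique degree < len(xs) polynomial through (xs, ys) at x."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

data = [3, 14, 15, 92, 65, 35, 89, 79]                # d_0 .. d_7
xs = [pow(OMEGA, i, P) for i in range(len(data))]     # evaluation domain omega^i

# The interpolated polynomial reproduces the data on the domain ...
assert all(lagrange_eval(xs, data, x) == d for x, d in zip(xs, data))
# ... and can be evaluated anywhere else, which is what openings rely on.
print("P(2) =", lagrange_eval(xs, data, 2))
```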

2.2 The Role of Elliptic Curve Pairings

KZG commitments rely on pairing-friendly elliptic curves, such as BLS12-381. A pairing is a bilinear map $e: \mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_T$, where $\mathbb{G}_1$ and $\mathbb{G}_2$ are additive groups of points on an elliptic curve, and $\mathbb{G}_T$ is a multiplicative target group. The critical property of this map is bilinearity:

 

$$e(a \cdot P, b \cdot Q) = e(P, Q)^{ab}$$

 

This property allows the verifier to perform multiplication in the exponent, which is essential for checking polynomial equations “in the encrypted domain” without knowing the secret parameters.12
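
The bilinearity property can be checked directly, for example with the py_ecc library used in several Ethereum reference implementations (an assumed dependency here; any BLS12-381 pairing library would serve). Note that py_ecc’s pairing function takes the $\mathbb{G}_2$ point first.

```python
# Checking e(a*P, b*Q) == e(P, Q)^(a*b) on BLS12-381 with py_ecc
# (pip install py_ecc). py_ecc's pairing() expects the G2 point first.
from py_ecc.bls12_381 import G1, G2, multiply, pairing, curve_order

a, b = 7, 11  # tiny scalars for illustration; real scalars are ~255 bits

lhs = pairing(multiply(G2, b), multiply(G1, a))         # e(a*G1, b*G2)
rhs = pairing(multiply(G2, (a * b) % curve_order), G1)  # e(G1, G2)^(a*b)
assert lhs == rhs
print("bilinearity holds")
```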

2.3 The Trusted Setup: Powers of Tau

Unlike Merkle trees, which use public hash functions, KZG requires a Structured Reference String (SRS). This SRS is generated via a “Trusted Setup” ceremony, often referred to as the “Powers of Tau.”

The setup involves generating a secret scalar $\tau$ (tau) and computing a sequence of points:

 

$$SRS = \{ G \cdot \tau^0, G \cdot \tau^1, G \cdot \tau^2, \dots, G \cdot \tau^d \}$$

 

where $G$ is the generator of the group and $d$ is the maximum degree of the polynomials to be committed. Crucially, the value $\tau$ must be destroyed immediately after the setup; for this reason the secret is referred to as “toxic waste.” If an attacker ever possesses $\tau$, they can construct a polynomial $P'(x)$ such that $P'(\tau) = P(\tau)$, allowing them to forge commitments and proofs and breaking the binding property of the scheme.17

To mitigate this risk, modern trusted setups utilize Multi-Party Computation (MPC). The secret is generated sequentially by thousands of participants. Participant 1 contributes randomness $s_1$, Participant 2 contributes $s_2$, and so on. The final secret is $\tau = s_1 \cdot s_2 \cdots s_n$. For the setup to be secure, only one participant needs to be honest and destroy their contribution. The Ethereum EIP-4844 ceremony, for instance, involved over 140,000 contributions, making the probability of collusion statistically negligible.18
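
A toy version of the sequential contribution logic is sketched below (again assuming py_ecc; this is not the production ceremony client, and it omits contribution proofs and the $\mathbb{G}_2$ powers). Each participant re-randomizes the published points by their own secret, so the final SRS encodes $G \cdot \tau^i$ without any single party ever knowing $\tau$.

```python
# Toy powers-of-tau contribution flow over BLS12-381 G1, assuming py_ecc is
# installed. Illustrative only: no proofs of correct contribution, no G2
# powers, and only a handful of participants.
import secrets
from py_ecc.bls12_381 import G1, multiply, curve_order

DEGREE = 8

def initial_srs():
    # Start from tau = 1, i.e. SRS[i] = G * 1^i = G.
    return [multiply(G1, 1) for _ in range(DEGREE + 1)]

def contribute(srs):
    s = secrets.randbelow(curve_order - 1) + 1     # participant's secret
    updated = [multiply(pt, pow(s, i, curve_order)) for i, pt in enumerate(srs)]
    # The participant must now delete s: this is the "toxic waste".
    return updated

srs = initial_srs()
for _ in range(3):            # three toy participants
    srs = contribute(srs)     # SRS[i] becomes G * (s1*s2*s3)^i
print("SRS ready:", len(srs), "G1 points")
```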

2.4 Commitment, Evaluation, and Verification

The mechanics of the scheme operate in three stages:

  1. Commitment:

To commit to a polynomial $P(x) = \sum_{i=0}^{n} p_i x^i$, the prover computes a linear combination of the SRS elements:

 

$$C = \sum_{i=0}^{n} p_i (G \cdot \tau^i) = G \cdot \sum_{i=0}^{n} p_i \tau^i = G \cdot P(\tau)$$

 

The result $C$ is a single group element that binds the prover to the polynomial $P(x)$.12

  2. Evaluation (Opening):

To prove that the polynomial evaluates to $y$ at a specific point $z$ (i.e., $P(z) = y$), the prover utilizes the polynomial remainder theorem. If $P(z) = y$, then the polynomial $P(x) - y$ is divisible by $(x - z)$. The prover computes the quotient polynomial:

 

$$Q(x) = \frac{P(x) - y}{x - z}$$

The proof (or witness) $\pi$ is the commitment to this quotient polynomial:

$$\pi = G \cdot Q(\tau)$$
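
Because $(x - z)$ is a monic linear divisor, the quotient can be computed by synthetic division. A minimal sketch over the toy field $p = 337$ (illustrative, not the BLS12-381 scalar field):

```python
# Computing Q(x) = (P(x) - y) / (x - z) by synthetic division over the toy
# prime field p = 337. Coefficients are listed from the constant term upward:
# P(x) = c[0] + c[1]*x + c[2]*x^2 + ...
P_MOD = 337

def poly_eval(coeffs, x, p=P_MOD):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

def quotient(coeffs, z, p=P_MOD):
    """Divide P(x) - P(z) by (x - z); the remainder is zero by construction."""
    y = poly_eval(coeffs, z, p)
    q = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):   # Horner-style back substitution
        carry = (coeffs[i] + carry * z) % p
        q[i - 1] = carry
    return q, y

coeffs = [5, 0, 2, 7]          # P(x) = 5 + 2x^2 + 7x^3
z = 3
q, y = quotient(coeffs, z)
# Check the identity P(x) - y == Q(x) * (x - z) at an unrelated point.
x = 123
assert (poly_eval(coeffs, x) - y) % P_MOD == (poly_eval(q, x) * (x - z)) % P_MOD
print("Q(x) coefficients:", q, " P(z) =", y)
```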

  3. Verification:

The verifier knows the commitment $C$, the point $z$, the value $y$, and the proof $\pi$. They must check that the relationship $P(\tau) – y = Q(\tau)(\tau – z)$ holds. Since they do not know $\tau$, they use the pairing function:

 

$$e(C - G \cdot y, H) \stackrel{?}{=} e(\pi, H \cdot \tau - H \cdot z)$$

 

where $H$ is the generator of $\mathbb{G}_2$. If the equality holds, the verifier is cryptographically convinced that $P(z) = y$.12
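
Putting the three stages together, the following sketch runs a complete commit/open/verify cycle on BLS12-381 using py_ecc (an assumed dependency). For brevity the secret $\tau$ is generated locally, which is exactly what a real deployment must never do; it would instead load the SRS produced by the trusted-setup ceremony.

```python
# End-to-end toy KZG flow on BLS12-381, assuming py_ecc is installed.
import secrets
from py_ecc.bls12_381 import G1, G2, add, multiply, neg, pairing, curve_order

def setup(degree):
    tau = secrets.randbelow(curve_order - 1) + 1      # toxic waste (toy only)
    srs_g1 = [multiply(G1, pow(tau, i, curve_order)) for i in range(degree + 1)]
    tau_g2 = multiply(G2, tau)
    return srs_g1, tau_g2

def commit(coeffs, srs_g1):
    # C = sum_i coeffs[i] * (G * tau^i) = G * P(tau)
    acc = None
    for c, pt in zip(coeffs, srs_g1):
        if c % curve_order == 0:
            continue
        term = multiply(pt, c % curve_order)
        acc = term if acc is None else add(acc, term)
    return acc

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, curve_order) for i, c in enumerate(coeffs)) % curve_order

def open_at(coeffs, z, srs_g1):
    # Quotient Q(x) = (P(x) - P(z)) / (x - z) via synthetic division,
    # then pi = G * Q(tau) committed with the same SRS.
    y = poly_eval(coeffs, z)
    q, carry = [0] * (len(coeffs) - 1), 0
    for i in range(len(coeffs) - 1, 0, -1):
        carry = (coeffs[i] + carry * z) % curve_order
        q[i - 1] = carry
    return y, commit(q, srs_g1)

def verify(commitment, z, y, proof, tau_g2):
    # e(C - y*G, H) == e(pi, tau*H - z*H); py_ecc's pairing takes G2 first.
    lhs = pairing(G2, add(commitment, neg(multiply(G1, y))))
    rhs = pairing(add(tau_g2, neg(multiply(G2, z))), proof)
    return lhs == rhs

coeffs = [5, 0, 2, 7]                     # P(x) = 5 + 2x^2 + 7x^3
srs_g1, tau_g2 = setup(len(coeffs) - 1)
C = commit(coeffs, srs_g1)
z = 3
y, proof = open_at(coeffs, z, srs_g1)
assert verify(C, z, y, proof, tau_g2)
print(f"verified P({z}) = {y}")
```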

2.5 Advantages Over Merkle Trees

The superiority of KZG over Merkle trees in the context of Data Availability lies in succinctness and homomorphic properties.

| Feature | Merkle Trees | KZG Commitments |
| --- | --- | --- |
| Proof Size | Logarithmic, $O(\log n)$; grows with data size. | Constant, $O(1)$; fixed size (e.g., 48 bytes) regardless of data size. |
| Verification Cost | Low (hashing). | Moderate (elliptic curve pairings). |
| Setup | Transparent (no trusted setup). | Requires a trusted setup (MPC). |
| Data Extension | Requires fraud proofs for incorrect encoding. | Inherently proves correct encoding via polynomial binding. |
| Sampling Efficiency | Lower; requires branch data for every sample. | Higher; proof is constant and enables batching. |

The constant proof size is particularly critical for Data Availability Sampling (DAS). In a Merkle-based system, as the block size grows, the branches required to prove the inclusion of a sample grow, consuming valuable bandwidth. In a KZG-based system, the proof for any sample—or even a batch of samples—remains a single, tiny group element. This efficiency is the cornerstone of Ethereum’s Danksharding roadmap.13
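
The bandwidth argument can be made concrete with rough numbers, assuming 32-byte hashes for Merkle branches and a 48-byte $\mathbb{G}_1$ point for a KZG proof:

```python
import math

KZG_PROOF_BYTES = 48      # one compressed G1 point

def merkle_branch_bytes(num_chunks: int, hash_bytes: int = 32) -> int:
    """Bytes of sibling hashes needed to prove one leaf in a binary Merkle tree."""
    return math.ceil(math.log2(num_chunks)) * hash_bytes

for chunks in (4_096, 1_048_576):   # roughly one blob vs. a large sharded block
    print(f"{chunks:>9} chunks: Merkle branch {merkle_branch_bytes(chunks)} B, "
          f"KZG proof {KZG_PROOF_BYTES} B")
```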

3. Ethereum’s Modular Evolution: EIP-4844 and Proto-Danksharding

The theoretical advantages of KZG commitments were translated into practical reality with the implementation of EIP-4844, also known as Proto-Danksharding, included in the Dencun upgrade of March 2024. This upgrade marked the official beginning of Ethereum’s transition from a monolithic execution layer to a modular data settlement layer.8

3.1 The Architecture of Blob-Carrying Transactions

EIP-4844 introduced a new transaction type (Type 3) that carries “blobs” (Binary Large Objects). Unlike calldata, which is accessible to the EVM and stored permanently, blobs are ephemeral data chunks stored on the consensus layer (Beacon Chain) and are inaccessible to smart contracts. The EVM can only access a “versioned hash” of the blob’s KZG commitment.13

Blob Specifications:

  • Size: Each blob is approximately 128 KB, consisting of 4096 field elements of 32 bytes each.
  • Capacity: The upgrade initially targets 3 blobs per block (0.375 MB) with a maximum of 6 blobs (0.75 MB).
  • Persistence: Blobs are stored by consensus nodes for approximately 18 days (4096 epochs) before being pruned. This window is sufficient for L2s to challenge invalid state roots, but short enough to prevent state bloat for validators.22

3.2 The Blob Gas Market

A critical economic innovation of EIP-4844 is the separation of the fee market. Historically, all Ethereum transactions competed for the same “gas,” causing L2 data costs to spike whenever there was high demand for NFT mints or DeFi swaps on L1. EIP-4844 introduces “Blob Gas,” a multidimensional fee market that operates independently of standard execution gas.

The price of blob gas adjusts dynamically based on supply and demand, targeting an average of 3 blobs per block. If usage exceeds this target, the price increases exponentially; if it falls below, the price decays. This mechanism ensures that rollups have a predictable and generally low-cost environment for data availability, insulated from the congestion of the execution layer. Research indicates that this mechanism reduced L2 data costs by over 90% immediately following implementation, commoditizing blockspace and driving the marginal cost of L2 transactions toward zero.10
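
The exponential adjustment is implemented as an integer approximation of $\text{fee} = \text{MIN\_FEE} \cdot e^{\text{excess}/\text{FRACTION}}$. The sketch below is modeled on the fake_exponential helper and the constants published in the EIP-4844 specification at the Dencun launch; treat the exact constant names and values as assumptions to check against the current spec.

```python
# Sketch of the EIP-4844 blob base fee curve, modeled on the spec's
# fake_exponential helper (integer approximation of factor * e^(num/den)).
MIN_BASE_FEE_PER_BLOB_GAS = 1              # wei
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 2**17                       # 131072
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# Sustained demand above the 3-blob target accumulates excess blob gas and
# raises the fee exponentially; below-target demand lets it decay back to 1 wei.
for blocks_over_target in (0, 10, 100, 1000):
    excess = blocks_over_target * TARGET_BLOB_GAS_PER_BLOCK  # +3 blobs/block over target
    print(blocks_over_target, "blocks over target ->", blob_base_fee(excess), "wei")
```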

3.3 The Point Evaluation Precompile

While the blob data itself is inaccessible to the EVM, rollups—particularly ZK-rollups—need a way to prove that specific data was included in a committed blob. To facilitate this, EIP-4844 includes a point_evaluation precompile that allows a smart contract to verify a KZG proof. A contract provides the blob’s versioned hash, a commitment $C$, a point $z$, a value $y$, and a proof $\pi$, and the precompile verifies that $P(z) = y$, fundamentally linking the ephemeral data layer to the persistent execution layer.13
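
A minimal sketch of how a caller assembles the precompile’s 192-byte input, following the layout described in EIP-4844 (versioned hash, evaluation point, claimed value, commitment, proof); the helper names here are illustrative rather than taken from any particular client library.

```python
# Packing the 192-byte input for the EIP-4844 point_evaluation precompile:
# versioned_hash (32) ++ z (32) ++ y (32) ++ commitment (48) ++ proof (48).
# The versioned hash is 0x01 followed by the last 31 bytes of sha256(commitment).
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    assert len(commitment) == 48
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

def point_evaluation_input(commitment: bytes, z: int, y: int, proof: bytes) -> bytes:
    assert len(commitment) == 48 and len(proof) == 48
    data = (kzg_to_versioned_hash(commitment)
            + z.to_bytes(32, "big")
            + y.to_bytes(32, "big")
            + commitment
            + proof)
    assert len(data) == 192
    return data
```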

4. The Road to Full Danksharding: Scaling via Sampling

Proto-Danksharding (EIP-4844) alleviated the immediate cost bottleneck but did not solve the fundamental scalability constraint: every validator must still download every blob to verify its availability. To scale from the current ~0.375 MB per block to the target of 16 MB–32 MB (Full Danksharding), the network must adopt Data Availability Sampling (DAS).

4.1 The Mechanics of Sampling and Erasure Coding

Full Danksharding envisions a network where no single node downloads the entire block. Instead, verification is a collective effort derived from the statistical probability of successful sampling.

To make this secure, the system utilizes 2D Erasure Coding. The data in a block is arranged into a matrix.

  1. Extension: The data polynomial $P(x)$ for each row and column is evaluated at $2n$ points, doubling the size of the data. This redundancy ensures that any 50% of the data is sufficient to reconstruct the whole (a minimal 1D simulation follows this list).25
  2. Sampling: Nodes randomly sample cells from this matrix. A light client might request 75 random samples. If all samples are returned successfully, the probability that less than 50% of the data is available (the reconstruction threshold) is less than $2^{-75}$.
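
The sketch below simulates the 1D case over the toy field $p = 337$: four chunks are extended to eight evaluations, and the original data is then recovered from an arbitrary half of the extended chunks. Production systems use Reed-Solomon coding over the BLS12-381 scalar field with FFTs, not naive Lagrange interpolation.

```python
# Minimal 1D erasure extension over the toy field p = 337: interpolate 4 data
# chunks, evaluate at 8 points, then reconstruct the data from any 4 of the 8.
P = 337

def lagrange_eval(points, x, p=P):
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

data = [42, 7, 99, 256]                                   # original chunks
original = list(enumerate(data))                          # points (0,d0)..(3,d3)
extended = [(x, lagrange_eval(original, x)) for x in range(8)]   # 2x extension

surviving = [extended[1], extended[3], extended[6], extended[7]] # any 4 of 8
recovered = [lagrange_eval(surviving, x) for x in range(4)]
assert recovered == data
print("reconstructed from 50% of the extended chunks:", recovered)
```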

This architecture fundamentally changes the relationship between bandwidth and throughput. In a monolithic chain, throughput is capped by the bandwidth of a single node. In a Danksharded chain, throughput scales with the number of nodes. As more nodes join the network and perform sampling, the total bandwidth capacity of the network increases, allowing for larger blocks and higher throughput.8

4.2 PeerDAS: The Bridge to Scalability

Transitioning directly from Proto-Danksharding to Full Danksharding is a monumental engineering challenge involving complex 2D erasure coding schemes and massive P2P networking upgrades. The intermediate step, currently in development, is PeerDAS (EIP-7594).

PeerDAS (Peer Data Availability Sampling) introduces the concept of distributed custody without the full complexity of the 2D matrix. In PeerDAS:

  • 1D Erasure Coding: Existing blobs are erasure-coded individually (1D), rather than as a grid.
  • Custody Subnets: Validators are assigned to specific “subnets,” each responsible for storing a fraction of the columns of the erasure-coded data. For example, a validator might be assigned to store only the 8th column of every blob.
  • Sampling: When verifying a block, a node does not download the blobs. Instead, it queries its peers in the P2P network for samples. If it receives valid samples (verified via KZG proofs), it considers the block available.

PeerDAS allows the network to safely increase the blob target from 3 to potentially 64 or higher, scaling throughput by an order of magnitude while maintaining the decentralization requirement that nodes can run on consumer hardware.22
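
The custody idea can be illustrated with a deterministic column assignment derived from a node’s identity. The sketch below is purely illustrative and is not the EIP-7594 custody algorithm; the column count and custody requirement are made-up parameters.

```python
# Illustrative (non-spec) custody assignment: each node derives a small,
# deterministic, pseudo-random set of column indices from its node ID, so the
# erasure-coded columns end up spread across the peer set.
import hashlib

NUM_COLUMNS = 128        # hypothetical number of erasure-coded columns per block
CUSTODY_COUNT = 8        # hypothetical number of columns each node must keep

def custody_columns(node_id: bytes):
    cols, i = [], 0
    while len(cols) < CUSTODY_COUNT:
        digest = hashlib.sha256(node_id + i.to_bytes(8, "big")).digest()
        col = int.from_bytes(digest[:8], "big") % NUM_COLUMNS
        if col not in cols:
            cols.append(col)
        i += 1
    return sorted(cols)

print(custody_columns(b"example-node-id"))
```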

4.3 Networking Constraints and Challenges

The primary bottleneck for Full Danksharding is not cryptography, but networking. Propagating 32 MB blocks every 12 seconds requires immense bandwidth.

  • Discovery: Nodes must efficiently discover peers that hold the specific data samples they need.
  • Gossip: Traditional gossip protocols flood the network with messages. With millions of samples, this overhead is unsustainable.
  • Reconstruction: If a block is partially missing, the network must be able to perform distributed reconstruction, pulling shards from various nodes to rebuild the full block.

These challenges are being addressed through optimizations in the libp2p layer and new sub-protocols designed specifically for high-volume data sharding.22

5. Comparative Analysis of Data Availability Layers

The modular thesis has given rise to a competitive landscape of specialized Data Availability layers, each taking a distinct philosophical and technical approach to solving the DA problem. The three primary contenders—Celestia, Avail, and EigenDA—represent different trade-offs in the design space.

5.1 Celestia: The Optimistic Approach

Celestia acts as a sovereign Layer 1 blockchain optimized exclusively for data availability and ordering.

  • Proof Mechanism: Celestia uses Fraud Proofs and Namespaced Merkle Trees (NMTs). Data is erasure-coded using 2D Reed-Solomon codes. Light nodes perform sampling to verify availability. However, Celestia does not strictly prove that the erasure coding is correct at the time of block generation. Instead, it relies on an optimistic assumption: if a block is incorrectly encoded, a full node will generate a “Bad Encoding Fraud Proof” and broadcast it to the network.29
  • Implications: This design avoids the computational overhead of generating validity proofs (like KZG) but introduces a latency trade-off. Light nodes cannot be instantly certain of finality; they must wait for a network propagation window to ensure no fraud proof has been issued.
  • Economics: Celestia uses its own token (TIA) for data fees, creating an independent economic security model decoupled from Ethereum.

5.2 Avail: The Validity Proof Approach

Avail (formerly Polygon Avail) is a sovereign chain that prioritizes immediate cryptographic guarantees.

  • Proof Mechanism: Avail utilizes KZG Commitments and Validity Proofs. Like Celestia, it uses erasure coding and sampling. Unlike Celestia, every block includes KZG proofs that cryptographically certify that the data is correctly erasure-coded.
  • Implications: This approach eliminates the need for fraud proofs and the associated waiting period. A light node that samples data and verifies the KZG proofs has immediate, mathematical certainty that the data is available and retrievable. This creates a “thicker” light client with stronger security guarantees, suitable for high-speed bridging and mobile verification.29
  • Economics: Avail is secured by its own validator set and token (AVAIL), positioning itself as a neutral DA layer for multiple ecosystems (Ethereum, Polkadot, Cosmos).

5.3 EigenDA: The Restaking Approach

EigenDA is not a standalone blockchain but a set of smart contracts and a data availability committee (DAC) secured by Ethereum validators via EigenLayer.

  • Proof Mechanism: EigenDA uses KZG Commitments for data validity but differs in its storage model. It does not (in its initial iteration) rely on P2P sampling (DAS). Instead, it relies on a dispersed committee of nodes that sign attestations confirming they have stored the data.
  • Implications: By avoiding the consensus overhead of a blockchain (ordering transactions), EigenDA achieves massive throughput, targeting 10 MB/s or higher. Its security is derived from “Restaking”—validators stake ETH and subject it to slashing conditions defined by EigenLayer contracts.29
  • Economics: EigenDA aligns economically with Ethereum. Fees can be paid in ETH or other tokens, and security is rooted in the value of ETH, reducing the fragmentation of economic trust.

5.4 Comparison of Key Metrics

The following table summarizes the key architectural differences and performance metrics of the leading DA solutions.

| Metric | Ethereum (EIP-4844) | Celestia | Avail | EigenDA |
| --- | --- | --- | --- | --- |
| Primary Primitive | KZG Commitments | Merkle Trees (NMT) | KZG Commitments | KZG Commitments |
| Verification Model | Full download (current) / DAS (future) | DAS + fraud proofs | DAS + validity proofs | Committee attestations |
| Erasure Coding | 1D (current) / 2D (future) | 2D Reed-Solomon | 2D Reed-Solomon | 1D / dispersal |
| Finality Time | ~12 min (epoch finality) | ~15 sec (block) / ~10 min (DA certainty) | ~40 sec (block + DA) | ~12 min (ETH L1 finality) |
| Security Source | Ethereum stake (ETH) | Celestia stake (TIA) | Avail stake (AVAIL) | Restaked ETH |
| Cost Efficiency | Moderate (blob market) | High (SuperBlobs) | High | High (horizontal scaling) |

Cost Analysis: Recent data indicates a significant divergence in pricing. While Ethereum blobs reduced costs significantly, the open market for blobs can still experience congestion. Celestia, by specializing solely in DA, offers lower baseline costs. For instance, “SuperBlobs” on Celestia have demonstrated costs as low as $0.81 per MB, compared to ~$20 per MB for Ethereum blobs during congested periods.33 This 25x cost differential drives a market segmentation where high-value financial rollups stick to Ethereum for maximum security, while high-volume gaming or social chains migrate to alt-DA layers.

6. Advanced Topics and Future Outlook

The adoption of KZG commitments is not limited to Data Availability. It is part of a broader unification of Ethereum’s cryptographic infrastructure.

6.1 Verkle Trees and State Commitments

Ethereum is currently planning to replace its Merkle-Patricia Trie (used for storing account balances and state) with Verkle Trees. Verkle trees combine Vector Commitments (a derivative of KZG) with the tree structure.

  • The Problem: Merkle proofs are large. Proving the value of a single account in Ethereum’s state requires a proof of several kilobytes. This makes “Stateless Clients”—nodes that verify blocks without storing the entire 100 GB+ state—impossible.
  • The Solution: Verkle trees allow for massive “width” (e.g., 256 children per node) because the commitment to the children is a single KZG commitment. A proof of inclusion in a Verkle tree is constant-sized, regardless of the tree’s width (a rough size comparison follows this list).
  • Synergy: By moving both DA (blobs) and State (Verkle) to KZG-based schemes, Ethereum unifies its cryptography. A stateless client can verify the execution of a block (via Verkle witnesses) and the availability of data (via Blob KZG proofs) using the same libraries and trusted setup parameters.15
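
A rough comparison of per-key witness sizes, assuming a hexary Merkle-Patricia layout (15 sibling hashes of 32 bytes per level) versus a 256-ary Verkle layout with one 48-byte commitment per level. Real witnesses differ because of extension nodes and proof aggregation, so treat these as order-of-magnitude figures only.

```python
# Back-of-the-envelope witness sizes for ~2^32 state entries.
def depth(num_keys: int, fanout: int) -> int:
    """Smallest tree depth whose capacity covers num_keys leaves."""
    d, capacity = 0, 1
    while capacity < num_keys:
        capacity *= fanout
        d += 1
    return d

KEYS = 2**32                                   # rough number of state entries
mpt_bytes = depth(KEYS, 16) * 15 * 32          # 8 levels * 15 siblings * 32 B
verkle_bytes = depth(KEYS, 256) * 48           # 4 levels * one 48 B commitment

print(f"hexary Merkle witness ~{mpt_bytes} B, Verkle path ~{verkle_bytes} B per key")
```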

6.2 New Coding Paradigms: RLNC vs. Reed-Solomon

While Reed-Solomon (RS) codes are the industry standard for erasure coding, emerging research highlights potential efficiencies in Random Linear Network Coding (RLNC) and Polar Codes.

  • The Limitation of RS: Standard RS codes require a fixed rate of redundancy (e.g., 2x extension). The “Sampling by Indexing” paradigm limits light nodes to sampling from a pre-determined set of coded symbols.
  • The RLNC Advantage: RLNC introduces “Sampling by Coding.” Instead of pre-calculating the code, the coder generates coded symbols on-the-fly using random coefficients. Research suggests that a single RLNC sample is “more expressive” than an RS sample, potentially providing the same security guarantees with significantly fewer samples (e.g., 1 RLNC sample $\approx$ 73 RS samples). This could reduce the download bandwidth for light nodes by orders of magnitude, though it introduces higher computational complexity for the data provider.25

6.3 Quantum Resistance and Long-Term Risks

A fundamental risk associated with the shift to KZG is the vulnerability of elliptic curve cryptography to quantum computing. The security of KZG relies on the Discrete Logarithm assumption, which can be broken by Shor’s algorithm on a sufficiently powerful quantum computer.

  • The Threat: If a quantum computer were built today, it could forge KZG proofs, allowing an attacker to claim data is available when it is not, or to create invalid state transitions in Verkle trees.
  • Mitigation: Merkle trees (used by Celestia) are hash-based and generally considered quantum-resistant. Ethereum is exploring “post-quantum” replacements for KZG, such as STARK-based commitments (FRI), which rely only on hashes. However, these currently suffer from larger proof sizes. The roadmap implicitly bets that quantum computing is sufficiently far away to allow for a future migration to these newer primitives.16

7. Economic and Strategic Implications

The technological shift toward KZG-enabled modularity has profound economic ripple effects.

7.1 The Commoditization of Blockspace

Historically, blockspace was a scarce, premium asset. The move to modular DA layers turns blockspace into a commodity. With PeerDAS and competitors like EigenDA scaling to 10s of MB/s, the supply of blockspace is exploding.

  • The “Race to the Bottom”: Because data bytes are fungible (1 MB on Avail is functionally similar to 1 MB on Celestia), pricing power will erode. DA layers will likely compete on marginal cost, driving fees toward zero.
  • Value Capture: This suggests that DA layers may struggle to capture value solely through data fees. Value accrual may shift to the sequencing layer (MEV) or the settlement layer (where liquidity resides).

7.2 Security Fragmentation

The proliferation of DA layers introduces “Security Fragmentation.” A rollup that settles on Ethereum but uses Celestia for DA (a Validium) is only as secure as the weakest link. If Celestia’s validator set fails or halts, the funds on the Ethereum rollup become frozen. This breaks the “shared security” promise of the original rollup roadmap.

However, it enables a tiered ecosystem:

  1. High-Security Tier: Financial rollups using Ethereum Blobs (High cost, Max security).
  2. Mid-Security Tier: Consumer dApps using EigenDA (Restaked security).
  3. Sovereign Tier: Gaming chains using Celestia/Avail (Lowest cost, Independent security).

7.3 The “Thick” Client Revolution

Perhaps the most transformative implication of KZG and DAS is the empowerment of the end-user. In the monolithic era, only data centers could verify the chain. In the modular era, a smartphone running a light client can sample the network, verify KZG proofs, and check Verkle witnesses. This restores the “cypherpunk” vision of a trustless network where every user can independently verify the integrity of the ledger, regardless of the scale of the system.

8. Conclusion

The integration of KZG commitments into blockchain architecture represents a watershed moment in the history of decentralized systems. It marks the transition from a naive model of verification-by-replication to a sophisticated model of verification-by-mathematics. By condensing massive datasets into constant-sized proofs, KZG commitments solve the Data Availability paradox, enabling networks to scale throughput linearly with node count—a feat previously thought impossible.

The implementation of EIP-4844 has already demonstrated the economic power of this shift, slashing Layer 2 costs and stimulating a renaissance in rollup development. As the roadmap advances toward Full Danksharding and PeerDAS, and as competitors like Celestia and Avail push the boundaries of throughput, we are witnessing the construction of the “broadband era” of blockchain.

While challenges remain—particularly regarding the trusted setup, computational overhead, and long-term quantum resistance—the architectural victory of modularity is clear. The future of data availability is cryptographic, sampled, and exponentially scalable, paving the way for decentralized applications to finally rival the performance and cost-efficiency of centralized web infrastructure.

Works cited

  1. What Are Layer 2 Scaling Solutions? | Starknet, accessed on December 21, 2025, https://www.starknet.io/blog/layer-2-scaling-solutions/
  2. The Modular Blockchain Thesis – Medium, accessed on December 21, 2025, https://medium.com/@prezzel/the-modular-blockchain-thesis-bc7d11ed4e98
  3. How do layer 2 scaling solutions improve blockchain performance? – Quora, accessed on December 21, 2025, https://www.quora.com/How-do-layer-2-scaling-solutions-improve-blockchain-performance
  4. What Are Data Availability Layers? Why DA Is the Biggest Bottleneck in Web3 – Digitap app, accessed on December 21, 2025, https://digitap.app/news/guide/what-are-data-availability-layers
  5. Data Availability Layer – One of the critical layers to Blockchain Scalability & Security – rakeshgidwani.com, accessed on December 21, 2025, https://rakeshgidwani.com/data-availability-layer-one-of-the-critical-layers-to-blockchain-scalability-security/
  6. Research on Blockchain Data Availability and Storage Scalability – MDPI, accessed on December 21, 2025, https://www.mdpi.com/1999-5903/15/6/212
  7. Zero-Knowledge Proofs: KZG Polynomial Commitment and Verification | by Abhiveer Singh, accessed on December 21, 2025, https://medium.com/@abhiveerhome/zero-knowledge-proofs-kzg-polynomial-commitment-and-verification-5a82d62fdefd
  8. Danksharding and Proto-danksharding Explained – OneKey, accessed on December 21, 2025, https://onekey.so/blog/ecosystem/danksharding-and-proto-danksharding-explained/
  9. A comparison between DA layers – General – Celestia Forum, accessed on December 21, 2025, https://forum.celestia.org/t/a-comparison-between-da-layers/899
  10. Proto-danksharding: What It Is and How It Works | Galaxy, accessed on December 21, 2025, https://www.galaxy.com/insights/research/protodanksharding-what-it-is-and-how-it-works
  11. Comparison of Data Availability Solutions | by 0xemre – Medium, accessed on December 21, 2025, https://0xemre.medium.com/comparison-of-data-availability-solutions-ec89dbeb222e
  12. KZG (Kate-Zaverucha-Goldberg) Commitments | by Ankita Singh – Medium, accessed on December 21, 2025, https://medium.com/@aannkkiittaa/kzg-kate-zaverucha-goldberg-commitments-2e08b4fa3b4b
  13. Proto-Danksharding: EIP-4844, accessed on December 21, 2025, https://www.eip4844.com/
  14. Polynomial Commitments – Centre For Applied Cryptographic Research, accessed on December 21, 2025, https://cacr.uwaterloo.ca/techreports/2010/cacr2010-10.pdf
  15. What are Verkle Trees & KZG Commitments, and could they be applied on Bitcoin? | by shymaa arafat | Medium, accessed on December 21, 2025, https://medium.com/@shymaa.arafat/what-are-verkle-trees-kzg-commitments-and-could-they-be-applied-on-bitcoin-cbf4838d18ac
  16. A Simple Guide to KZG Commitments and Why Ethereum Needs Them to Scale, accessed on December 21, 2025, https://hackernoon.com/a-simple-guide-to-kzg-commitments-and-why-ethereum-needs-them-to-scale
  17. Kate-Zaverucha-Goldberg (KZG) Constant-Sized Polynomial …, accessed on December 21, 2025, https://alinush.github.io/kzg
  18. Implementing Trusted Setup Ceremony for Ethereum’s EIP-4844 – Reilabs, accessed on December 21, 2025, https://reilabs.io/blog/implementing-trusted-setup-ceremony-for-ethereums-eip-4844/
  19. KZG Ceremony: Participate and Help Build Ethereum – Consensys, accessed on December 21, 2025, https://consensys.io/blog/kzg-ceremony-participate-and-help-build-ethereum
  20. The EIP-4844: What is Proto-danksharding? | Crypto Academy – Finst, accessed on December 21, 2025, https://finst.com/en/learn/articles/what-is-proto-danksharding
  21. Understanding EIP-4844 and Proto-Danksharding: How It Impacts Ethereum Scaling | by Ege Yag | Medium, accessed on December 21, 2025, https://medium.com/@egeyag/understanding-eip-4844-and-proto-danksharding-how-it-impacts-ethereum-scaling-16e00067c754
  22. Data Availability in Ethereum: Proto Danksharding and PeerDAS | by Hans Vuong – Medium, accessed on December 21, 2025, https://medium.com/@vuonghuuhung2002/data-availability-in-ethereum-proto-danksharding-and-peerdas-96059847b387
  23. Impact Of EIP-4844 On Ethereum: What You Need To Know – Hacken.io, accessed on December 21, 2025, https://hacken.io/discover/eip-4844-explained/
  24. EIP-4844: The first step towards Ethereum full sharding | Foresight Ventures on Binance Square, accessed on December 21, 2025, https://www.binance.com/en/square/post/312297
  25. (PDF) From Indexing to Coding: A New Paradigm for Data …, accessed on December 21, 2025, https://www.researchgate.net/publication/395943737_From_Indexing_to_Coding_A_New_Paradigm_for_Data_Availability_Sampling
  26. EIP-7594 (PeerDAS) – Decentralized Finance | IQ.wiki, accessed on December 21, 2025, https://iq.wiki/wiki/eip-7594-peerdas
  27. PeerDAS and a 48 blob target in 2025 – Optimism, accessed on December 21, 2025, https://www.optimism.io/blog/peerdas-and-a-48-blob-target-in-2025
  28. Scaling Ethereum with PeerDAS and Distributed Blob Building – Sigma Prime, accessed on December 21, 2025, https://blog.sigmaprime.io/peerdas-distributed-blob-building.html
  29. Choosing Your Data Availability Layer – Celestia, Avail, and EigenDA Compared, accessed on December 21, 2025, https://www.eclipselabs.io/blogs/choosing-your-data-availability-layer-celestia-avail-eigenda-compared
  30. What is Celestia (TIA) : A Comprehensive Overview – Imperator.co, accessed on December 21, 2025, https://www.imperator.co/resources/blog/celestia-blockchain-presentation
  31. L2 Data Availability Layer: A Comparison of Celestia, EigenDA, and Avail | Technorely, accessed on December 21, 2025, https://technorely.com/insights/l-2-data-availability-layer-a-comparison-of-celestia-eigen-da-and-avail
  32. EigenDA vs. Celestia vs. Avail vs. NearDA: Who wins on cost & scale? – AMBCrypto, accessed on December 21, 2025, https://eng.ambcrypto.com/eigenda-vs-celestia-vs-avail-vs-nearda-who-wins-on-cost-scale/
  33. Data Availability Costs: Ethereum Blobs Vs. Celestia – Conduit, accessed on December 21, 2025, https://www.conduit.xyz/blog/data-availability-costs-ethereum-blobs-celestia/
  34. Verkle trie for Eth1 state – Dankrad Feist, accessed on December 21, 2025, https://dankradfeist.de/ethereum/2021/06/18/verkle-trie-for-eth1.html
  35. Verkle trees, accessed on December 21, 2025, https://vitalik.eth.limo/general/2021/06/18/verkle.html
  36. From Indexing to Coding: A New Paradigm for Data Availability Sampling – arXiv, accessed on December 21, 2025, https://www.arxiv.org/pdf/2509.21586