Custom Data Availability Layer Development



Data availability is a problem most blockchain developers underestimate until they hit it in production. The classic formulation: how can a light client convince itself that the block producer actually published all transaction data, without downloading the entire block? If the data is not available, you cannot verify fraud proofs, cannot reconstruct state, and cannot build a rollup on top of the layer. DA is the foundation of trust in the entire system.

Why a Custom DA Layer

Existing solutions (Celestia, EigenDA, Avail, Ethereum's danksharding roadmap: EIP-4844 today, full danksharding later) cover most needs. A custom DA layer is warranted only in specific cases:

  • Performance requirements exceed what public DA networks offer (Celestia: ~8 MB/block; EigenDA: up to several MB/s per operator set)
  • Private data: public DA layers publish data openly; enterprise/permissioned applications need a different model
  • Specialized encoding scheme for specific data types (e.g., ZK-proof batches with a particular structure)
  • Sovereignty: unwillingness to depend on a third-party network and its tokenomics
  • Specific finality guarantees that differ from existing solutions

Theoretical Foundation: Data Availability Sampling

The key innovation of modern DA layers is DAS (Data Availability Sampling). Instead of downloading the entire block, a light client randomly samples a small number of chunks. If every sampled chunk turns out to be available, then with high probability the entire block is available.

This works thanks to erasure coding: block data is encoded with 2x redundancy (e.g., 1 MB of data → 2 MB of coded chunks), so the original data can be recovered from any 50% of the chunks. To make data unrecoverable, a block producer therefore has to withhold more than 50% of all chunks, and that is exactly what random sampling detects with high probability.
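The detection guarantee is easy to quantify: each independent sample hits a withheld chunk with probability equal to the withheld fraction, so a producer hiding 50% of chunks survives s samples with probability only 2⁻ˢ. A minimal sketch (the function name is illustrative):

```rust
/// Probability that `samples` independent random chunk queries detect a
/// producer withholding fraction `withheld` of the extended chunks.
/// Each query lands outside the withheld set with probability (1 - withheld).
pub fn detection_probability(withheld: f64, samples: u32) -> f64 {
    1.0 - (1.0 - withheld).powi(samples as i32)
}
```

With 30 samples against a 50% withholding, detection probability already exceeds 1 − 10⁻⁹, which is why light clients can get near-full-node assurance from a few kilobytes of traffic.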

Reed-Solomon vs. 2D Erasure Coding

Simple Reed-Solomon works in one dimension. Celestia uses 2D Reed-Solomon: data is arranged in a matrix and encoded along both rows and columns. This localizes fraud proofs: a node can prove that a specific row or column is incorrectly encoded without downloading the entire block.

Original data (k×k matrix):
[d00 d01 ... ]
[d10 d11 ... ]
...

After 2D RS extension (2k×2k):
[d00 d01 ... | p00 p01 ...]   ← rows extended with row parity
[d10 d11 ... | p10 p11 ...]
...
[q00 q01 ... | r00 r01 ...]   ← column parity (left) and parity-of-parity (right)
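The payoff of the 2D scheme shows up in fraud-proof size: proving one row (or column) is mis-encoded requires only the 2k chunks of that extended line (plus Merkle paths), not the whole 2k×2k block. A quick size comparison for an original k×k square (illustrative helper):

```rust
/// Chunks needed for a row-wise fraud proof vs. the full extended block,
/// for an original k x k data square extended to 2k x 2k.
pub fn fraud_proof_chunks(k: usize) -> (usize, usize) {
    let row_proof = 2 * k;      // one extended row of the 2k x 2k square
    let full_block = 4 * k * k; // the entire extended block
    (row_proof, full_block)
}
```

For k = 128 that is 256 chunks against 65,536: the proof shrinks from the whole block to a single line, which is what makes fraud proofs cheap enough for light clients to check.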

KZG Polynomial Commitments

For efficient DA proofs you need a mechanism to prove that a specific chunk belongs to a specific block without downloading the whole block. KZG commitments (Kate-Zaverucha-Goldberg, used in EIP-4844) are the standard choice:

  1. Block data is interpreted as polynomial p(x) of degree n-1
  2. Commitment C = [p(τ)]₁ is a point on BLS12-381 elliptic curve (48 bytes)
  3. Proof for specific value p(z) = y is another point (48 bytes)
  4. Verification: pairing check e(C - [y]₁, [1]₂) = e(π, [τ - z]₂)

Proof size is constant (48 bytes) regardless of data size. Verification is one pairing operation (~2ms on modern hardware).

For a custom DA layer, a KZG commitment module on top of arkworks might look like the following sketch (APIs as in arkworks 0.4; exact trait and module paths vary between versions, and bytes_to_field_elements is left abstract):

use ark_bls12_381::{Bls12_381, Fr};
use ark_poly::{univariate::DensePolynomial, DenseUVPolynomial};
use ark_poly_commit::kzg10::{Commitment, KZG10, Powers, Proof, Randomness, VerifierKey};
use ark_poly_commit::PCRandomness;

type UniPoly = DensePolynomial<Fr>;
type Kzg = KZG10<Bls12_381, UniPoly>;

pub struct DALayer {
    powers: Powers<'static, Bls12_381>, // prover key from the trusted setup
    vk: VerifierKey<Bls12_381>,         // verifier key
}

impl DALayer {
    /// Commit to a blob. Note: this sketch encodes data as polynomial
    /// *coefficients*; production systems (e.g., EIP-4844) store chunks as
    /// *evaluations*, so that "chunk i" equals p(ω^i) by construction.
    pub fn commit_blob(&self, data: &[u8]) -> (Commitment<Bls12_381>, Vec<Fr>) {
        let scalars = bytes_to_field_elements(data); // pack bytes into field elements
        let poly = UniPoly::from_coefficients_vec(scalars.clone());
        // No hiding bound or RNG: DA data is public, so blinding is unnecessary.
        let (commitment, _) = Kzg::commit(&self.powers, &poly, None, None).unwrap();
        (commitment, scalars)
    }

    /// Opening proof that p(index) has a particular value (48 bytes).
    pub fn generate_proof(&self, data: &[Fr], index: usize) -> Proof<Bls12_381> {
        let poly = UniPoly::from_coefficients_vec(data.to_vec());
        let point = Fr::from(index as u64);
        Kzg::open(&self.powers, &poly, point, &Randomness::empty()).unwrap()
    }

    /// One pairing check: e(C - [y]₁, [1]₂) = e(π, [τ - z]₂).
    pub fn verify_chunk(
        &self,
        commitment: &Commitment<Bls12_381>,
        index: usize,
        value: Fr,
        proof: &Proof<Bls12_381>,
    ) -> bool {
        let point = Fr::from(index as u64);
        Kzg::check(&self.vk, commitment, point, value, proof).unwrap()
    }
}

Network Level: P2P and Storage

Data Distribution

Data must be spread across enough nodes that the failure of any minority of them does not make the data unavailable. A typical scheme:

  • Producer (sequencer/block producer) cuts block into chunks, creates commitments
  • Chunks are distributed to DA nodes via P2P gossip (libp2p recommended)
  • Each DA node stores a subset of chunks and attests to their availability
  • Light clients do DAS by selecting random nodes for sampling
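The light-client side of this scheme reduces to a sampling loop. In the sketch below, `fetch` stands in for the network query to a DA node; the toy LCG index generator is a placeholder, since a real client must use a CSPRNG so the producer cannot predict which chunks will be sampled:

```rust
/// Minimal DAS client loop: query `samples` random chunk indices out of
/// `total_chunks`; `fetch(index)` returns true iff the chunk was served
/// and its KZG/Merkle proof verified.
pub fn sample_availability<F>(total_chunks: u64, samples: u32, seed: u64, mut fetch: F) -> bool
where
    F: FnMut(u64) -> bool,
{
    assert!(total_chunks > 0);
    let mut state = seed;
    for _ in 0..samples {
        // Toy linear congruential generator; replace with a CSPRNG in production.
        state = state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let index = state % total_chunks;
        if !fetch(index) {
            return false; // a missing sample: treat the block as unavailable
        }
    }
    true // all samples served: block available with probability ~1 - 2^-samples
}
```

A client that gets `false` here refuses to accept the block, which is precisely the pressure that forces producers to actually publish the data.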

Availability Attestations

DA nodes publish signed attestations: "I store chunks X for block Y". These attestations are aggregated into a Data Availability Certificate (DAC). In private (enterprise) DA layers, threshold signatures from a trusted committee suffice; in public ones a more robust mechanism is needed, since the committee cannot be trusted a priori.

pub struct DAAttestation {
    pub block_hash: [u8; 32],
    pub block_height: u64,
    pub chunk_indices: Vec<u32>,  // which chunks this node stores
    pub timestamp: u64,
    pub signature: BLSSignature,  // BLS for aggregation
}

pub struct DAC {
    pub block_hash: [u8; 32],
    pub aggregate_signature: BLSAggregatedSignature,
    pub signer_bitfield: BitVec,  // which committee members signed
    pub threshold_met: bool,
}

BLS signatures are chosen deliberately: BLS supports signature aggregation—n signatures aggregate into one, critical for efficiency with large committees.
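Aggregation itself is a single group operation inside the BLS library; what the DAC consumer has to implement is the quorum rule over the signer bitfield. A sketch, assuming the bitfield order matches the committee order:

```rust
/// Check that the signers recorded in `bitfield` (one bit per committee
/// member) meet a `threshold_num/threshold_den` quorum of `committee_size`.
/// Cryptographic validity of the aggregate signature is checked separately
/// by the BLS library; this function only enforces the quorum rule.
pub fn threshold_met(
    bitfield: &[bool],
    committee_size: usize,
    threshold_num: usize,
    threshold_den: usize,
) -> bool {
    if bitfield.len() != committee_size {
        return false; // malformed certificate
    }
    let signers = bitfield.iter().filter(|b| **b).count();
    // Cross-multiplied integer comparison avoids floating point:
    // signers / committee_size >= threshold_num / threshold_den
    signers * threshold_den >= committee_size * threshold_num
}
```

Keeping the quorum check in integers matters on-chain as well, where the same rule is typically re-verified without floating-point arithmetic.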

Integration with Rollup

A DA layer is never a standalone product: it integrates with a rollup or another system. The rollup sequencer must:

  1. After forming a batch, send its data to the DA layer
  2. Wait for the DAC (Data Availability Certificate)
  3. Include the DAC hash in the L1 state root commitment
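On the sequencer side, the three steps above can be sketched with hypothetical interfaces (`DAClient` and `L1Bridge` are assumptions for illustration, not a real API):

```rust
/// Hypothetical interface to the DA layer (not a real API).
pub trait DAClient {
    /// Submit batch data; returns the 48-byte commitment to the blob.
    fn submit(&self, batch: &[u8]) -> [u8; 48];
    /// Block until the committee's availability certificate is ready.
    fn wait_for_dac(&self, commitment: &[u8; 48]) -> Vec<u8>;
}

/// Hypothetical bridge to the rollup's L1 contract (not a real API).
pub trait L1Bridge {
    fn submit_batch(&self, state_root: [u8; 32], commitment: [u8; 48], dac: &[u8]);
}

/// Steps 1-3: submit data, wait for the DAC, commit on L1.
/// Returns the DAC so the caller can persist it.
pub fn post_batch<D: DAClient, L: L1Bridge>(
    da: &D,
    l1: &L,
    state_root: [u8; 32],
    batch: &[u8],
) -> Vec<u8> {
    let commitment = da.submit(batch);      // step 1: send data to the DA layer
    let dac = da.wait_for_dac(&commitment); // step 2: wait for the DAC
    l1.submit_batch(state_root, commitment, &dac); // step 3: include in L1 commitment
    dac
}
```

The blocking `wait_for_dac` is the critical design point: the state root must not reach L1 before the committee has certified availability, otherwise the rollup can commit to data no one can retrieve.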

The rollup's L1 contract verifies a valid DAC before accepting a state root. A sketch (the IDALayerVerifier interface and state variables are illustrative):

pragma solidity ^0.8.20;

interface IDALayerVerifier {
    function verifyDAC(
        bytes32 dataCommitment,
        bytes calldata dacCertificate,
        uint256 threshold
    ) external view returns (bool);
}

contract RollupWithCustomDA {
    IDALayerVerifier public daVerifier;
    // e.g. 2/3 of the committee, expressed in basis points (illustrative)
    uint256 public constant REQUIRED_THRESHOLD = 6667;
    uint256 public currentBatchId;
    mapping(uint256 => bytes32) public pendingRoots;

    event BatchSubmitted(uint256 indexed batchId, bytes32 stateRoot, bytes32 dataCommitment);

    struct BatchCommitment {
        bytes32 stateRoot;
        bytes32 dataCommitment;  // KZG commitment to batch data
        bytes dacCertificate;    // aggregated BLS signature of the committee
    }

    function submitBatch(BatchCommitment calldata batch) external {
        // verify that the DA committee attested to data availability
        require(
            daVerifier.verifyDAC(
                batch.dataCommitment,
                batch.dacCertificate,
                REQUIRED_THRESHOLD
            ),
            "DA not certified"
        );

        // accept the state root only if the data is available
        pendingRoots[currentBatchId] = batch.stateRoot;
        emit BatchSubmitted(currentBatchId, batch.stateRoot, batch.dataCommitment);
        currentBatchId++;
    }
}

Comparison of Approaches

Approach                  Throughput           Security guarantees        Complexity
Ethereum calldata         ~375 KB/block        Full Ethereum security     Low
EIP-4844 blobs            ~768 KB/block        Full Ethereum security     Low
Celestia                  ~8 MB/block          DAS + fraud proofs         Medium
EigenDA                   Tens of MB/s         EigenLayer restaking       Medium
Custom DA (committee)     Depends on config    Trusted committee          High
Custom DA (DAS + KZG)     Scales with nodes    Cryptographic guarantees   Very high

Custom DA with DAS and KZG is the most complex option on this list. For most projects, Celestia or EigenDA provide the necessary throughput with proven security. A custom DA layer is justified only when specific requirements for privacy, performance, or sovereignty cannot be met by existing solutions.