Decentralized Compute Network Development

We design and build blockchain solutions end to end: from smart contract architecture to launching DeFi protocols, NFT marketplaces, and crypto exchanges. We also handle security audits, tokenomics, and integration with existing infrastructure.

Development of a Decentralized Compute Network

Centralized clouds (AWS, GCP, Azure) control 65% of the global cloud computing market. Three companies decide who gets infrastructure access, at what price, and on what terms. For most applications this is acceptable. For AI inference, rendering, scientific computing, and any workload where price, censorship resistance, or geographic distribution matters, it is not. A decentralized compute network builds a market between those with excess GPU/CPU capacity and those who need it.

The problem is that building such a market is technically harder than it looks. You need to solve three fundamental problems: verify that a computation was actually executed correctly; protect against dishonest providers and clients; and deliver performance comparable to centralized alternatives.

Computation Verification: The Central Technical Problem

Three Approaches and Their Trade-offs

A provider claims it ran your task and got result X. How does the contract verify this without redoing the computation itself? If the contract recomputes, you pay the compute cost twice. This is the core verification problem.

Optimistic execution with a challenge period. A result is accepted as valid if no one disputes it within a window (usually 7 days). Upon dispute, a verification game begins: both sides alternately narrow the disagreement down to a single computation step that can be verified on-chain. This is the Truebit approach; an adapted variant is used in Arbitrum for EVM disputes.

Key parameters: the provider's stake size (must exceed the profit from fraud), the challenge window length (a trade-off between security and speed), and the number of verifiers in the game. Weaknesses: collusion between client and provider, or a challenge window poorly chosen for the task class.

Trusted Execution Environment (TEE). The computation happens in an isolated enclave (Intel SGX, AMD SEV, ARM TrustZone). A hardware attestation mechanism proves that specific code executes in a specific environment without operator intervention. The contract verifies the attestation on-chain.

interface ITEEVerifier {
    // mrenclave — unique hash of code in enclave
    // report — signed Intel IAS report
    function verifyAttestation(
        bytes32 mrenclave,
        bytes calldata report,
        bytes calldata signature
    ) external view returns (bool);
}

Used in iExec (TEE tasks), Phala Network (Phat Contracts), and Marlin Protocol. Weaknesses: Intel SGX has had several serious vulnerabilities (Spectre/Meltdown variants, SGAxe); dependence on the hardware manufacturer; and supply-chain complexity when verifying equipment.

Cryptographic verification via ZK-proofs. The provider generates a ZK-proof that it executed the computation correctly on the input data. An on-chain verifier checks the proof in O(1) time regardless of computation complexity. This is the strongest approach in terms of guarantees and the most expensive in proof-generation overhead.

For simple deterministic tasks (hashing, basic arithmetic): Groth16 or PLONK via circom. For complex computations: a zkVM (RISC Zero, SP1) — write the program in Rust, and the zkVM generates a proof of its execution. Proof generation on RISC Zero costs $0.01 to $1 depending on complexity and takes 10–300 seconds.

In practice, most 2024–2025 protocols use a hybrid: TEE for primary verification (fast, cheap) plus an optimistic challenge for cases where TEE is unavailable or compromised.
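The hybrid policy can be stated as a small scheduler rule: prefer hardware attestation when it is available and verified, otherwise fall back to staked optimistic execution. A sketch with hypothetical field names:

```go
package main

import "fmt"

// VerificationMode mirrors the hybrid strategy: prefer TEE attestation,
// fall back to an optimistic challenge window.
type VerificationMode int

const (
	ModeTEE        VerificationMode = iota // hardware attestation, no challenge window
	ModeOptimistic                         // stake + challenge period
)

// Provider capabilities as seen by the scheduler (fields hypothetical).
type Provider struct {
	HasSGX        bool
	AttestationOK bool // last attestation verified against Intel's attestation service
}

// chooseMode picks the cheapest verification path that is still sound
// for the given provider.
func chooseMode(p Provider) VerificationMode {
	if p.HasSGX && p.AttestationOK {
		return ModeTEE
	}
	return ModeOptimistic
}

func main() {
	fmt.Println(chooseMode(Provider{HasSGX: true, AttestationOK: true}))  // prints 0 (TEE)
	fmt.Println(chooseMode(Provider{HasSGX: true, AttestationOK: false})) // prints 1 (optimistic)
}
```

The design point: a compromised or stale attestation does not reject the provider outright; it only forces the task onto the slower, collateral-backed path.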

Computation Determinism

For any verification method, the computation must be deterministic: the same code with the same inputs must produce a bit-identical result on any hardware. This is not a trivial requirement.

Problems: floating-point operations produce different results across CPU architectures and compilers; GPU computations are non-deterministic by default due to parallelism; multithreading introduces race conditions; code may depend on system time or randomness.

Solutions: run computations in WebAssembly (deterministic by spec); use integer arithmetic instead of floats; use deterministic ML frameworks (fixed seeds, non-deterministic parallelism disabled); isolate containers with a fixed environment (Docker images pinned by SHA256).
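The "integer arithmetic instead of floats" fix can be shown concretely: a fixed-point multiply with explicit truncation is fully specified by the language, so every architecture and compiler produces the same bits. The scale factor below is chosen for illustration only:

```go
package main

import "fmt"

// Fixed-point value with 6 decimal places: 1.0 is represented as 1_000_000.
// Integer semantics are fully defined by the Go spec, which gives exactly
// the bit-identical reproducibility a verifiable computation needs.
const scale = 1_000_000

// mulFixed multiplies two fixed-point values, truncating toward zero.
// Callers must keep |a*b| below 2^63 to avoid int64 overflow.
func mulFixed(a, b int64) int64 {
	return a * b / scale
}

func main() {
	x := int64(1_500_000) // 1.5 in fixed-point
	y := int64(333_333)   // 0.333333 in fixed-point
	// Identical on every platform, unlike 1.5 * 0.333333 in floating point,
	// whose rounding can vary with compiler flags and FPU behavior.
	fmt.Println(mulFixed(x, y)) // prints 499999, i.e. 0.499999
}
```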

Protocol Architecture

Matching and Scheduling Layer

A compute market requires an efficient task-provider matching mechanism. Two main approaches:

On-chain order book. A client creates a task with a price; a provider accepts it. Transparent, but slow (block latency) and expensive (gas per operation). Suits long-running tasks where matching latency is not critical.

Off-chain matching with on-chain settlement. A separate matching layer (centralized or an operator set) pairs tasks with providers, and the result is confirmed on-chain. Fast, but adds a trusted component. iExec and Akash use this approach.
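The core of an off-chain matcher is a selection rule over provider offers; the pairing it produces is what later gets confirmed on-chain. A minimal sketch, with field names that are illustrative rather than any protocol's actual schema:

```go
package main

import "fmt"

// Offer is a provider's standing ask as the matcher sees it.
type Offer struct {
	Provider string
	Price    uint64 // asking price in protocol tokens
	Stake    uint64 // provider's posted collateral
	Category uint64 // machine size class (CPU/RAM/GPU)
}

// Request is the client side: price ceiling, stake floor, machine class.
type Request struct {
	MaxPrice uint64
	MinStake uint64
	Category uint64
}

// match returns the cheapest eligible offer. Filtering by stake first
// means price competition only happens among adequately collateralized
// providers.
func match(req Request, offers []Offer) (Offer, bool) {
	var best Offer
	found := false
	for _, o := range offers {
		if o.Category != req.Category || o.Price > req.MaxPrice || o.Stake < req.MinStake {
			continue
		}
		if !found || o.Price < best.Price {
			best, found = o, true
		}
	}
	return best, found
}

func main() {
	offers := []Offer{
		{Provider: "alice", Price: 90, Stake: 500, Category: 2},
		{Provider: "bob", Price: 70, Stake: 100, Category: 2}, // cheapest, but under-staked
		{Provider: "carol", Price: 80, Stake: 400, Category: 2},
	}
	got, ok := match(Request{MaxPrice: 100, MinStake: 300, Category: 2}, offers)
	fmt.Println(ok, got.Provider) // prints: true carol
}
```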

Task structure at contract level:

struct ComputeTask {
    bytes32 taskId;
    address client;
    bytes32 appHash;          // IPFS CID of the Docker image
    bytes32 datasetHash;      // IPFS CID of the input data
    uint256 maxComputePrice;  // in protocol tokens
    uint256 trust;            // required replication level
    uint64  category;         // machine size class (CPU/RAM/GPU)
    uint256 timeout;
    TaskStatus status;
}

enum TaskStatus { Unset, Active, Revealing, Finalizing, Completed, Failed }

Replication to Increase Trust

For high-value tasks, run the computation on N providers and verify the results by consensus. With 3 providers and a majority rule, a successful attack requires 2-of-3 collusion: if each provider is independently dishonest with probability p (and dishonest providers agree on the same wrong answer), the majority is wrong with probability 3p²(1 − p) + p³, which at p = 0.05 is about 0.7%.

Commit-reveal scheme: providers first publish a hash of their result (the commitment); after all have committed, they reveal the results. This prevents providers from copying answers from each other.

// Phase 1: Commit
function submitResultHash(bytes32 taskId, bytes32 resultHash) external {
    ComputeTask storage task = tasks[taskId];
    require(isTaskProvider(taskId, msg.sender), "Not assigned provider");
    require(task.status == TaskStatus.Active, "Wrong status");
    resultCommitments[taskId][msg.sender] = resultHash;
    emit ResultCommitted(taskId, msg.sender);
}

// Phase 2: Reveal
function revealResult(bytes32 taskId, bytes calldata result) external {
    bytes32 commitment = resultCommitments[taskId][msg.sender];
    require(commitment != bytes32(0), "No commitment");
    require(keccak256(result) == commitment, "Hash mismatch");
    revealedResults[taskId][msg.sender] = result;
    _tryFinalize(taskId);
}

Economic Layer

A token model for a decentralized compute network typically includes:

| Parameter          | Typical Range     | Purpose          |
| ------------------ | ----------------- | ---------------- |
| Provider stake     | 5–100 RLC / USDC  | Fraud collateral |
| Slash on violation | 10–50% of stake   | Deterrence       |
| Protocol fee       | 1–5%              | Treasury         |
| Scheduler fee      | 1–3%              | Matching layer   |

Important nuance: the token must be used to pay for compute (utility), not just for governance. Pure governance tokens in compute markets do not create sufficient demand-side pressure.
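Settlement math under this model is simple integer bookkeeping. A sketch using mid-range values from the table above (the specific rates are illustrative, not any protocol's parameters); basis points keep the math in integers, sidestepping the float determinism issues discussed earlier:

```go
package main

import "fmt"

// Rates in basis points (100 bps = 1%), picked from the middle of the
// typical ranges in the table above — assumptions, not a real protocol.
const (
	protocolFeeBps  = 300  // 3% to the treasury
	schedulerFeeBps = 200  // 2% to the matching layer
	slashBps        = 3000 // 30% of stake slashed on violation
)

// settle splits a task payment between treasury, scheduler, and provider.
// The provider gets the remainder, so the three shares always sum to the
// full payment with no rounding dust.
func settle(payment uint64) (protocol, scheduler, provider uint64) {
	protocol = payment * protocolFeeBps / 10_000
	scheduler = payment * schedulerFeeBps / 10_000
	provider = payment - protocol - scheduler
	return
}

// slashAmount computes the stake portion burned or redistributed on a
// proven violation.
func slashAmount(stake uint64) uint64 {
	return stake * slashBps / 10_000
}

func main() {
	p, s, w := settle(1_000_000)
	fmt.Println(p, s, w)              // prints: 30000 20000 950000
	fmt.Println(slashAmount(500_000)) // prints: 150000
}
```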

End-to-End Task Workflow

  1. The client uploads a Docker image to IPFS and receives its content hash.
  2. The client creates a task on-chain: app hash, dataset hash, parameters, deposit.
  3. The matching layer or scheduler assigns provider(s).
  4. The provider downloads the image and data, executes the task in a TEE or standard container, and generates a result plus attestation/proof.
  5. The provider publishes a commitment on-chain.
  6. After reveal and consensus, the result is finalized and the provider is paid.
  7. The client downloads the result from IPFS.
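Steps 5–6 hinge on the commitment matching the later reveal. A client- or provider-side helper for that check looks like this; note that the contract shown earlier uses keccak256, while sha256 here is a standard-library stand-in to keep the sketch dependency-free — a real implementation must use the same hash function as the contract:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// commit hashes a raw result exactly as the provider would before
// publishing the commitment in step 5. (sha256 stands in for the
// contract's keccak256 — see the caveat above.)
func commit(result []byte) [32]byte {
	return sha256.Sum256(result)
}

// revealMatches is the check the contract performs during reveal:
// the revealed bytes must hash to the earlier commitment.
func revealMatches(commitment [32]byte, revealed []byte) bool {
	return sha256.Sum256(revealed) == commitment
}

func main() {
	result := []byte("inference output v1")
	c := commit(result)
	fmt.Println(revealMatches(c, result))                  // prints true
	fmt.Println(revealMatches(c, []byte("tampered data"))) // prints false
}
```

Because only the hash is on-chain until everyone has committed, a lagging provider learns nothing it can copy from its peers.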

Off-Chain Network Components

Worker node: the provider's main component. A daemon that monitors the blockchain for new tasks, downloads and executes the workload, and publishes results. Stack: Go or Rust, Docker SDK for container isolation, SGX SDK for TEE tasks.

Scheduler / Core: for protocols with off-chain matching. Responsible for task categorization, provider selection by stake and reputation, and timeout monitoring. Can be decentralized via BFT consensus across a set of scheduler nodes.

Result storage: IPFS with pinning; the IPFS link is stored on-chain. For confidential results: store the encrypted result in IPFS and pass the decryption key over TLS into the TEE.

Development and Launch

Design phase (2–3 weeks). Choose the verification mechanism (TEE / optimistic / ZK), define task categories, and design the token economics. These decisions are irreversible after deployment.

Smart contracts (6–10 weeks). TaskRegistry, WorkerRegistry, Escrow, a Consensus module, and a Voucher system (for task sponsorship). Foundry for testing, including fork tests with real TEE attestation data.

Worker node (4–8 weeks). A Go/Rust daemon with Docker integration; if TEE is used, integration with the Intel DCAP attestation API. Critical: the node must correctly handle timeouts, hung tasks, and network partitions.

Real hardware testing (4–6 weeks). A testnet with real worker nodes, load testing of the matching layer, and verification that slashing fires against a dishonest worker.

Audit (4–8 weeks). Focus areas: consensus correctness under Byzantine providers, possibilities for manipulating the challenge game, and flash-loan attacks on the staking mechanism.

Full cycle from design to a mainnet MVP (TEE-based, no ZK): 6–10 months with 3–5 engineers.