Akash Network Decentralized Compute Integration


Akash Network is a Cosmos-SDK-based marketplace for cloud computing: providers offer GPU/CPU capacity, clients deploy workloads described by SDL manifests, and payment is settled in AKT. Its key distinction from competitors is support for arbitrary Docker containers without a custom runtime, which dramatically lowers the barrier to migrating existing applications.

Typical integration use cases: deploying AI inference services (LLMs, Stable Diffusion) with crypto payment; a decentralized backend for a DApp; backup compute capacity during peak load; running blockchain nodes and indexers.

SDL Manifest: Details That Matter

Stack Definition Language (SDL) is the YAML format that describes a deployment on Akash. At first glance it resembles docker-compose, but there are critical differences that will break a deployment if you don't know about them beforehand.

---
version: "2.0"

services:
  inference-api:
    image: your-org/llm-inference@sha256:abc123...  # pin by digest, not tag
    expose:
      - port: 8080
        as: 80
        to:
          - global: true
    env:
      - MODEL_PATH=/models/llama-7b
      - MAX_CONCURRENT=4
    params:
      storage:
        models:
          mount: /models          # attach the persistent volume declared below

profiles:
  compute:
    inference-api:
      resources:
        cpu:
          units: 4
        memory:
          size: 16Gi
        storage:
          - name: models
            size: 50Gi
            attributes:
              persistent: true    # critical for stateful data
              class: beta3        # NVMe-backed storage class
        gpu:
          units: 1
          attributes:
            vendor:
              nvidia:
                - model: rtx3090  # specify a concrete model
  placement:
    dcloud:
      pricing:
        inference-api:
          denom: uakt
          amount: 1000  # max price in uAKT per block

deployment:
  inference-api:
    dcloud:
      profile: inference-api
      count: 1

SDL Pitfalls:

The persistent: true attribute is mandatory for any data that must survive a container restart; without it the storage is ephemeral and everything is lost on restart. Class beta3 (NVMe) is significantly faster than beta2 (HDD), and the IOPS difference is critical for databases and ML models.

Pinning images by digest instead of by tag is mandatory practice. Providers cache images, so latest can resolve to different versions at different providers and gives no reproducibility guarantees.

GPU resources are available only from a subset of providers. Specifying a concrete model (nvidia rtx3090, a100) narrows the pool of eligible providers; if no model is specified, Akash selects any provider with an available GPU.
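The digest-pinning rule is easy to enforce programmatically as a pre-flight check before an SDL is submitted. A minimal sketch — the function name and regex are ours, not part of any Akash SDK:

```typescript
// Reject image references that are not pinned by digest.
// A digest-pinned reference looks like: repo/name@sha256:<64 hex chars>
export function isDigestPinned(imageRef: string): boolean {
  return /@sha256:[0-9a-f]{64}$/.test(imageRef);
}

// isDigestPinned("your-org/llm-inference@sha256:" + "ab".repeat(32))  → true
// isDigestPinned("your-org/llm-inference:latest")                     → false
```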

Programming Integration via Akash SDK

To automate deployments from your own application, use the Akash JavaScript SDK (@akashnetwork/akash-api) together with CosmJS, or make direct calls through a Cosmos REST/RPC client.

import { Registry, DirectSecp256k1HdWallet } from "@cosmjs/proto-signing";
import { SigningStargateClient } from "@cosmjs/stargate";
import { MsgCreateDeployment } from "@akashnetwork/akash-api/akash/deployment/v1beta3";

const AKASH_RPC = "https://rpc.akashnet.net:443";
const AKASH_DENOM = "uakt";

async function createDeployment(sdlContent: string, walletMnemonic: string) {
    const wallet = await DirectSecp256k1HdWallet.fromMnemonic(walletMnemonic, {
        prefix: "akash",
    });

    const [account] = await wallet.getAccounts();

    // Register the Akash message type so the client can encode it
    const registry = new Registry();
    registry.register("/akash.deployment.v1beta3.MsgCreateDeployment", MsgCreateDeployment);

    const client = await SigningStargateClient.connectWithSigner(AKASH_RPC, wallet, {
        registry,
    });

    // dseq must be unique per owner; the CLI uses the current block height,
    // a Unix timestamp works as well
    const dseq = Math.floor(Date.now() / 1000);

    const msg = {
        typeUrl: "/akash.deployment.v1beta3.MsgCreateDeployment",
        value: MsgCreateDeployment.fromPartial({
            id: {
                owner: account.address,
                dseq: BigInt(dseq),
            },
            groups: parseSDLGroups(sdlContent),  // parse the SDL into proto structures
            deposit: { denom: AKASH_DENOM, amount: "5000000" },  // 5 AKT deposit
        }),
    };

    const result = await client.signAndBroadcast(
        account.address,
        [msg],
        { amount: [{ denom: AKASH_DENOM, amount: "20000" }], gas: "800000" }
    );

    return { dseq, txHash: result.transactionHash };
}

After the deployment is created, the bidding process begins: providers submit bids, and the client selects the best one and creates a lease. This is an asynchronous process, so you need to subscribe to blockchain events via WebSocket or poll for bids.

async function watchBidsAndCreateLease(
    dseq: number,
    ownerAddress: string,
    wallet: DirectSecp256k1HdWallet,
    sdlContent: string
) {
    // Wait for bids (usually 15–60 seconds); pollBids is a helper that
    // queries the chain for open bids on this dseq
    const bids = await pollBids(dseq, ownerAddress, { timeoutMs: 120000 });

    if (bids.length === 0) throw new Error("No bids received");

    // Select the provider by price (or filter by attributes)
    const bestBid = bids.sort((a, b) =>
        Number(a.bid.price.amount) - Number(b.bid.price.amount)
    )[0];

    // Create a lease with the selected provider (broadcasts MsgCreateLease)
    await createLease(bestBid.bid.bidId, wallet);

    // Send the manifest to the provider directly via its REST API
    await sendManifestToProvider(bestBid.bid.bidId.provider, dseq, sdlContent);
}
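A pollBids helper can be sketched against the chain's REST (LCD) endpoint. The base URL and query path follow the gRPC-gateway convention and should be treated as assumptions (the module version may differ between network upgrades); the injectable fetchJson parameter is ours, added to keep the function testable:

```typescript
// Poll the chain for open bids on our deployment until some arrive
// or the deadline passes.
type BidsPage = { bids: unknown[] };
type Fetcher = (url: string) => Promise<BidsPage>;

export async function pollBids(
    dseq: number,
    owner: string,
    opts: { timeoutMs: number; intervalMs?: number },
    fetchJson: Fetcher = async (url) => (await fetch(url)).json(),
): Promise<unknown[]> {
    const base = "https://api.akashnet.net";  // public LCD endpoint (assumption)
    const url =
        `${base}/akash/market/v1beta4/bids/list` +
        `?filters.owner=${owner}&filters.dseq=${dseq}&filters.state=open`;
    const deadline = Date.now() + opts.timeoutMs;
    while (Date.now() < deadline) {
        const { bids } = await fetchJson(url);
        if (bids.length > 0) return bids;
        await new Promise((r) => setTimeout(r, opts.intervalMs ?? 5_000));
    }
    return [];
}
```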

Deployment Lifecycle Management

After the lease is created, the deployment is managed through the Provider Service API — HTTP endpoints exposed by the provider, not blockchain transactions. The provider's endpoint URL is read from its on-chain provider record.

// Get deployment status
async function getDeploymentStatus(providerAddress: string, dseq: number, owner: string) {
    const providerInfo = await queryProviderInfo(providerAddress);
    const providerHost = providerInfo.hostUri;  // e.g. https://provider.example.com:8443

    // The path segments after dseq are gseq/oseq (both 1 for a
    // single-group deployment). Note: real provider endpoints authenticate
    // clients via mTLS with a certificate issued on-chain, not a bearer
    // token — the unauthenticated call here is a simplification.
    const response = await fetch(
        `${providerHost}/lease/${owner}/${dseq}/1/1/status`
    );

    return response.json();
    // Contains: pod states, forwarded IPs/ports, service URIs
}

Monitoring through the Akash Provider API gives you container status, port forwarding details, and log access. For production systems, add external monitoring (Prometheus/Grafana) via endpoints exposed from the container itself.

Workload-Specific Considerations

AI inference services are the most popular use case on Akash. Important: model weights of 7–70B parameters are tens of gigabytes and cannot simply be baked into the image or loaded instantly at startup — they need persistent storage and deliberate load management. Recommended: a separate initialization step that downloads the model to persistent storage, and a health check with a 5–10 minute timeout to allow for model loading.
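The long-timeout health check can be expressed as a small generic poller. The function below is our sketch, not part of any Akash tooling; it takes an injected probe so it works with any readiness check:

```typescript
// Wait until a readiness probe succeeds, tolerating failures while the
// model loads. Defaults: 10-minute deadline, probe every 10 seconds.
export async function waitForHealthy(
    check: () => Promise<boolean>,
    timeoutMs = 10 * 60 * 1000,
    intervalMs = 10_000,
): Promise<boolean> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        try {
            if (await check()) return true;
        } catch {
            // endpoint not reachable yet — keep waiting
        }
        await new Promise((r) => setTimeout(r, intervalMs));
    }
    return false;
}
```

Usage against a deployed endpoint: waitForHealthy(async () => (await fetch(healthUrl)).ok).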

Blockchain nodes — an Ethereum full node, a Cosmos validator. Critical: persistent storage of sufficient size (Ethereum: 1+ TB) and UDP support for P2P (Akash supports proto: udp in expose). Syncing from genesis takes days — bootstrap from a snapshot instead.

Stateless backends are the easiest case: horizontal scaling via count: N in the SDL. But there is no native load balancer — you need an external one (Cloudflare, or an nginx sidecar in the same deployment).

Databases are possible, with caveats. Akash providers offer no SLA guarantees — a provider can go offline at any time. Use PostgreSQL with replication, or keep critical data in a managed DB outside Akash.

Pricing and Expense Monitoring

Costs on Akash are denominated in uAKT (1 AKT = 1,000,000 uAKT) and charged per block (~6 seconds). For a pre-deployment estimate:

async function estimateCost(sdlContent: string): Promise<string> {
    // Use the Akash Console (ex-Cloudmos) API, or parse active bids yourself;
    // the endpoint below is illustrative and may change
    const response = await fetch("https://api.cloudmos.io/v1/pricing", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ sdl: sdlContent }),
    });
    const { pricePerBlock } = await response.json();
    const dailyCostUAKT = pricePerBlock * (86400 / 6);  // ~14,400 blocks per day
    return `${dailyCostUAKT / 1e6} AKT/day`;
}
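The per-block-to-per-day arithmetic is worth isolating in a pure helper (names are ours), since it makes the conversion explicit:

```typescript
// Convert a bid price in uAKT per block into AKT per day,
// assuming a ~6-second block time (≈14,400 blocks per day).
const BLOCKS_PER_DAY = 86_400 / 6;
const UAKT_PER_AKT = 1_000_000;

export function aktPerDay(uaktPerBlock: number): number {
    return (uaktPerBlock * BLOCKS_PER_DAY) / UAKT_PER_AKT;
}

// A bid of 1000 uAKT/block costs 14.4 AKT/day
```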

For production, implement automatic deposit top-ups (via an on-chain MsgDepositDeployment) or at least balance monitoring: when the escrow deposit is exhausted, the deployment is closed automatically, with no warning to the user.
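To schedule top-ups you need to know how long the current deposit lasts. A pure sketch (names are ours, ~6-second blocks assumed):

```typescript
// Seconds of runtime a deposit buys at a given lease price,
// counted in whole blocks of ~6 seconds each.
export function runtimeSeconds(depositUakt: number, priceUaktPerBlock: number): number {
    if (priceUaktPerBlock <= 0) throw new Error("price must be positive");
    return Math.floor(depositUakt / priceUaktPerBlock) * 6;
}

// A 5 AKT deposit (5,000,000 uAKT) at 1000 uAKT/block lasts
// 30,000 s — roughly 8.3 hours
```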

Smart Contract Integration

For a DApp where payment for computation happens on-chain with conditional logic, an EVM contract could in principle interact with Akash via Cosmos IBC or a bridge. The more realistic approach: an off-chain service listens for EVM contract events and manages Akash deployments.

// Contract accepts payment and emits an event for the off-chain worker
pragma solidity ^0.8.0;

contract ComputeRequest {
    uint256 public taskCounter;

    event ComputeTaskCreated(
        uint256 indexed taskId,
        address indexed requester,
        bytes sdlCid,           // IPFS CID of the SDL manifest
        uint256 maxBudgetWei    // escrowed payment (msg.value)
    );

    function requestCompute(bytes calldata sdlCid) external payable {
        uint256 taskId = ++taskCounter;
        // payment stays locked in the contract as escrow
        emit ComputeTaskCreated(taskId, msg.sender, sdlCid, msg.value);
    }
}

An off-chain worker (Node.js/Go) subscribes to these events, creates the deployment on Akash, and when the computation completes calls completeTask() with the result, unlocking the payment.

Integration Timelines

Basic integration (hand-written SDL, deployment via the CLI) takes 1–3 days. Programmatic integration with automatic deployment and monitoring: 1–2 weeks. Full integration with an EVM contract, escrow, and automatic lifecycle management: 3–5 weeks.