DApp Performance Optimization

Most dApps are slow for one reason: each component makes its own RPC call, and nothing coordinates them. A page with 10 components fires 10 parallel requests to Infura, each with 100–300ms latency, and the user watches the UI appear piecemeal behind loading states. Fixing this is the main optimization task.
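The mechanics of the fix can be sketched without a chain at all: a tiny batcher (hypothetical names, not a real library) collects every call issued in the same tick and resolves them all with a single backend round trip:

```typescript
// Hypothetical micro-batcher: calls issued in the same microtask share one round trip.
type Call = { to: string; data: string }

export class CallBatcher {
  private pending: { call: Call; resolve: (v: string) => void }[] = []
  private scheduled = false
  public roundTrips = 0 // how many backend requests were actually made

  constructor(private send: (calls: Call[]) => Promise<string[]>) {}

  request(call: Call): Promise<string> {
    return new Promise((resolve) => {
      this.pending.push({ call, resolve })
      if (!this.scheduled) {
        this.scheduled = true
        // flush once the current synchronous work (all components rendering) is done
        queueMicrotask(() => this.flush())
      }
    })
  }

  private async flush() {
    const batch = this.pending
    this.pending = []
    this.scheduled = false
    this.roundTrips++
    const results = await this.send(batch.map((b) => b.call))
    batch.forEach((b, i) => b.resolve(results[i]))
  }
}
```

Ten components calling `request` during one render produce one `send`, not ten; this is exactly what multicall batching does for `eth_call`.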

RPC Optimization

Multicall and Batching

multicall3 is a contract deployed on most EVM networks at the same address, 0xcA11bde05977b3631167028862bE2a173976CA11. It lets you execute N view calls in a single RPC request:

import { useReadContracts } from 'wagmi'

// Instead of 3 separate useReadContract:
const { data } = useReadContracts({
  contracts: [
    { address: tokenA, abi: erc20Abi, functionName: 'balanceOf', args: [userAddress] },
    { address: tokenB, abi: erc20Abi, functionName: 'balanceOf', args: [userAddress] },
    { address: tokenC, abi: erc20Abi, functionName: 'balanceOf', args: [userAddress] },
  ],
})

wagmi v2 automatically batches calls through multicall3 when batch: { multicall: true } is enabled in the config. Check that it's enabled: it is by default, but it sometimes gets turned off during debugging and forgotten.
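For reference, a config sketch with batching spelled out explicitly (endpoint and key are placeholders; batch.multicall defaults to true in wagmi v2):

```typescript
import { createConfig, http } from 'wagmi'
import { mainnet } from 'wagmi/chains'

export const config = createConfig({
  chains: [mainnet],
  // on by default; stated explicitly so it survives debugging sessions
  batch: { multicall: true },
  transports: {
    [mainnet.id]: http('https://mainnet.infura.io/v3/<key>'),
  },
})
```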

Caching with TanStack Query

wagmi is built on TanStack Query, and staleTime / gcTime directly affect RPC request count. Set the defaults on the QueryClient you pass to QueryClientProvider:

const queryClient = new QueryClient({
  defaultOptions: {
    queries: {
      staleTime: 12_000,    // data fresh for 12 seconds — no refetch
      gcTime: 5 * 60_000,  // cache lives 5 minutes
      retry: 2,
      retryDelay: attemptIndex => Math.min(1000 * 2 ** attemptIndex, 30_000),
    },
  },
})

For balance data, staleTime: 12_000 is reasonable: Ethereum mainnet produces a new block roughly every 12 seconds, so fresher data doesn't exist. For static data (token metadata, contract addresses), use staleTime: Infinity.
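The two knobs do different jobs: staleTime decides whether a cached value may be served without a refetch, gcTime decides when an unused entry is evicted entirely. A minimal sketch of the semantics (an illustration, not TanStack Query's implementation):

```typescript
// Minimal stale-while-cached semantics (illustration, not TanStack Query internals).
type Entry<T> = { value: T; fetchedAt: number }

export class TinyCache<T> {
  private entries = new Map<string, Entry<T>>()

  constructor(private staleTime: number, private gcTime: number) {}

  // Fresh: serve from cache, no network. Stale: serve from cache AND refetch.
  // Past gcTime: entry is gone, block on a fetch.
  get(key: string, now: number): { value?: T; needsRefetch: boolean } {
    const e = this.entries.get(key)
    if (!e) return { needsRefetch: true }
    const age = now - e.fetchedAt
    if (age > this.gcTime) {
      this.entries.delete(key)
      return { needsRefetch: true }
    }
    return { value: e.value, needsRefetch: age > this.staleTime }
  }

  set(key: string, value: T, now: number) {
    this.entries.set(key, { value, fetchedAt: now })
  }
}
```

With staleTime 12s and gcTime 5min, a balance read 5 seconds after fetching costs zero RPC calls; a read a minute later renders instantly from cache while a background refetch fires.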

WebSocket Instead of Polling

For real-time data (prices, events), useWatchContractEvent from wagmi subscribes via eth_subscribe when the transport is a WebSocket (over plain HTTP it falls back to polling). Dramatically better than polling every N seconds:

import { useWatchContractEvent } from 'wagmi'

useWatchContractEvent({
  address: poolAddress,
  abi: poolAbi,
  eventName: 'Swap',
  onLogs: (logs) => {
    // update price on each swap
    updatePrice(logs)
  },
})

Requires WebSocket-compatible RPC provider—Alchemy, QuickNode, Infura (WSS endpoint).
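A transport sketch with a WSS endpoint and an HTTP fallback, using viem's fallback transport re-exported by wagmi (URLs and keys are placeholders):

```typescript
import { createConfig, fallback, http, webSocket } from 'wagmi'
import { mainnet } from 'wagmi/chains'

export const config = createConfig({
  chains: [mainnet],
  transports: {
    // try the WebSocket first; fall back to HTTP if the socket drops
    [mainnet.id]: fallback([
      webSocket('wss://eth-mainnet.g.alchemy.com/v2/<key>'),
      http('https://eth-mainnet.g.alchemy.com/v2/<key>'),
    ]),
  },
})
```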

Bundle Size and Loading

Tree Shaking viem vs ethers.js

viem is designed with tree shaking in mind: import only what you use. A full ethers v5 import adds roughly 200KB gzipped; typical viem usage lands at about 40–60KB.

Check with bundle analyzer:

ANALYZE=true next build
# or
npx vite-bundle-analyzer

Typical findings: an accidental import of the entire ethers package instead of specific functions, duplicated bigint polyfills, and multiple versions of the same package in node_modules.

Code Splitting by Routes

Wallet interaction components go under dynamic import:

import dynamic from 'next/dynamic'

const WalletModal = dynamic(() => import('@/components/WalletModal'), {
  ssr: false, // mandatory for Web3 components
  loading: () => <Skeleton className="h-10 w-32" />,
})

ssr: false is critical: the wallet adapter and wagmi touch window.ethereum during initialization, and it doesn't exist on the server.
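For code that can't be wrapped in a dynamic import, the same problem is solved with a guard. A minimal sketch (hypothetical helper name):

```typescript
// SSR-safe access to the injected provider: `window` only exists in the browser.
export function getInjectedProvider(): unknown {
  const w = (globalThis as any).window
  return w?.ethereum // undefined during SSR and in Node
}
```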

React Rendering Optimization

Isolating Re-renders

Main pattern: don't subscribe to more state than a component needs.

// Bad: this component re-renders when any account field changes
const { address, isConnected, chain, connector } = useAccount()

useAccount re-renders on any change to the connected account (address, chain, connector, status), so keep it in small leaf components. The TanStack-backed hooks (useBalance, useReadContract, etc.) accept query.select, which works like a Redux selector: the component re-renders only when the derived value changes.

// Good: re-render only when the formatted balance changes
const { data: formatted } = useBalance({
  address,
  query: { select: (b) => b.formatted },
})
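What select buys you is small enough to sketch in full: notify a subscriber only when its selected slice of state actually changes (a hypothetical store, not wagmi internals):

```typescript
// Subscribe-with-selector: fire the callback only when the selected slice changes.
export function createStore<S>(initial: S) {
  let state = initial
  const listeners = new Set<() => void>()
  return {
    getState: () => state,
    setState(next: S) {
      state = next
      listeners.forEach((l) => l())
    },
    subscribe<T>(selector: (s: S) => T, cb: (v: T) => void) {
      let prev = selector(state)
      const listener = () => {
        const next = selector(state)
        // Object.is is the same equality check React uses for bail-outs
        if (!Object.is(next, prev)) {
          prev = next
          cb(next)
        }
      }
      listeners.add(listener)
      return () => listeners.delete(listener)
    },
  }
}
```

A subscriber selecting only `address` stays silent while `chainId` flips, which is precisely the re-render isolation the hooks give you.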

Memoization of Heavy Computations

import { useMemo } from 'react'
import { formatUnits } from 'viem'

// Format a large list of positions; recompute only when the raw data changes
const formattedPositions = useMemo(() =>
  rawPositions.map(pos => ({
    ...pos,
    // assumes amount is an 18-decimal bigint and price a plain integer multiplier
    valueUsd: formatUnits(pos.amount * pos.price, 18),
    pnlPercent: ((pos.currentPrice - pos.entryPrice) / pos.entryPrice * 100).toFixed(2),
  })),
  [rawPositions]
)
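One pitfall this kind of snippet glosses over: if amount and price are both 18-decimal fixed-point bigints, their raw product carries 36 decimals. The standard fix is WAD multiplication, dividing the product back down by 1e18:

```typescript
const WAD = 10n ** 18n // 1.0 in 18-decimal fixed point

// Multiply two 18-decimal fixed-point values, keeping 18 decimals in the result.
export function mulWad(a: bigint, b: bigint): bigint {
  return (a * b) / WAD
}
```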

Virtualization of Long Lists

A table with 1000+ transactions and no virtualization is guaranteed lag. Use @tanstack/react-virtual or react-window:

import { useVirtualizer } from '@tanstack/react-virtual'

const rowVirtualizer = useVirtualizer({
  count: transactions.length,
  getScrollElement: () => parentRef.current,
  estimateSize: () => 56, // row height
  overscan: 5,
})
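The core of any row virtualizer is a few lines of arithmetic: from the scroll offset, viewport height, and estimated row size, derive the index range to mount, padded by overscan. A standalone sketch:

```typescript
// Which row indices should be mounted for the current scroll position?
export function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  count: number,
  overscan = 5,
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan)
  const end = Math.min(
    count - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan,
  )
  return { start, end }
}
```

For 1000 rows of 56px in a 560px viewport, only ~20 rows are in the DOM at any moment instead of 1000.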

Metrics and Profiling

Before optimizing, measure. React DevTools Profiler shows which components render and for how long; the Chrome DevTools Network tab shows how many RPC requests go out and their timing; Lighthouse covers general load metrics (LCP, TTI).

Typical results after optimization: RPC requests decrease 5–10x through multicall batching, TTI drops 30–50% through code splitting, re-render budget decreases 2–3x through proper selectors.

The work takes 3–5 days: auditing current bottlenecks, configuring multicall and TanStack Query parameters, analyzing the bundle and removing unused dependencies, and profiling React rendering.