Deploying Blockscout
Blockscout is an open-source block explorer written in Elixir/Phoenix. It supports any EVM-compatible network: Ethereum, Polygon, Arbitrum, Optimism, and custom networks (Hyperledger Besu, Hardhat, Anvil). It is the de facto standard for appchains and L2 projects that want their own explorer without depending on Etherscan as a vendor.
The system has three components: the indexer (Elixir, writes to the DB), the API (Elixir/Phoenix, REST + GraphQL), and the frontend (Next.js, separate repository).
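In a Docker Compose deployment these components map to separate services. A minimal sketch of the layout (service names, images, and ports here are illustrative; the official compose files define additional services such as stats and the smart-contract verifier):

```yaml
# Hypothetical minimal service layout -- the official compose files are more complete
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: blockscout
  backend:                  # indexer + API, one Elixir release
    image: blockscout/blockscout:latest
    depends_on: [db]
    ports: ["4000:4000"]
  frontend:
    image: ghcr.io/blockscout/frontend:latest
    ports: ["3000:3000"]
```

The indexer and API run inside the same Elixir release, so they usually ship as one backend container; the frontend is deployed separately.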
Infrastructure Requirements
Minimum requirements depend heavily on network load:
| Profile | CPU | RAM | Disk | Use |
|---|---|---|---|---|
| Dev/staging | 2 vCPU | 8 GB | 50 GB SSD | Test network, small volume |
| Production (small chain) | 4 vCPU | 16 GB | 500 GB SSD | < 1000 txns/day |
| Production (active chain) | 8+ vCPU | 32 GB | 2+ TB NVMe | > 10000 txns/day |
PostgreSQL and Blockscout can share one server on a small chain. For active networks, separate them: managed PostgreSQL (RDS, Cloud SQL) plus dedicated Blockscout instances.
RPC node: Blockscout requires an archive node with tracing support — either the trace_ namespace (Erigon, or Besu with TRACE in --rpc-http-api) or the debug_ namespace (Geth with --gcmode=archive). Without traces you lose internal transactions and token transfers made via internal calls.
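Before pointing Blockscout at a node, it is worth probing whether tracing is actually exposed. A quick check (the node URL is a placeholder; which call succeeds depends on the client):

```shell
# trace_ namespace (Erigon, Besu, Nethermind):
curl -s -X POST http://your-node:8545 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"trace_block","params":["latest"]}'

# debug_ namespace (Geth archive):
curl -s -X POST http://your-node:8545 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"debug_traceBlockByNumber","params":["latest",{"tracer":"callTracer"}]}'
```

A JSON-RPC "method not found" error in the response means that namespace is not enabled on the node.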
Docker Compose Deployment
The official path for most deployments:
git clone https://github.com/blockscout/blockscout.git
cd blockscout/docker-compose
The main configuration lives in environment variable files:
# docker-compose/envs/common-blockscout.env
ETHEREUM_JSONRPC_VARIANT=geth # or besu, erigon
ETHEREUM_JSONRPC_HTTP_URL=http://your-node:8545
ETHEREUM_JSONRPC_TRACE_URL=http://your-node:8545
ETHEREUM_JSONRPC_WS_URL=ws://your-node:8546
DATABASE_URL=postgresql://blockscout:password@db:5432/blockscout
NETWORK=My Network Name
SUBNETWORK=Mainnet
COIN=ETH
COIN_NAME=Ether
# Chain ID
CHAIN_ID=12345
# Block Explorer URL
BLOCKSCOUT_HOST=explorer.yournetwork.com
# API rate limiting (optional)
API_RATE_LIMIT=10
API_RATE_LIMIT_HAMMER_REDIS_URL=redis://redis:6379
# Contract verification
ENABLE_SOURCIFY_INTEGRATION=true
SOURCIFY_SERVER_URL=https://sourcify.dev/server
# Run
docker-compose -f docker-compose.yml up -d
# Monitor indexing progress
docker-compose logs -f blockscout | grep "blocks"
The first run starts indexing from genesis (or from the block specified in FIRST_BLOCK). For networks with millions of blocks of history, indexing takes days — plan ahead.
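A rough back-of-envelope estimate helps with that planning. The numbers below are assumptions — measure your own indexing rate from the logs after the first hour and substitute it:

```shell
# Hypothetical values: 5M blocks of history, 50 blocks/s indexing rate
BLOCKS=5000000
RATE=50

SECONDS_TOTAL=$((BLOCKS / RATE))
DAYS=$((SECONDS_TOTAL / 86400))
HOURS=$(( (SECONDS_TOTAL % 86400) / 3600 ))
echo "Estimated indexing time: ${DAYS}d ${HOURS}h"
# → Estimated indexing time: 1d 3h
```

The real rate varies with RPC node latency and DB throughput, so treat the result as an order-of-magnitude figure.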
Configuration for Custom Network
If deploying for your appchain or test network, configure additional parameters:
# Disable external price fetching if token isn't traded
DISABLE_EXCHANGE_RATES=true
# Disable SSL for the DB connection (if PostgreSQL runs without TLS)
BLOCKSCOUT_ECTO_USE_SSL=false
# Block transformer: base, or clique for PoA networks with non-standard blocks
BLOCK_TRANSFORMER=base
# If indexing from specific block (not genesis)
FIRST_BLOCK=0
LAST_BLOCK=
For Hyperledger Besu with QBFT:
ETHEREUM_JSONRPC_VARIANT=besu
# Besu requires enabled API modules: --rpc-http-api=ETH,NET,TRACE,DEBUG,ADMIN
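For reference, a hedged sketch of the corresponding Besu launch flags (data path and allowlist values are placeholders to adapt to your setup):

```shell
besu \
  --data-path=/var/lib/besu \
  --rpc-http-enabled \
  --rpc-http-api=ETH,NET,TRACE,DEBUG,ADMIN \
  --rpc-ws-enabled \
  --rpc-ws-api=ETH,NET \
  --rpc-http-cors-origins="*" \
  --host-allowlist="*"
```

In production, narrow --rpc-http-cors-origins and --host-allowlist instead of using wildcards.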
Smart Contract Verification
Blockscout supports multiple verification methods:
Via Sourcify, a decentralized metadata registry: the user uploads sources plus metadata.json, Sourcify verifies and stores them, and Blockscout imports the result automatically.
Direct verification via API:
curl -X POST "https://explorer.yournetwork.com/api/v2/smart-contracts/0xCONTRACT_ADDRESS/verification/via/flattened-code" \
  -H "Content-Type: application/json" \
  -d '{
    "compiler_version": "v0.8.20+commit.a1b79de6",
    "source_code": "pragma solidity ^0.8.20;\n...",
    "is_optimization_enabled": true,
    "optimization_runs": 200,
    "contract_name": "MyContract",
    "evm_version": "london"
  }'
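Verification is processed asynchronously, so the result can be checked afterwards via the v2 API (endpoint shape and field names reflect recent Blockscout versions — treat them as assumptions for your release):

```shell
curl -s "https://explorer.yournetwork.com/api/v2/smart-contracts/0xCONTRACT_ADDRESS" \
  | jq '{is_verified, name}'
```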
The hardhat-verify plugin handles verification automatically after deployment:
// hardhat.config.ts
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-verify";

const config: HardhatUserConfig = {
  networks: {
    myChain: {
      url: "https://rpc.yournetwork.com",
      chainId: 12345,
    },
  },
  etherscan: {
    apiKey: { myChain: "any-string" }, // Blockscout doesn't require a real key
    customChains: [
      {
        network: "myChain",
        chainId: 12345,
        urls: {
          apiURL: "https://explorer.yournetwork.com/api",
          browserURL: "https://explorer.yournetwork.com",
        },
      },
    ],
  },
};

export default config;
npx hardhat verify --network myChain 0xCONTRACT_ADDRESS "Constructor Arg1"
Production Configuration
Nginx Reverse Proxy
server {
    listen 443 ssl http2;
    server_name explorer.yournetwork.com;

    ssl_certificate /etc/letsencrypt/live/explorer.yournetwork.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/explorer.yournetwork.com/privkey.pem;

    # Frontend (Next.js)
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Backend API
    location /api {
        proxy_pass http://localhost:4000;
        proxy_set_header Host $host;
        # WebSocket for real-time updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
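Blockscout's real-time updates travel over a Phoenix WebSocket, which, depending on the frontend version, may be served under a separate path such as /socket on the backend. If live updates do not work behind the proxy, a dedicated location block like this (the path is an assumption — check your backend's routing) usually helps:

```nginx
location /socket {
    proxy_pass http://localhost:4000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```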
PostgreSQL Optimization
Blockscout is both read- and write-heavy. Key settings in postgresql.conf:
max_connections = 200
shared_buffers = 4GB # 25% of RAM
effective_cache_size = 12GB # 75% of RAM
maintenance_work_mem = 1GB
checkpoint_completion_target = 0.9
wal_buffers = 64MB
default_statistics_target = 100
random_page_cost = 1.1 # for SSD
work_mem = 50MB
Run VACUUM ANALYZE regularly — Blockscout creates many dead tuples during re-indexing. Configure autovacuum more aggressively than the defaults.
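As a starting point, the global autovacuum thresholds can be tightened in postgresql.conf (the values below are illustrative, not tuned recommendations — adjust them based on observed bloat):

```
# More aggressive autovacuum than the PostgreSQL defaults
autovacuum_vacuum_scale_factor = 0.05    # default 0.2
autovacuum_analyze_scale_factor = 0.02   # default 0.1
autovacuum_max_workers = 6               # default 3
autovacuum_naptime = 15s                 # default 1min
```

Lower scale factors make vacuum trigger after a smaller fraction of a table changes, which matters for Blockscout's largest tables (blocks, transactions, logs).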
Monitoring Indexing
# Current status via API
curl https://explorer.yournetwork.com/api/v2/stats | jq
# Check indexer lag
curl https://explorer.yournetwork.com/api/v2/main-page/indexing-status
# Response: {"finished_indexing": true, "indexed_blocks_ratio": "1.00"}
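That response can be turned into a simple alert condition. A sketch in plain shell, parsing a saved response (field names taken from the response format shown above; the 0.99 threshold is an arbitrary choice):

```shell
# Sample indexing-status response (normally fetched with curl)
response='{"finished_indexing": true, "indexed_blocks_ratio": "1.00"}'

# Extract the ratio without external JSON tooling
ratio=$(echo "$response" | sed -n 's/.*"indexed_blocks_ratio": *"\([0-9.]*\)".*/\1/p')

# Exit 0 (healthy) if the ratio is at least 0.99, else exit 1
if awk -v r="$ratio" 'BEGIN { exit (r >= 0.99) ? 0 : 1 }'; then
  echo "indexer healthy (ratio $ratio)"
else
  echo "indexer lagging (ratio $ratio)"
fi
# → indexer healthy (ratio 1.00)
```

Wired into cron or an alerting agent, the exit code is enough to page on a stalled indexer.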
Prometheus metrics: Blockscout exports via /metrics endpoint (Elixir VM metrics + custom).
Updating Blockscout
# Pull new version
git pull origin master
# Stop
docker-compose down
# DB migrations (important: run before starting the new version)
docker-compose run --rm blockscout /app/bin/blockscout eval "Elixir.Explorer.ReleaseTasks.create_and_migrate()"
# Run new version
docker-compose up -d
Always read CHANGELOG before updating — some versions require full re-indexing.
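Before any upgrade, snapshot the database so a bad migration is recoverable. A minimal sketch with pg_dump (connection string and file naming are placeholders):

```shell
# Logical backup of the Blockscout DB before upgrading
pg_dump "postgresql://blockscout:password@db:5432/blockscout" \
  --format=custom \
  --file="blockscout-$(date +%Y%m%d).dump"

# Restore if the upgrade goes wrong:
# pg_restore --clean --dbname="postgresql://blockscout:password@db:5432/blockscout" blockscout-YYYYMMDD.dump
```

For multi-terabyte production databases, a filesystem or volume snapshot is usually faster than a logical dump.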
Workflow
Day 1: server setup, Docker, PostgreSQL, first Blockscout run on test data.
Day 2: configuration for your network, Nginx + SSL setup, contract verification test.
Day 3: monitoring indexing on production data, Prometheus setup, backup test.
Days 4-5: performance optimization from load testing results, team documentation.
Total: 3-5 days, including network history indexing and monitoring setup.