Developing IPFS Infrastructure
IPFS in production differs from IPFS in a tutorial about as much as a production server differs from npm start. The main problem: content addressing guarantees data won't change, but it doesn't guarantee availability. If the node pinning your CID goes down, the data becomes unavailable. For NFT metadata that is supposed to live forever, that's unacceptable.
Architecture: What You Actually Need
┌─────────────────────────────────────────────────────┐
│ Your Application │
│ │
│ Upload: File → IPFS Node → CID → Smart Contract │
│ Fetch: CID → IPFS Gateway → File │
└───────────────────┬─────────────────────────────────┘
│
┌───────────┼───────────┐
▼ ▼ ▼
Your IPFS Pinata / Cloudflare
Node Web3.Storage IPFS GW
(primary) (redundancy) (public GW)
Principle: upload via your own node and pin in multiple services for redundancy. Serve via your own gateway or Cloudflare.
Own IPFS Node: go-ipfs / Kubo
# Install Kubo (go-ipfs)
wget https://dist.ipfs.tech/kubo/v0.27.0/kubo_v0.27.0_linux-amd64.tar.gz
tar xvzf kubo_v0.27.0_linux-amd64.tar.gz
cd kubo && sudo bash install.sh
# Initialize
ipfs init --profile server
# Optimize config for server (not desktop)
ipfs config Addresses.API /ip4/127.0.0.1/tcp/5001
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["*"]'
ipfs config --json Swarm.ConnMgr.HighWater 200
ipfs config --json Swarm.ConnMgr.LowWater 100
# Disable automatic GC (manage manually)
ipfs config --json Datastore.GCPeriod '"0s"'
Systemd unit:
[Unit]
Description=IPFS Daemon
After=network.target
[Service]
User=ipfs
Environment=IPFS_PATH=/data/ipfs
ExecStart=/usr/local/bin/ipfs daemon --migrate=true --enable-gc=false
Restart=on-failure
RestartSec=10
[Install]
WantedBy=multi-user.target
Upload and Pinning
import { create } from 'kubo-rpc-client';
import * as fs from 'fs';
const ipfs = create({ url: 'http://localhost:5001' });
async function uploadFile(filePath: string): Promise<string> {
  const file = fs.readFileSync(filePath);
  const result = await ipfs.add(file, {
    pin: true,     // pin immediately so GC can't collect it
    cidVersion: 1, // CIDv1 is more modern: base32, case-insensitive
  });
  return result.cid.toString();
}
// Upload a directory (e.g., an NFT metadata folder)
async function uploadDirectory(dirPath: string): Promise<string> {
  const files = [];
  for (const filename of fs.readdirSync(dirPath)) {
    files.push({
      path: filename,
      content: fs.readFileSync(`${dirPath}/${filename}`),
    });
  }
  let rootCid = '';
  for await (const file of ipfs.addAll(files, {
    wrapWithDirectory: true,
    pin: true,
    cidVersion: 1,
  })) {
    if (file.path === '') { // the wrapping root directory is emitted last
      rootCid = file.cid.toString();
    }
  }
  return rootCid; // files addressable as ipfs://rootCid/filename.json
}
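Before a returned CID goes into a smart contract, a cheap shape check helps catch accidental CIDv0 output. This is a rough sketch only; for real validation use CID.parse from the multiformats package:

```typescript
// Rough shape check for a CIDv1 base32 string (sketch; real code should
// use CID.parse from the `multiformats` package instead of a regex).
function looksLikeCidV1(cid: string): boolean {
  // CIDv1 in base32 starts with 'b' and uses the lowercase RFC 4648 base32 alphabet
  return /^b[a-z2-7]{25,}$/.test(cid);
}
```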
Remote Pinning: Pinata and Web3.Storage
Local pinning alone is insufficient: if your server dies, so does the data. Pin to two or three services:
// Pin to Pinata via Remote Pin API (IPFS standard)
async function pinToPinata(cid: string, name: string): Promise<void> {
  const response = await fetch('https://api.pinata.cloud/psa/pins', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.PINATA_JWT}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      cid,
      name,
      // origins hint lets the service dial your node directly
      origins: [`/dns4/your-ipfs-node.example.com/tcp/4001/p2p/${YOUR_NODE_ID}`],
    }),
  });
  if (!response.ok) throw new Error(`Pinata pin failed: ${response.statusText}`);
}
// Check pin status
async function checkPinStatus(cid: string): Promise<string> {
  const response = await fetch(`https://api.pinata.cloud/psa/pins?cid=${cid}`, {
    headers: { 'Authorization': `Bearer ${process.env.PINATA_JWT}` },
  });
  const data = await response.json();
  return data.results[0]?.status ?? 'not_found'; // 'queued'|'pinning'|'pinned'|'failed'
}
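A pin request is asynchronous: the service queues it and pins in the background. A small polling helper can wait until the pin settles. A minimal sketch; the status function parameter matches the shape of checkPinStatus above, and intervalMs/maxTries are illustrative defaults:

```typescript
type StatusFn = (cid: string) => Promise<string>;

// Sketch: poll a Pinning Service API status endpoint until the pin settles.
async function waitForPin(
  cid: string,
  getStatus: StatusFn,
  intervalMs = 5_000,
  maxTries = 60,
): Promise<boolean> {
  for (let i = 0; i < maxTries; i++) {
    const status = await getStatus(cid);
    if (status === 'pinned') return true;
    if (status === 'failed') return false;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return false; // still queued/pinning after the deadline: treat as failure
}
```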
Pinning strategy for an NFT project:
- Own node: primary storage, fast access
- Pinata: first backup, reliable paid service
- Web3.Storage (Filecoin-backed): second backup, stores on Filecoin with verifiable deal receipts
- NFT.Storage: specialized for NFTs, Filecoin + IPFS
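The redundancy strategy above can be wired up as one call that fires the same pin request at every service and reports per-service results. A sketch, assuming each service is wrapped in a function with the same shape as pinToPinata:

```typescript
type PinFn = (cid: string) => Promise<void>;

// Sketch: pin one CID to several services in parallel; a single failure
// shouldn't abort the others, so use Promise.allSettled.
async function pinEverywhere(
  cid: string,
  services: Record<string, PinFn>,
): Promise<{ ok: string[]; failed: string[] }> {
  const names = Object.keys(services);
  const results = await Promise.allSettled(names.map((n) => services[n](cid)));
  const ok: string[] = [];
  const failed: string[] = [];
  results.forEach((r, i) => (r.status === 'fulfilled' ? ok : failed).push(names[i]));
  return { ok, failed };
}
```

A reasonable policy is to treat the content as durably pinned only when at least two services report success, and retry the rest in the background.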
IPFS Gateway
Browsers don't understand ipfs:// URIs natively, so you need an HTTP gateway:
# Nginx as reverse proxy for IPFS gateway
server {
  listen 443 ssl;
  server_name ipfs.yourdomain.com;

  location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;

    # IPFS content is immutable, so it can be cached forever
    # (requires a matching proxy_cache_path ... keys_zone=ipfs_cache:... in the http {} block)
    proxy_cache ipfs_cache;
    proxy_cache_valid 200 365d;
    add_header Cache-Control "public, max-age=31536000, immutable";

    # CORS for browser fetch
    add_header Access-Control-Allow-Origin "*";
  }
}
Cloudflare also provides an IPFS gateway, via cloudflare-ipfs.com/ipfs/{CID} or via Workers with a custom domain. It's a good way to get a CDN on top of IPFS.
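On the client side, it's worth falling back through several gateways instead of depending on one. A sketch; the gateway hosts are example values, substitute your own:

```typescript
// Example gateway list: own gateway first, public ones as fallback.
const GATEWAYS = [
  'https://ipfs.yourdomain.com',
  'https://cloudflare-ipfs.com',
  'https://ipfs.io',
];

// Build the path-style gateway URL for a CID.
function gatewayUrl(gateway: string, cid: string, path = ''): string {
  return `${gateway.replace(/\/+$/, '')}/ipfs/${cid}${path}`;
}

// Try each gateway in order until one returns 200.
async function fetchFromGateways(cid: string, path = ''): Promise<Response> {
  let lastError: unknown;
  for (const gw of GATEWAYS) {
    try {
      const res = await fetch(gatewayUrl(gw, cid, path), {
        signal: AbortSignal.timeout(10_000), // don't hang on a slow gateway
      });
      if (res.ok) return res;
      lastError = new Error(`${gw}: HTTP ${res.status}`);
    } catch (err) {
      lastError = err;
    }
  }
  throw new Error(`all gateways failed for ${cid}: ${String(lastError)}`);
}
```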
NFT Metadata: Standard and Structure
{
  "name": "Token #42",
  "description": "...",
  "image": "ipfs://bafybeig.../image.png",
  "attributes": [
    { "trait_type": "Background", "value": "Blue" }
  ]
}
The image gets its own CID and the metadata its own CID. In the smart contract, store baseURI = "ipfs://bafybei.../" and have tokenURI return baseURI + tokenId + ".json".
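The off-chain side of that tokenURI concatenation can be mirrored in application code, for example when pre-generating or verifying metadata URLs. A minimal sketch; the baseURI value is a hypothetical example:

```typescript
// Mirror of the on-chain tokenURI construction: baseURI + tokenId + ".json".
function tokenUri(baseUri: string, tokenId: number): string {
  const base = baseUri.endsWith('/') ? baseUri : `${baseUri}/`;
  return `${base}${tokenId}.json`;
}

// tokenUri('ipfs://bafyroot/', 42) → 'ipfs://bafyroot/42.json'
```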
Garbage Collection and Monitoring
# Manual GC (run on schedule, don't leave on auto)
ipfs repo gc
# Repository statistics
ipfs repo stat
# List all local pins
ipfs pin ls --type=recursive | wc -l
# Check if specific CID available locally
ipfs pin ls bafybeig... 2>/dev/null && echo "pinned" || echo "not pinned"
Monitoring: a Prometheus exporter for IPFS (ipfs stats bw, ipfs stats repo) plus an alert on ipfs swarm peers: fewer than 10 peers means the node is isolated. Also alert if the pin count unexpectedly decreased (GC deleted something important).
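The two alert rules above reduce to a simple comparison between consecutive scrapes. A sketch of that logic; the NodeStats shape and the thresholds are assumptions matching the text:

```typescript
// Stats from one scrape: peer count (ipfs swarm peers | wc -l) and
// pin count (ipfs pin ls --type=recursive | wc -l).
interface NodeStats {
  peers: number;
  pinCount: number;
}

// Sketch of the alert rules: isolation below 10 peers, and any
// unexpected drop in pin count between two scrapes.
function checkAlerts(prev: NodeStats, curr: NodeStats): string[] {
  const alerts: string[] = [];
  if (curr.peers < 10) alerts.push(`node isolated: only ${curr.peers} peers`);
  if (curr.pinCount < prev.pinCount)
    alerts.push(`pin count dropped: ${prev.pinCount} -> ${curr.pinCount}`);
  return alerts;
}
```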
When IPFS Is Not Needed
IPFS is overkill for temporary data (user avatars that change), data requiring deletion (GDPR), and frequently updated data; for those, use regular S3 or a CDN. IPFS makes sense for NFT metadata and media (immutability matters), smart contract artifacts (ABI, bytecode), and decentralized storage with verifiability.