QuickNode API Integration
Public RPC endpoints fall over in production within days of real load: Infura's free tier caps out at 100k requests per day, Alchemy's at 300M compute units per month, and public Ethereum RPC carries no SLA at all. QuickNode takes a different approach: a dedicated endpoint, predictable throughput, extended methods, and marketplace add-ons. Integration takes about a day, but a few nuances are worth knowing in advance.
What QuickNode offers vs public RPC
Dedicated endpoint — no neighbor traffic interference. Your requests don't compete with thousands of others on the same IP.
Add-ons — ready-made overlays: NFT API, Token API, DeFi stats, Trace API. Instead of custom data aggregation, connect a ready endpoint.
Streams — push notifications for new blocks, transactions, events. Instead of polling WebSocket with filters — configure the stream directly in QuickNode interface.
Multi-chain — one account, dozens of networks. Same SDK works with Ethereum, Solana, Polygon, Arbitrum, Base without code changes.
Integration: Practice
QuickNode issues two endpoints per node: HTTPS for JSON-RPC and WSS for WebSocket subscriptions. Both work with any standard Web3 client:
import { createPublicClient, http, webSocket } from 'viem'
import { mainnet } from 'viem/chains'
// HTTP for requests
const httpClient = createPublicClient({
chain: mainnet,
transport: http('https://your-endpoint.quiknode.pro/YOUR_KEY/'),
})
// WebSocket for subscriptions
const wsClient = createPublicClient({
chain: mainnet,
transport: webSocket('wss://your-endpoint.quiknode.pro/YOUR_KEY/'),
})
// Subscribe to new blocks
const unwatch = wsClient.watchBlocks({
onBlock: (block) => console.log('New block:', block.number),
})
Working with Add-ons
NFT API, Token API, and other overlays are available through the same endpoints, but via non-standard JSON-RPC methods prefixed with qn_:
// Get NFT collection of wallet through QuickNode NFT API
const response = await httpClient.request({
method: 'qn_fetchNFTs' as any,
params: [{
wallet: '0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045',
page: 1,
perPage: 20,
}],
})
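The NFT API is paginated, so collecting a full wallet usually means looping over pages. The sketch below abstracts the transport behind a `call` function so the loop is testable offline; the `assets` and `totalPages` field names follow QuickNode's documented qn_fetchNFTs response shape, but verify them against the current docs before relying on them.

```typescript
// Collect every page of a paginated qn_ response.
// `call` stands in for your RPC transport (e.g. httpClient.request);
// `assets` / `totalPages` are assumed from QuickNode's documented
// qn_fetchNFTs response shape -- check the current API reference.
type Page = { assets: unknown[]; totalPages: number }
type RpcCall = (method: string, params: unknown[]) => Promise<Page>

async function fetchAllNFTs(
  call: RpcCall,
  wallet: string,
  perPage = 20,
): Promise<unknown[]> {
  const all: unknown[] = []
  let page = 1
  let totalPages = 1
  do {
    const res = await call('qn_fetchNFTs', [{ wallet, page, perPage }])
    all.push(...res.assets)
    totalPages = res.totalPages
    page++
  } while (page <= totalPages)
  return all
}
```

Passing the transport in as a function also makes it easy to wrap with retry or rate-limit logic later.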
Streams: Setting up push notifications
Streams are configured in QuickNode dashboard. You choose stream type (new blocks, pending transactions, specific events), destination (webhook, Kafka, S3), and filters. This is an alternative to running your own WebSocket server — less code, but less control.
For webhook destinations, QuickNode sends POST with batch records. Important: set up idempotence on your side — QuickNode may resend duplicates on retry.
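A minimal in-process dedupe layer can look like the sketch below. The record `id` is whatever unique key your stream payload carries (block hash, transaction hash, etc.), and the TTL is an assumed value — in a multi-instance deployment you would back this with Redis or a database instead of process memory.

```typescript
// In-memory idempotence guard for webhook batches: remembers
// recently seen record keys and filters out redeliveries.
class SeenCache {
  private seen = new Map<string, number>()
  constructor(private ttlMs = 10 * 60 * 1000) {} // assumed 10-minute window

  // Returns true only the first time a key is seen within the TTL window
  firstTime(key: string): boolean {
    const now = Date.now()
    // Evict expired entries so the map doesn't grow unboundedly
    for (const [k, t] of this.seen) if (now - t > this.ttlMs) this.seen.delete(k)
    if (this.seen.has(key)) return false
    this.seen.set(key, now)
    return true
  }
}

// In a webhook handler: process only records not seen before
const cache = new SeenCache()
function handleBatch(records: { id: string }[]): { id: string }[] {
  return records.filter((r) => cache.firstTime(r.id))
}
```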
Managing Limits
QuickNode counts credits, not requests. A simple eth_blockNumber costs 20 credits; eth_getLogs over a wide block range can run to 200+ credits. The most expensive methods: debug_traceTransaction (500 credits) and trace_block (200+ credits).
Credit-saving strategies:
- Cache static data: decimals(), symbol(), historical blocks — they don't change
- Use eth_getLogs with narrow filters; don't scan the full range
- For polling, WebSocket subscriptions are cheaper than a series of eth_getBlockByNumber calls
- Batch requests via JSON-RPC batch: multiple calls in one HTTP request
// Enable JSON-RPC batching on the transport: viem coalesces
// concurrent calls into a single batched HTTP request
const batchedClient = createPublicClient({
chain: mainnet,
transport: http('https://your-endpoint.quiknode.pro/YOUR_KEY/', { batch: true }),
})
// These three calls go out as one JSON-RPC batch, not three requests
const [block, balance, gasPrice] = await Promise.all([
batchedClient.getBlock({ blockNumber: 20000000n }),
batchedClient.getBalance({ address: '0x...' }),
batchedClient.getGasPrice(),
])
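The first strategy, caching immutable reads, can be as simple as a keyed async memoizer. In this sketch `fetchSymbol` is a hypothetical stand-in for your actual contract read (e.g. a viem readContract call); the memoizer itself is generic.

```typescript
// Memoize immutable lookups (decimals, symbol, finalized blocks):
// each unique key hits the RPC once, then serves from the cache.
// Caching the Promise (not the value) also deduplicates
// concurrent in-flight requests for the same key.
function memoizeAsync<T>(fn: (key: string) => Promise<T>) {
  const cache = new Map<string, Promise<T>>()
  return (key: string): Promise<T> => {
    if (!cache.has(key)) cache.set(key, fn(key))
    return cache.get(key)!
  }
}

// Hypothetical contract read -- replace with your real RPC call
let rpcCalls = 0
const fetchSymbol = async (_token: string) => {
  rpcCalls++
  return 'TKN'
}
const getSymbol = memoizeAsync(fetchSymbol)
```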
Monitoring and Failover
QuickNode provides metrics in dashboard, but production needs custom monitoring:
// Wrapper with rotation-based failover across multiple endpoints
class RpcClient {
private endpoints: string[]
private currentIndex = 0
private id = 0
constructor(endpoints: string[]) {
this.endpoints = endpoints
}
async request(method: string, params: unknown[]): Promise<unknown> {
for (let attempt = 0; attempt < this.endpoints.length; attempt++) {
try {
return await this.callEndpoint(
this.endpoints[this.currentIndex],
method, params
)
} catch (err) {
// Rotate to the next endpoint on error
this.currentIndex = (this.currentIndex + 1) % this.endpoints.length
if (attempt === this.endpoints.length - 1) throw err
}
}
throw new Error('unreachable: all endpoints exhausted')
}
private async callEndpoint(url: string, method: string, params: unknown[]) {
const res = await fetch(url, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ jsonrpc: '2.0', id: ++this.id, method, params }),
})
if (!res.ok) throw new Error(`HTTP ${res.status}`)
const body = await res.json()
if (body.error) throw new Error(body.error.message)
return body.result
}
}
For serious production, maintain two endpoints — QuickNode as primary plus an Alchemy/Infura fallback — with automatic failover when the error rate or latency exceeds a threshold.
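The threshold logic can be sketched as a rolling window over recent call outcomes. The window size and thresholds below are assumed defaults — tune them for your traffic pattern.

```typescript
// Rolling-window health check: trip failover when the error rate
// or average latency over the last N calls exceeds a threshold.
class HealthTracker {
  private outcomes: { ok: boolean; latencyMs: number }[] = []
  constructor(
    private windowSize = 50,        // assumed: look at the last 50 calls
    private maxErrorRate = 0.1,     // assumed: 10% errors trips failover
    private maxAvgLatencyMs = 2000, // assumed: 2s average latency trips it
  ) {}

  record(ok: boolean, latencyMs: number): void {
    this.outcomes.push({ ok, latencyMs })
    if (this.outcomes.length > this.windowSize) this.outcomes.shift()
  }

  shouldFailover(): boolean {
    if (this.outcomes.length < 10) return false // not enough data yet
    const errors = this.outcomes.filter((o) => !o.ok).length
    const avgLatency =
      this.outcomes.reduce((s, o) => s + o.latencyMs, 0) / this.outcomes.length
    return (
      errors / this.outcomes.length > this.maxErrorRate ||
      avgLatency > this.maxAvgLatencyMs
    )
  }
}
```

Call `record()` after every RPC response and check `shouldFailover()` before choosing an endpoint; combined with the rotation in RpcClient above, this turns blind rotation into health-aware routing.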