DApp Backend Development with Go
Go is not an obvious choice for blockchain work at first glance: most dApp tutorials use Node.js/TypeScript. In practice, though, Go excels where reliability under load matters: indexing events, processing webhooks from nodes, and the off-chain components of keepers and bots. Geth is written in Go, and go-ethereum is the most mature low-level library for working with the EVM.
go-ethereum: Key Patterns
Connection and Reading
client, err := ethclient.Dial("wss://eth-mainnet.g.alchemy.com/v2/KEY")
if err != nil {
    log.Fatal(err)
}
// For production: fall back between multiple providers

// Read contract data via a raw bound contract
tokenABI, _ := abi.JSON(strings.NewReader(ERC20ABI))
contract := bind.NewBoundContract(tokenAddress, tokenABI, client, client, client)

var balanceResult []interface{}
if err := contract.Call(nil, &balanceResult, "balanceOf", userAddress); err != nil {
    log.Fatal(err)
}
balance := balanceResult[0].(*big.Int) // untyped: manual type assertion required
In practice, use abigen to generate Go bindings from the ABI. This gives typed methods instead of interface{}:
abigen --abi=./abi/Token.json --pkg=token --out=./contracts/token.go
token, _ := token.NewToken(tokenAddress, client)
balance, _ := token.BalanceOf(nil, userAddress) // typed
Event Subscriptions
A WebSocket subscription to events is the foundation of indexers:
query := ethereum.FilterQuery{
    Addresses: []common.Address{contractAddress},
    Topics: [][]common.Hash{{
        crypto.Keccak256Hash([]byte("Transfer(address,address,uint256)")),
    }},
}

logs := make(chan types.Log)
sub, err := client.SubscribeFilterLogs(ctx, query, logs)
if err != nil {
    log.Fatal(err)
}

for {
    select {
    case err := <-sub.Err():
        log.Println("subscription error:", err)
        // reconnect logic
    case vLog := <-logs: // don't shadow the log package
        processTransferEvent(vLog)
    }
}
Critical: WebSocket connections drop. You need reconnect logic with exponential backoff; in production, run a separate goroutine that monitors the subscription state and recreates it on disconnection.
Indexer Service Architecture
Typical use case: collect smart contract events, store them in PostgreSQL, and provide a REST/GraphQL API for the frontend.
Service Structure
cmd/
  indexer/main.go  — entry point
  api/main.go      — HTTP server
internal/
  indexer/         — event processing logic
  repository/      — data layer (PostgreSQL)
  blockchain/      — go-ethereum client
  api/handlers/    — HTTP handlers
Handling Reorganizations (Reorgs)
This is the most non-obvious thing for developers without blockchain experience. Blocks can be reorganized: a transaction in block 100 can disappear if a reorg happens. A naive indexer that doesn't account for reorgs will accumulate incorrect data.
Solution: don't mark blocks as "finalized" immediately. Wait for N confirmations before doing so; the right N is chain-specific (12 is the common heuristic for Ethereum mainnet, while chains like Polygon and Arbitrum have their own finality models and need their own thresholds). Store block_hash alongside event data, and when a reorg is detected, roll back all records whose block_hash changed.
type IndexedEvent struct {
    ID          int64
    BlockNumber uint64
    BlockHash   common.Hash
    TxHash      common.Hash
    LogIndex    uint
    Data        []byte
    Finalized   bool
}
Periodically query eth_getBlockByNumber for the latest N blocks and compare block_hash with stored values.
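The comparison step can be sketched as follows. Block hashes are simplified to strings here, and headerHashFn stands in for a client.HeaderByNumber call; in a real indexer you would compare common.Hash values from go-ethereum:

```go
package main

// headerHashFn abstracts client.HeaderByNumber: given a block number, it
// returns the hash the node currently considers canonical for that block.
// (Simplified to strings here; real code compares common.Hash values.)
type headerHashFn func(number uint64) string

// findReorgPoint re-checks the last `depth` blocks against stored hashes,
// scanning from newest to oldest. It returns the lowest block number whose
// hash diverged (everything from there up must be rolled back and
// re-indexed), or ok=false if the stored view still matches the chain.
func findReorgPoint(stored map[uint64]string, latest, depth uint64, chain headerHashFn) (reorgFrom uint64, ok bool) {
	start := uint64(0)
	if latest+1 > depth {
		start = latest + 1 - depth
	}
	for n := latest; n >= start; n-- {
		if want := stored[n]; want != "" && want != chain(n) {
			reorgFrom, ok = n, true // keep scanning: an older block may also differ
		}
		if n == 0 {
			break
		}
	}
	return reorgFrom, ok
}
```

When a reorg point is found, delete all indexed events with BlockNumber >= reorgFrom and resume indexing from that block.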
Backfilling Historical Data
On first run, or after gaps, you need to traverse historical blocks. Use FilterLogs with a block range, but cap the range size (nodes usually limit it to 2000-10000 blocks). Parallelize the backfill with a worker pool:
func backfill(ctx context.Context, from, to uint64, workers int) {
    const chunkSize = 2000 // stay under the node's range limit
    chunks := make(chan [2]uint64, workers)
    var wg sync.WaitGroup
    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for chunk := range chunks {
                logs, err := client.FilterLogs(ctx, ethereum.FilterQuery{
                    FromBlock: new(big.Int).SetUint64(chunk[0]),
                    ToBlock:   new(big.Int).SetUint64(chunk[1]),
                    Addresses: []common.Address{contractAddress},
                })
                if err != nil {
                    // log and retry the chunk; don't silently drop it
                    continue
                }
                processLogs(logs)
            }
        }()
    }
    // Split [from, to] into chunks and feed the workers
    for start := from; start <= to; start += chunkSize {
        end := start + chunkSize - 1
        if end > to {
            end = to
        }
        chunks <- [2]uint64{start, end}
    }
    close(chunks)
    wg.Wait()
}
Transaction Signing and Sending
Off-chain components (keepers, automated transaction senders) need private key management in the backend:
privateKey, _ := crypto.HexToECDSA(os.Getenv("PRIVATE_KEY"))
auth, _ := bind.NewKeyedTransactorWithChainID(privateKey, chainID)

// EIP-1559 pricing
tip, _ := client.SuggestGasTipCap(ctx)
auth.GasTipCap = tip
// baseFee from the latest block header; 2x headroom so the tx survives base fee growth
auth.GasFeeCap = new(big.Int).Add(new(big.Int).Mul(baseFee, big.NewInt(2)), tip)
tx, err := contract.SomeMethod(auth, arg1, arg2)
For production: AWS KMS or HashiCorp Vault instead of an env variable for key storage. Nonce management is a separate topic: with parallel transaction sending you need a nonce manager that atomically issues the next nonce and handles dropped or stuck transactions.
API Layer
// Standard stack: chi or gin for routing, pgx/v5 for PostgreSQL
r := chi.NewRouter()
r.Use(middleware.Logger)
r.Use(middleware.RealIP)
r.Use(cors.Handler(cors.Options{
    AllowedOrigins: []string{"https://app.example.com"},
    AllowedMethods: []string{"GET", "POST"},
}))
r.Get("/api/v1/events", handlers.GetEvents)
r.Get("/api/v1/user/{address}/positions", handlers.GetUserPositions)
For real-time updates, expose a WebSocket endpoint via gorilla/websocket or nhooyr.io/websocket: one goroutine per connection, with channel-based broadcast from the indexer to the WebSocket clients.
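The channel-based broadcast can be sketched as a hub goroutine that owns the set of client send-channels (stdlib only; the per-connection read/write pumps against the actual WebSocket library are omitted):

```go
package main

// Hub fans indexer events out to all connected WebSocket clients.
// Each client is represented by a buffered send channel; the per-connection
// write pump drains it and writes to the socket.
type Hub struct {
	register   chan chan []byte
	unregister chan chan []byte
	broadcast  chan []byte
	done       chan struct{}
}

func NewHub() *Hub {
	h := &Hub{
		register:   make(chan chan []byte),
		unregister: make(chan chan []byte),
		broadcast:  make(chan []byte),
		done:       make(chan struct{}),
	}
	go h.run()
	return h
}

// run owns the client set, so no mutex is needed: all mutation happens
// on this single goroutine.
func (h *Hub) run() {
	clients := make(map[chan []byte]bool)
	for {
		select {
		case c := <-h.register:
			clients[c] = true
		case c := <-h.unregister:
			delete(clients, c)
			close(c)
		case msg := <-h.broadcast:
			for c := range clients {
				select {
				case c <- msg:
				default: // slow client: drop the message rather than block the hub
				}
			}
		case <-h.done:
			return
		}
	}
}
```

The non-blocking send means a slow client loses messages instead of stalling the whole broadcast; an alternative policy is to unregister clients whose buffers fill up.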
Development Timeline
Simple indexer + REST API (one contract, 3-5 endpoints): 3-4 days. Full service with backfill, reorg handling, WebSocket, nonce manager: 1.5-2 weeks.