Multi-Network Contract Monitoring System Development

We design and develop full-cycle blockchain solutions: from smart contract architecture to launching DeFi protocols, NFT marketplaces and crypto exchanges. Security audits, tokenomics, integration with existing infrastructure.

Development of Multi-Network Contract Monitoring System

Multi-network monitoring is not simply "running several Etherscan tabs". When a protocol operates simultaneously on Ethereum, Arbitrum, Polygon, Base, and Optimism, with bridged assets moving between them, a delay in detecting an anomaly on one network can cost user funds on another. The problem isn't data access; the data is available. The problem is correct aggregation, cross-network event correlation, and response speed.

Monitoring System Architecture

Data Sources

RPC nodes — direct calls to EVM nodes over WebSocket for real-time events. Each network needs a reliable RPC endpoint with eth_subscribe support:

// Subscribe to contract events via WebSocket (ethers v6)
import { ethers } from 'ethers';

const provider = new ethers.WebSocketProvider(RPC_WS_URL);
const contract = new ethers.Contract(address, abi, provider);

contract.on('Transfer', (from, to, value, event) => {
  emitEvent({
    network: 'arbitrum',
    block: event.log.blockNumber,        // block containing the log
    txHash: event.log.transactionHash,
    type: 'Transfer',
    data: { from, to, value }
  });
});

The problem with a single RPC: public nodes are unreliable and drop events under load. The solution: at least two independent providers per network (Alchemy + QuickNode, or your own node), with deduplication by (chainId, txHash, logIndex).
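The deduplication step can be sketched as a seen-set keyed by (chainId, txHash, logIndex). In production this set would live in Redis with a TTL; the in-memory version below is purely illustrative, and all names are assumptions, not part of any library.

```python
def make_dedup_filter():
    """Return a predicate that is True only the first time a log is seen."""
    seen = set()

    def is_new(chain_id: int, tx_hash: str, log_index: int) -> bool:
        # Normalize the hash so the same log delivered by two providers
        # matches even if they differ in hex casing.
        key = (chain_id, tx_hash.lower(), log_index)
        if key in seen:
            return False  # duplicate delivery from a second provider
        seen.add(key)
        return True

    return is_new
```

A second provider replaying the same log produces an identical key, so the event is processed exactly once regardless of which provider delivered it first.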

The Graph / Subgraph — for historical data and complex queries. A mediation layer over raw RPC with a latency of 1–3 blocks; ideal for analytical queries and cross-network balance verification.

Network defaults:

Network        Block time   Recommended RPC           Finality
Ethereum       ~12 sec      Alchemy/Infura            ~64 blocks (~13 min)
Arbitrum One   ~0.25 sec    Arbitrum RPC / Alchemy    L1 finality
Polygon PoS    ~2 sec       Polygon RPC / QuickNode   ~256 blocks
Base           ~2 sec       Base RPC / Alchemy        L1 finality
Optimism       ~2 sec       Optimism RPC / Alchemy    L1 finality
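A monitoring pipeline typically uses these defaults to decide when an event is safe to act on: wait for a network-specific confirmation depth before treating it as final. The sketch below mirrors the table; the depths are the article's recommendations, not protocol constants, and for the L2 networks (where true finality is tied to L1) the value is an assumed stand-in depth.

```python
# Confirmation depths from the table above (illustrative defaults).
# For Arbitrum/Base/Optimism, real finality follows L1; the values
# here are hypothetical stand-in depths a monitor might use instead.
FINALITY_BLOCKS = {
    'ethereum': 64,
    'polygon': 256,
    'arbitrum': 1200,   # assumption: proxy for "wait for L1 finality"
    'base': 1200,
    'optimism': 1200,
}

def is_final(network: str, tx_block: int, current_block: int) -> bool:
    """True once a transaction is buried deeper than the network's threshold."""
    depth = FINALITY_BLOCKS.get(network)
    if depth is None:
        raise ValueError(f"no finality default for network: {network}")
    return current_block - tx_block >= depth
```

Low-severity alerts can fire immediately on unconfirmed events, while automated responses should generally wait for this check to pass.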

Event Processing Pipeline

Raw events can't be analyzed directly; they first need normalization and enrichment:

RPC Listener → Message Queue (Kafka/Redis Streams) → Event Processor → Alert Engine → Notification
                                                    ↓
                                              Time-series DB (InfluxDB/TimescaleDB)
                                                    ↓
                                              Analytics Dashboard

The Message Queue buffers spikes. During a sudden burst of on-chain activity (e.g., a large liquidation cascade), events arrive faster than they can be processed. Kafka with 24h retention allows replay after a processor failure.

The Event Processor normalizes events from different networks into a unified format, decodes ABIs, enriches events (token prices, account metadata), and detects anomalies.
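The unified format might look like the dataclass below. The field names are assumptions chosen to match the snippets in this article (network, type, data), not a standard schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NormalizedEvent:
    network: str                  # e.g. 'arbitrum'
    chain_id: int
    block: int
    tx_hash: str
    log_index: int
    type: str                     # decoded event name, e.g. 'Transfer'
    timestamp: int                # block timestamp, unix seconds
    data: dict = field(default_factory=dict)  # decoded ABI parameters
    usd_value: Optional[float] = None         # filled in by enrichment
```

Keeping (chain_id, tx_hash, log_index) on every event means downstream consumers can deduplicate and correlate without going back to the raw log.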

The Alert Engine applies rules to normalized events. Stateful rules need a state store (Redis). Example:

from typing import Optional

# AlertRule, NormalizedEvent, Alert, and get_token_price come from the
# surrounding monitoring system.
class LargeTransferAlert(AlertRule):
    def evaluate(self, event: NormalizedEvent) -> Optional[Alert]:
        if event.type != 'Transfer':
            return None
        usd_value = event.data['value'] * get_token_price(event.data['token'])
        threshold = self.get_dynamic_threshold(
            token=event.data['token'],
            window='24h',
            multiplier=10.0  # alert at 10x the 24h average
        )
        if usd_value > threshold:
            return Alert(
                severity='HIGH',
                message=f'Large transfer: ${usd_value:,.0f} on {event.network}',
                context=event
            )
        return None
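The dynamic threshold used above can be implemented as a rolling window of recent transfer values per token, with a static floor so thin history doesn't produce near-zero thresholds. A sketch under those assumptions; the class name and parameter values are illustrative.

```python
from collections import defaultdict, deque

class RollingThreshold:
    """Per-token alert threshold: multiplier x rolling average, never below a floor."""

    def __init__(self, maxlen: int = 1000, floor: float = 100_000.0):
        # One bounded deque of recent USD values per token.
        self.history = defaultdict(lambda: deque(maxlen=maxlen))
        self.floor = floor

    def record(self, token: str, usd_value: float) -> None:
        self.history[token].append(usd_value)

    def threshold(self, token: str, multiplier: float = 10.0) -> float:
        values = self.history[token]
        if not values:
            return self.floor  # no history yet: fall back to the static floor
        return max(self.floor, multiplier * sum(values) / len(values))
```

A time-bucketed average (e.g. from TimescaleDB) would be more faithful to a literal 24h window; the bounded deque trades that precision for zero external dependencies.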

Cross-Chain Correlation

The most valuable functionality for multi-network protocols is linking events between networks. Typical scenarios:

Bridge monitoring — a token is locked on Ethereum and should appear on Arbitrum. If it doesn't appear within N minutes, raise an alert.

import json

class BridgeCorrelator:
    def __init__(self, redis_client):
        self.redis = redis_client

    def on_bridge_initiated(self, event):
        # Key by bridge nonce so initiation and completion match:
        # the tx hashes differ between the source and destination chains.
        key = f"bridge:{event.src_chain}:{event.bridge_nonce}"
        self.redis.setex(key, 3600, json.dumps(event.to_dict()))

    def on_bridge_completed(self, event):
        key = f"bridge:{event.src_chain}:{event.bridge_nonce}"
        pending = self.redis.get(key)
        if not pending:
            alert(f"Bridge completion without initiation: {event}")
            return
        self.redis.delete(key)  # matched; stop tracking
        initiation = json.loads(pending)
        latency = event.timestamp - initiation['timestamp']
        if latency > EXPECTED_BRIDGE_LATENCY[event.bridge_protocol]:
            alert(f"Bridge latency anomaly: {latency}s")

TVL consistency check — the sum of TVL across L2s shouldn't exceed the amount locked on L1. Run a periodic check via subgraph queries and alert if the discrepancy exceeds X%.
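Once the numbers are fetched (e.g. from per-network subgraphs), the check itself reduces to a comparison. A hedged sketch; the function name and tolerance are illustrative:

```python
from typing import Optional

def check_tvl_consistency(l1_locked_usd: float,
                          l2_tvl_usd: dict,
                          max_excess_pct: float = 1.0) -> Optional[str]:
    """Alert text if L2 TVL exceeds the L1 locked amount beyond tolerance."""
    total_l2 = sum(l2_tvl_usd.values())
    excess_pct = (total_l2 - l1_locked_usd) / l1_locked_usd * 100
    if excess_pct > max_excess_pct:
        return (f"TVL inconsistency: L2 total ${total_l2:,.0f} exceeds "
                f"L1 locked ${l1_locked_usd:,.0f} by {excess_pct:.2f}%")
    return None
```

The tolerance absorbs the 1–3 block subgraph lag mentioned earlier; set it too tight and in-flight bridge transfers will trigger false positives.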

What to Monitor: Practical List

Security-Critical Events

  • Ownership transfers — OwnershipTransferred, RoleGranted on any protocol contract
  • Upgrade proposals — events from Timelock (new proposals, execution)
  • Large withdrawals — withdrawal > N% TVL in short period
  • Flash loan usage — flash loan receipt + protocol interaction in one tx
  • Oracle price deviations — protocol price deviates from market > X%
  • Pause events — someone paused a contract

Operational Metrics

  • Gas usage anomalies (a spike may mean inefficient execution or an attack)
  • Failed transaction ratio (a rise in failed txs at the router may mean a UI/API bug)
  • Block inclusion latency for your own transactions (keeper bots, liquidation bots)
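The failed-transaction ratio is easiest to track over a sliding time window. The sketch below takes timestamps as arguments so it stays deterministic; all parameter values are illustrative.

```python
from collections import deque

class FailureRatioMonitor:
    """Track the share of failed txs in a sliding window; flag when it spikes."""

    def __init__(self, window_seconds: int = 300,
                 threshold: float = 0.2, min_samples: int = 20):
        self.window = window_seconds
        self.threshold = threshold
        self.min_samples = min_samples
        self.samples = deque()  # (timestamp, success: bool)

    def record(self, success: bool, now: float) -> None:
        self.samples.append((now, success))
        # Drop samples that have aged out of the window.
        while self.samples and self.samples[0][0] < now - self.window:
            self.samples.popleft()

    def is_anomalous(self, now: float) -> bool:
        recent = [s for t, s in self.samples if t >= now - self.window]
        if len(recent) < self.min_samples:
            return False  # not enough data to judge
        failure_ratio = recent.count(False) / len(recent)
        return failure_ratio > self.threshold
```

The min_samples guard matters in practice: one failed transaction in a quiet period is noise, not a UI bug.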

Business Metrics

  • TVL dynamics per network
  • Volume per network
  • Unique active addresses
  • Protocol revenue (fees collected)

Technical Stack

OpenZeppelin Defender — if the protocol uses OZ, Defender Sentinel covers basic monitoring with minimal setup. Limitations: weak cross-chain correlation, no custom analytics.

Tenderly Alerts — good for development/staging, covers the main networks. For production-critical systems, build on top of it.

Your own system — justified when you need cross-chain correlation, protocol-specific business logic in alerts, or integration with internal systems, or when running more than 10 contracts across more than 3 networks.

Stack for custom system:

  • Event ingestion: Node.js + ethers.js WebSocket listeners
  • Message queue: Redis Streams (small projects) or Kafka (high load)
  • Storage: TimescaleDB for time-series, PostgreSQL for metadata
  • Alert rules: Python with rule engine
  • Notifications: PagerDuty/OpsGenie for critical, Telegram/Discord for operational
  • Dashboard: Grafana over TimescaleDB

Alert Response: Not Just Notifications

Monitoring without automatic response is only half a system. Use OpenZeppelin Defender Autotasks or a custom keeper bot:

  • Abnormally large withdrawal → auto-pause the contract (if the keeper holds the pauser role)
  • Oracle deviation → switch to fallback oracle
  • Bridge stuck > 2 hours → notify bridge operator + create ticket

Automatic response requires a careful audit of the keeper bot itself, since it becomes a critical security element.
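One way to keep that audit tractable is to hold the alert-to-action mapping in data, separate from the code that executes actions, so the response policy can be reviewed on its own. A minimal dispatcher sketch; the alert types and action names are hypothetical:

```python
# Hypothetical policy table mapping alert types to automated actions.
RESPONSE_POLICY = {
    'LARGE_WITHDRAWAL': 'pause_contract',
    'ORACLE_DEVIATION': 'switch_to_fallback_oracle',
    'BRIDGE_STUCK':     'notify_bridge_operator',
}

def dispatch(alert_type: str, actions: dict):
    """Execute the configured action for an alert type, if any."""
    action_name = RESPONSE_POLICY.get(alert_type)
    if action_name is None:
        return None  # no automated response configured: humans handle it
    return actions[action_name]()
```

Unmapped alert types deliberately fall through to human operators; an automated system should only ever take actions that were explicitly configured.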

Dashboards

Structure Grafana dashboards as:

  • Overview — all nodes, all networks, status at a glance
  • Per-network deep dive — detailed metrics per network
  • Validator performance — for staking nodes, including APR and slashing risks
  • Infrastructure — CPU/RAM/Disk per node

For public RPC services, additionally track request metrics (RPS, latency, error rate), rate-limiting stats, and top methods by load.

Development Timeline

Component                                      Timeline
Basic exporters (EVM + 1–2 other networks)     1–2 weeks
Prometheus + VictoriaMetrics + Grafana         3–5 days
Alert rules + PagerDuty/Telegram integration   2–3 days
Auto-failover for RPC                          1 week
Dashboards + documentation                     1 week

Monitoring for 3–5 networks with basic dashboards and alerts takes 3–4 weeks. An advanced system with auto-remediation and custom exporters for non-standard protocols takes 6–8 weeks.