DQN/DDQN-Based RL Trading Agent Development

Trading Agent with DQN (Deep Q-Network)

DQN was the first deep RL algorithm to reach human-level performance across dozens of Atari games (DeepMind, 2015). For trading it brings a discrete action space (buy/sell/hold), experience replay, and a target network, which makes it suitable for single-asset trading with clear entries and exits.

DQN for Trading

Original DQN works with discrete actions. This makes it natural for signal strategies:

Action space:

  • 0: Hold (do nothing)
  • 1: Buy (open long position)
  • 2: Sell / Close (close position / open short)

For a single asset this is sufficient. For multi-asset portfolios you either need a DQN with a factored action space or a switch to SAC/PPO.
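One minimal way to factor a discrete action space across assets is to enumerate joint target positions; the helper name and position levels below are illustrative, not part of any library:

```python
from itertools import product

def build_action_table(n_assets, levels=(-1, 0, 1)):
    """Enumerate joint discrete actions: one target position per asset."""
    return list(product(levels, repeat=n_assets))

# 2 assets with 3 position levels each -> 9 joint actions.
# The table grows as len(levels) ** n_assets, which is exactly why
# continuous-action methods (SAC/PPO) scale better for portfolios.
actions = build_action_table(2)
```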

Q-function: Q(s, a) is the expected total discounted reward obtained by taking action a in state s and following the policy thereafter.
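The "total discounted reward" being estimated can be computed recursively from a reward sequence; a minimal sketch:

```python
def discounted_return(rewards, gamma=0.99):
    """G_t = r_t + gamma * G_{t+1}, accumulated backwards over the episode."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# With gamma=0.5: 1 + 0.5 + 0.25 = 1.75
discounted_return([1.0, 1.0, 1.0], gamma=0.5)
```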

Architecture

import torch
import torch.nn as nn

class DQNTrading(nn.Module):
    def __init__(self, state_dim, n_actions=3, hidden=256):
        super().__init__()
        # Dueling DQN architecture
        self.feature = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU()
        )
        # Value stream: V(s)
        self.value = nn.Sequential(
            nn.Linear(hidden, 128), nn.ReLU(),
            nn.Linear(128, 1)
        )
        # Advantage stream: A(s, a)
        self.advantage = nn.Sequential(
            nn.Linear(hidden, 128), nn.ReLU(),
            nn.Linear(128, n_actions)
        )

    def forward(self, x):
        feat = self.feature(x)
        V = self.value(feat)
        A = self.advantage(feat)
        # Q = V + (A - mean(A))
        return V + (A - A.mean(dim=1, keepdim=True))

Dueling DQN separates the state value V(s) from the action advantage A(s, a). In trading, the market regime largely determines the overall value (V), while the choice of action contributes only a relative advantage (A). This decomposition usually converges faster.
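A quick sanity check of the mean-subtraction trick with toy numbers (illustrative values, not model outputs): shifting all advantages by a constant changes neither the resulting Q-values nor the action ranking, which is what makes the V/A split identifiable.

```python
V = 2.0
A = [0.3, -0.1, 0.5]
mean_A = sum(A) / len(A)
Q = [V + a - mean_A for a in A]

# Shift every advantage by the same constant: the mean subtraction
# absorbs the shift, so Q is unchanged.
A_shifted = [a + 10.0 for a in A]
mean_shifted = sum(A_shifted) / len(A_shifted)
Q_shifted = [V + a - mean_shifted for a in A_shifted]
```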

Experience Replay and Target Network

Two key DQN innovations:

Experience replay buffer:

from collections import deque
import random

import numpy as np
import torch

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return (torch.FloatTensor(np.array(states)),
                torch.LongTensor(actions),
                torch.FloatTensor(rewards),
                torch.FloatTensor(np.array(next_states)),
                torch.FloatTensor(dones))

Target network (frozen copy of Q-network):

# update every C steps
if step % target_update_freq == 0:
    target_net.load_state_dict(online_net.state_dict())

Without a target network, the Q-learning targets shift together with the Q-predictions at every update, which destabilizes training and can cause divergence.
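A common alternative to hard copies every C steps is Polyak (soft) averaging applied every step. This is not part of the snippet above, just a sketch assuming PyTorch modules with identical architectures:

```python
import torch
import torch.nn as nn

def soft_update(online_net, target_net, tau=0.005):
    """In-place Polyak update: target <- tau * online + (1 - tau) * target."""
    with torch.no_grad():
        for p, tp in zip(online_net.parameters(), target_net.parameters()):
            tp.mul_(1.0 - tau).add_(tau * p)

online = nn.Linear(4, 3)
target = nn.Linear(4, 3)
target.load_state_dict(online.state_dict())
soft_update(online, target)  # networks stay equal when they start equal
```

Soft updates trade the abrupt target jumps of hard copying for a smoothly trailing target, at the cost of one extra hyperparameter (tau).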

Training

def train_step(batch, online_net, target_net, optimizer, gamma=0.99):
    states, actions, rewards, next_states, dones = batch

    # current Q-values
    q_values = online_net(states).gather(1, actions.unsqueeze(1))

    # Double DQN: online selects action, target evaluates
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(1)
        next_q = target_net(next_states).gather(1, next_actions.unsqueeze(1))
        target_q = rewards.unsqueeze(1) + gamma * next_q * (1 - dones.unsqueeze(1))

    loss = nn.SmoothL1Loss()(q_values, target_q)  # Huber loss
    optimizer.zero_grad()
    loss.backward()
    nn.utils.clip_grad_norm_(online_net.parameters(), 10)  # gradient clipping
    optimizer.step()
    return loss.item()

Double DQN reduces the overestimation bias caused by the max operator in the original DQN target. In noisy financial environments this is critical: without it, Q-values are systematically overestimated.
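The bias and its fix are easy to see in a toy simulation (all true Q-values are zero, estimates carry unit Gaussian noise): taking the max over one set of noisy estimates is biased upward, while Double-DQN-style decoupling of selection and evaluation is not.

```python
import random

random.seed(0)

def noisy_estimates(n):
    """True Q-values are all zero; estimates carry N(0, 1) noise."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

trials, n_actions = 10_000, 3
single_max = 0.0
double_est = 0.0
for _ in range(trials):
    q1 = noisy_estimates(n_actions)
    q2 = noisy_estimates(n_actions)
    single_max += max(q1)                # vanilla target: biased upward
    double_est += q2[q1.index(max(q1))]  # select with q1, evaluate with q2
single_max /= trials
double_est /= trials
# single_max comes out clearly positive (~0.85), double_est stays near 0
```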

Epsilon-Greedy for Financial Environments

import numpy as np

# Exponentially decaying epsilon
epsilon = max(epsilon_min, epsilon_start * (epsilon_decay ** step))

if np.random.random() < epsilon:
    action = env.action_space.sample()  # random exploration
else:
    with torch.no_grad():
        q_vals = online_net(state_tensor)
        action = q_vals.argmax().item()

Financial epsilon specifics:

  • epsilon_start = 1.0 (full exploration initially)
  • epsilon_min = 0.01 (1% random actions always)
  • Slow decay, since markets are noisier and less stationary than Atari
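A concrete schedule under these settings (the decay constant is an illustrative choice that keeps exploration above the 1% floor for roughly 460k steps):

```python
epsilon_start, epsilon_min, epsilon_decay = 1.0, 0.01, 0.99999

def epsilon_at(step):
    """Exponentially decayed epsilon, clipped at the exploration floor."""
    return max(epsilon_min, epsilon_start * epsilon_decay ** step)

# epsilon_at(0) == 1.0; the 0.01 floor is reached after
# ln(100) / -ln(0.99999) ~= 460k steps.
```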

Rainbow DQN

Rainbow combines all the major DQN improvements: Double DQN, Dueling, Prioritized Experience Replay (PER), multi-step returns, Distributional value heads, and Noisy Networks.

For trading, most valuable are:

  • Distributional (C51/QR-DQN): predicts distribution of returns, not just mean. Risk-aware policy: agent sees not only expected profit but also volatility.
  • Multi-step returns (n=3–5): less sparse reward, better credit assignment.
  • PER: prioritizes rare market events (large moves).
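The multi-step target can be sketched as follows (illustrative helper; the `bootstrap` argument stands in for the tail value max_a Q(s_{t+n}, a) from the target network):

```python
def n_step_return(rewards, gamma=0.99, n=3, bootstrap=0.0):
    """Discounted sum of the first n rewards plus a bootstrapped tail value."""
    g = sum(gamma**i * r for i, r in enumerate(rewards[:n]))
    return g + gamma**n * bootstrap

# With gamma=0.5: 1 + 0.5 + 0.25 + 0.5**3 * 8 = 2.75
n_step_return([1.0, 1.0, 1.0], gamma=0.5, n=3, bootstrap=8.0)
```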

When DQN, When SAC/PPO

DQN is appropriate for: single-asset trading, clear buy/sell signals, and a small discrete action space (3–10 actions) with go/no-go style decisions.

SAC/PPO are preferable for: multi-asset portfolios and continuous position sizing, where the size of the position matters as much as its direction.

Timeline: 4–8 weeks

A basic DQN agent takes 2–3 weeks. Rainbow with PER, distributional heads, and multi-step returns takes 6–8 weeks. Live-trading integration with risk management adds another 3–4 weeks.