# Trading Strategy Parameter Optimization
Writing a strategy is half the work; finding the parameters at which it actually works is the other half. Parameter optimization is not magic: it is a search through a parameter space, paired with correct validation of the results. The main enemy here is overfitting.
## What Is Overfitting and Why It's Critical
A strategy with EMA(9, 21) gives a Sharpe of 1.2 on historical data. The optimizer sweeps all EMA(5–50) combinations and finds EMA(13, 34) with a Sharpe of 2.8. A great result? No: this is overfitting. The parameters are tuned to one specific historical period, and on live data the strategy will perform close to randomly.

Rule: optimize on the train set, validate on a held-out (out-of-sample) test set. If the test-set results are much worse, you have overfit.
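A minimal sketch of such a chronological split (the 70/30 ratio and the synthetic price series are illustrative):

```python
import numpy as np
import pandas as pd

def chronological_split(data: pd.DataFrame, train_frac: float = 0.7):
    """Split time-series data by time: no shuffling, so no look-ahead leakage."""
    cut = int(len(data) * train_frac)
    return data.iloc[:cut], data.iloc[cut:]

# Synthetic daily closes, purely for illustration
prices = pd.DataFrame(
    {'close': 100 + np.random.default_rng(0).normal(0, 1, 1000).cumsum()},
    index=pd.date_range('2020-01-01', periods=1000, freq='D'),
)
train, test = chronological_split(prices)  # 700 train rows, 300 hold-out rows
```

The key property is that every train timestamp precedes every test timestamp; a random split would leak future information into the optimizer.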
## Walk-Forward Optimization
The most reliable method for time series:
```python
import pandas as pd

def walk_forward_optimization(data: pd.DataFrame,
                              strategy_class,
                              param_grid: dict,
                              train_periods: int = 180,  # days
                              test_periods: int = 30) -> list:
    results = []
    start = 0
    while start + train_periods + test_periods <= len(data):
        train = data.iloc[start:start + train_periods]
        test = data.iloc[start + train_periods:start + train_periods + test_periods]

        # Optimize on the train window
        best_params = optimize_on_period(strategy_class, train, param_grid)

        # Validate on the test window (out-of-sample)
        oos_result = run_backtest(strategy_class, test, best_params)

        results.append({
            'period': test.index[0],
            'params': best_params,
            'oos_sharpe': oos_result.sharpe,
            'oos_return': oos_result.total_return,
        })
        start += test_periods  # shift the window forward
    return results
```
Walk-forward: optimize on 6 months, test on the next month, shift forward by a month, repeat. The final result is the median OOS Sharpe across all windows.
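The list returned by `walk_forward_optimization` can then be reduced to that single median figure, which is robust to one lucky or unlucky window:

```python
import statistics

def summarize_walk_forward(results: list) -> float:
    """Median OOS Sharpe across all walk-forward windows."""
    return statistics.median(r['oos_sharpe'] for r in results)

# Five hypothetical windows
windows = [{'oos_sharpe': s} for s in (1.1, 0.4, 0.9, -0.2, 0.7)]
summarize_walk_forward(windows)  # -> 0.7
```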
## Parameter Search Methods

### Grid Search
Complete enumeration of all combinations. Simple, but exponentially expensive with many parameters.
```python
from itertools import product  # handy for enumerating full combinations

import vectorbt as vbt

param_grid = {
    'rsi_period': range(7, 21),     # 14 values
    'rsi_lower': range(20, 40, 5),  # 4 values
    'rsi_upper': range(65, 80, 5),  # 3 values
}
# Total: 14 * 4 * 3 = 168 combinations, acceptable

# vectorbt runs vectorized backtests: 168 combinations in seconds.
# Passing a list of lengths computes the RSI for every period in one call
# (`close` is a price Series defined elsewhere).
RSI = vbt.IndicatorFactory.from_pandas_ta("rsi")
rsi = RSI.run(close, length=list(param_grid['rsi_period']))
```
### Bayesian Optimization
Smarter than grid search: it builds a surrogate model of the objective function and picks the next point by balancing exploration and exploitation, so it needs far fewer iterations to find good results.
```python
import optuna

def objective(trial):
    rsi_period = trial.suggest_int('rsi_period', 5, 30)
    rsi_lower = trial.suggest_int('rsi_lower', 20, 40)
    rsi_upper = trial.suggest_int('rsi_upper', 60, 85)
    result = backtest_strategy(data, rsi_period, rsi_lower, rsi_upper)
    return result.sharpe_ratio  # value to maximize

study = optuna.create_study(direction='maximize',
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=200, n_jobs=4)

print(f"Best params: {study.best_params}")
print(f"Best Sharpe: {study.best_value:.3f}")
```
Optuna is an excellent library for Bayesian optimization: parallel search, pruning (early stopping of unpromising trials), and visualization of parameter importance.
## Optimization Metrics
Don't optimize for total return alone: it rewards taking on more risk. Better objectives:
| Metric | Formula | Comment |
|---|---|---|
| Sharpe Ratio | (Return - Rf) / Std | Gold standard |
| Calmar Ratio | Annual Return / Max Drawdown | Good for trending |
| Sortino Ratio | Return / Downside Std | Penalizes only losses |
| Profit Factor | Gross Profit / Gross Loss | Simple, intuitive |
Combine metrics: `score = sharpe * 0.5 + calmar * 0.3 + win_rate * 0.2`. This reduces the chance of selecting a strategy that is good by one metric and bad by others.
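The weighted blend as a helper, with the weights from the formula above. One caveat: Sharpe, Calmar, and win rate live on different scales, so in practice you would normalize them (e.g. rank or z-score across candidates) before blending:

```python
def combined_score(sharpe: float, calmar: float, win_rate: float) -> float:
    """Weighted blend of metrics using the weights 0.5 / 0.3 / 0.2."""
    return sharpe * 0.5 + calmar * 0.3 + win_rate * 0.2

score = combined_score(1.8, 2.5, 0.55)  # 0.9 + 0.75 + 0.11 -> approx. 1.76
```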
## Parameter Robustness
A good strategy works with small deviations of parameters from optimum. Check:
```python
import pandas as pd

def check_robustness(best_params: dict, data: pd.DataFrame,
                     delta_pct: float = 0.2) -> pd.DataFrame:
    """Check whether the strategy survives parameter deviations up to +/- delta_pct."""
    # With the default delta_pct=0.2 this gives [0.8, 0.9, 1.0, 1.1, 1.2]
    multipliers = [1 - delta_pct, 1 - delta_pct / 2, 1.0,
                   1 + delta_pct / 2, 1 + delta_pct]
    results = []
    for param, value in best_params.items():
        for multiplier in multipliers:
            test_params = best_params.copy()
            test_params[param] = int(value * multiplier)
            result = backtest_strategy(data, **test_params)
            results.append({'param': param, 'multiplier': multiplier,
                            'sharpe': result.sharpe})
    return pd.DataFrame(results)
```
If Sharpe drops sharply when the RSI period changes from 14 to 13 or 15, that is a sign of overfitting. A good strategy shows a smooth sensitivity curve.
## Optimization Process
- Data collection: minimum 3–5 years history, different market regimes (bull, bear, sideways)
- Split: 70% train, 30% test (chronologically, not randomly)
- Optimization on train: grid/Bayesian, 100–500 iterations
- Validation on test: if OOS Sharpe < 50% of IS Sharpe — likely overfitting
- Walk-forward check: additional stability validation
- Sensitivity analysis: check parameter robustness
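The rule of thumb from the validation step as a tiny helper (the 50% retention threshold is the one stated above):

```python
def is_overfit(is_sharpe: float, oos_sharpe: float, threshold: float = 0.5) -> bool:
    """Flag likely overfitting: OOS Sharpe retains less than
    `threshold` (default 50%) of the in-sample Sharpe."""
    if is_sharpe <= 0:
        return True  # strategy doesn't even work in-sample
    return oos_sharpe < is_sharpe * threshold

is_overfit(2.8, 0.9)   # True: OOS kept only ~32% of IS Sharpe
is_overfit(1.4, 0.9)   # False: ~64% retained
```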
Parameter optimization takes 2–4 weeks: data collection and preparation, optimization framework implementation, walk-forward testing, and documentation.