Horovod Distributed Training Setup

Horovod is a distributed training framework originally developed at Uber that supports TensorFlow, Keras, PyTorch, and MXNet. Its key advantage is a unified API across frameworks and an optimized ring-allreduce implementation for gradient aggregation.

Installation

# Dependencies
apt install -y g++ openmpi-bin libopenmpi-dev

# Install with NCCL and Gloo support
HOROVOD_GPU_OPERATIONS=NCCL pip install "horovod[tensorflow,keras,pytorch,mxnet]"

# Verify the build
horovodrun --check-build
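
Beyond --check-build, a quick smoke test is to allreduce a tensor across workers. A minimal sketch with the PyTorch binding (save it as, say, check.py and run with horovodrun -np 2 python check.py; the file name is arbitrary):

import torch
import horovod.torch as hvd

hvd.init()

# Each worker contributes its rank; hvd.allreduce averages across workers by default
x = torch.tensor([float(hvd.rank())])
avg = hvd.allreduce(x, name="smoke_test")
print(f"rank {hvd.rank()}: average = {avg.item()}")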

Integration with PyTorch

import torch
import horovod.torch as hvd

# Initialize Horovod
hvd.init()

# Pin each process to one GPU by local rank
torch.cuda.set_device(hvd.local_rank())

# Scale the learning rate proportionally to the number of GPUs
lr = 1e-3 * hvd.size()
optimizer = torch.optim.SGD(model.parameters(), lr=lr)

# Wrap the optimizer: adds all-reduce of gradients
optimizer = hvd.DistributedOptimizer(
    optimizer,
    named_parameters=model.named_parameters(),
    compression=hvd.Compression.fp16  # gradient compression
)

# Broadcast initial weights from rank 0 to all GPUs
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# Save checkpoints only on rank 0
if hvd.rank() == 0:
    torch.save(model.state_dict(), "model.pt")
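
The snippet above leaves out data loading: each worker should read its own shard of the dataset. A minimal sketch using PyTorch's DistributedSampler (dataset, num_epochs, and train_step are placeholders from the surrounding script):

from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# Partition the dataset so every worker sees a distinct shard
sampler = DistributedSampler(
    dataset, num_replicas=hvd.size(), rank=hvd.rank()
)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # reshuffle shards differently each epoch
    for batch in loader:
        train_step(batch)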

Launch

# Single node, 4 GPUs
horovodrun -np 4 -H localhost:4 python train.py

# Multiple nodes
horovodrun -np 16 -H server1:8,server2:8 \
  --network-interface eth0 \
  python train.py

# Via MPI
mpirun -np 16 \
  -H server1:8,server2:8 \
  -bind-to none -map-by slot \
  -x NCCL_DEBUG=INFO \
  -x LD_LIBRARY_PATH \
  python train.py

Horovod Elastic Training

Elastic training allows nodes to be added and removed dynamically while training runs, without restarting the job:

import horovod.torch as hvd

hvd.init()

@hvd.elastic.run
def train(state):
    # state.epoch and state.batch persist across worker resets
    for state.epoch in range(state.epoch, num_epochs):
        for state.batch, batch in enumerate(
            get_loader(state.epoch, state.batch), state.batch
        ):
            train_step(batch)
            state.commit()  # checkpoint the state after each batch

state = hvd.elastic.TorchState(
    model=model,
    optimizer=optimizer,
    epoch=0,
    batch=0
)

train(state)
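
An elastic job is launched with minimum and maximum worker counts and a host discovery script; discover_hosts.sh below is a placeholder for a script that prints the currently available hosts, one hostname:slots entry per line:

horovodrun -np 8 --min-np 4 --max-np 16 \
  --host-discovery-script ./discover_hosts.sh \
  python train.py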

Timeline profiler

Horovod includes a built-in profiler for analyzing communication overhead:

HOROVOD_TIMELINE=timeline.json horovodrun -np 4 python train.py
# Open chrome://tracing and load timeline.json

The timeline shows the duration of each allreduce operation, which helps locate bottlenecks: layers with slow synchronization.
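
If the trace is hard to read, Horovod can additionally mark its tensor-fusion cycles in the timeline via the HOROVOD_TIMELINE_MARK_CYCLES variable:

HOROVOD_TIMELINE=timeline.json HOROVOD_TIMELINE_MARK_CYCLES=1 \
  horovodrun -np 4 python train.py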

Comparison with alternatives

Horovod was the dominant distributed training framework before PyTorch DDP and DeepSpeed appeared. Today, for new PyTorch projects, PyTorch DDP (native integration) or DeepSpeed (for large models) is usually the better choice. Horovod remains relevant for:

  • Existing TensorFlow codebases with distributed training
  • Multi-framework environments (PyTorch + TensorFlow simultaneously)
  • Environments with MPI infrastructure (HPC clusters with SLURM); a launch sketch follows below
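
On a SLURM cluster, Horovod processes are typically started through the scheduler's MPI integration. A hypothetical sbatch fragment, assuming Horovod was built with MPI support and the cluster has PMIx enabled (the resource numbers are illustrative):

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
#SBATCH --gpus-per-node=8

# One task per GPU; srun starts the MPI processes that Horovod attaches to
srun --mpi=pmix python train.py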

When migrating from Horovod to PyTorch DDP, the main changes are replacing hvd.DistributedOptimizer with torch.nn.parallel.DistributedDataParallel and launching with torchrun instead of horovodrun.
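
A minimal sketch of the equivalent DDP setup, assuming the same model as in the Horovod script above:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

# DDP wraps the model instead of the optimizer
model = DDP(model.cuda(), device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3 * dist.get_world_size())

# Launch: torchrun --nproc_per_node=4 train.py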