Setting Up a GPU Server for AI Development: CUDA, PyTorch, TensorFlow

Setting up a GPU server for AI development takes 2-4 hours of focused work and saves days of "it works in the cloud but not locally" debugging. The key components: matching NVIDIA driver, CUDA, and cuDNN versions; isolated Python environments; and GPU monitoring tools.

Minimum stack

# 1. NVIDIA Driver (Ubuntu 22.04)
sudo apt install -y nvidia-driver-545
sudo reboot

# 2. CUDA 12.2 (apt-key is deprecated on Ubuntu 22.04; use NVIDIA's keyring package instead)
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt update
sudo apt install -y cuda-toolkit-12-2

# 3. Add CUDA to PATH in ~/.bashrc
echo 'export PATH=/usr/local/cuda-12.2/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc

# 4. Verify the installation
nvidia-smi && nvcc --version
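After the PATH changes above, a quick stdlib-only Python check can confirm the CUDA binaries are actually resolvable (a minimal sketch; the helper name is illustrative, the tool names are the ones installed above):

```python
import shutil

def cuda_tools_on_path(tools=("nvidia-smi", "nvcc")):
    """Map each tool name to its resolved path, or None if not found."""
    return {tool: shutil.which(tool) for tool in tools}

if __name__ == "__main__":
    for tool, path in cuda_tools_on_path().items():
        print(f"{tool}: {path or 'NOT FOUND - check PATH in ~/.bashrc'}")
```

If either tool resolves to None, re-check the export lines in ~/.bashrc before moving on to the frameworks.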

Conda environments for different frameworks

# PyTorch
conda create -n pytorch python=3.11 -y
conda activate pytorch
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# TensorFlow
conda create -n tensorflow python=3.11 -y
conda activate tensorflow
pip install tensorflow[and-cuda]==2.15.0

# Check GPU availability
python -c "import torch; print('PyTorch GPU:', torch.cuda.get_device_name(0))"
python -c "import tensorflow as tf; print('TF GPUs:', tf.config.list_physical_devices('GPU'))"
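The cu121 tag in the pip command above is just the CUDA version with the dot removed; a tiny helper (a sketch; the URL pattern follows the official PyTorch wheel index) makes it easy to build the right --index-url for whatever CUDA version your driver supports:

```python
def pytorch_index_url(cuda_version: str) -> str:
    """Build the PyTorch wheel index URL for a CUDA version like '12.1'.

    The cuXYZ tag is the CUDA major.minor with the dot removed,
    e.g. 12.1 -> cu121, 11.8 -> cu118.
    """
    major, minor = cuda_version.split(".")[:2]
    return f"https://download.pytorch.org/whl/cu{major}{minor}"

print(pytorch_index_url("12.1"))  # https://download.pytorch.org/whl/cu121
```

Note that not every CUDA version has prebuilt wheels; check the official install selector for the currently published tags.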

GPU monitoring

# Install nvtop (htop for GPUs)
sudo apt install -y nvtop
nvtop  # interactive monitoring

# gpustat: compact per-GPU summary
pip install gpustat
gpustat --watch  # refreshes every second
watch -n 1 nvidia-smi  # the classic option
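For logging utilization to a file or dashboard, nvidia-smi's CSV query mode is easier to script than the human-readable output. A minimal stdlib parser (a sketch; the sample string mimics the output of the query shown in the comment) could look like this:

```python
import csv
import io

# Produced by:
#   nvidia-smi --query-gpu=index,utilization.gpu,memory.used,memory.total \
#              --format=csv,noheader,nounits
SAMPLE = "0, 87, 10240, 40960\n1, 12, 2048, 40960\n"

def parse_gpu_stats(text):
    """Parse nvidia-smi CSV rows into dicts with integer fields."""
    rows = []
    for rec in csv.reader(io.StringIO(text)):
        idx, util, used, total = (int(x.strip()) for x in rec)
        rows.append({"gpu": idx, "util_pct": util,
                     "mem_used_mib": used, "mem_total_mib": total})
    return rows

for gpu in parse_gpu_stats(SAMPLE):
    print(gpu)
```

In practice you would feed the function the stdout of a subprocess call to nvidia-smi instead of the sample string.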

Performance optimization

Persistence mode keeps the NVIDIA driver loaded, eliminating the delay on first GPU access:

sudo nvidia-smi -pm 1
# Add to /etc/rc.local (or a systemd unit) to re-enable at boot

sudo nvidia-smi --auto-boost-default=0 disables auto-boost for deterministic benchmark results. For maximum throughput, lock application clocks with sudo nvidia-smi -ac 1215,1410 (memory and graphics clocks in MHz; these values suit the A100, so check nvidia-smi -q -d SUPPORTED_CLOCKS for other GPUs).
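Since the persistence-mode setting does not survive a reboot, a small systemd unit is a cleaner alternative to /etc/rc.local for re-applying it at boot (a sketch; the unit name and file path are illustrative):

```ini
# /etc/systemd/system/nvidia-persistence.service
[Unit]
Description=Enable NVIDIA persistence mode
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi -pm 1

[Install]
WantedBy=multi-user.target
```

Enable it once with sudo systemctl enable --now nvidia-persistence.service; on distributions that ship nvidia-persistenced, enabling that daemon's own unit achieves the same result.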