NVIDIA Isaac Sim Setup for AI Robot Simulation


NVIDIA Isaac Sim is a photorealistic robot simulator based on Omniverse, using RTX rendering and the PhysX physics engine. It enables training AI models in simulation and transferring them to real robots (sim-to-real transfer), dramatically reducing development time and costs.

Installation and requirements

Minimum requirements: NVIDIA RTX 3070+ GPU, CUDA 12.x, 32GB RAM, 100GB SSD. Recommended: NVIDIA A6000 or A100 for batch rendering.
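
Before installing, local resources can be sanity-checked with a short preflight sketch. The thresholds mirror the requirements above; GPU and CUDA checks would need `nvidia-smi` and are omitted here, and the RAM probe is POSIX-only:

```python
import os
import shutil

def preflight_check(min_ram_gb=32, min_disk_gb=100, path="."):
    """Rough local-resource check before installing Isaac Sim."""
    # Free disk space at the intended install path
    disk_ok = shutil.disk_usage(path).free / 1e9 >= min_disk_gb
    # Total RAM via sysconf (POSIX only; None if unavailable)
    try:
        ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
    except (ValueError, OSError, AttributeError):
        ram_gb = None
    ram_ok = ram_gb is None or ram_gb >= min_ram_gb
    return {"disk_ok": disk_ok, "ram_ok": ram_ok, "ram_gb": ram_gb}
```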

# Install via Omniverse Launcher,
# or via pip for headless mode
pip install isaacsim-rl isaacsim-replicator \
    isaacsim-robot isaacsim-sensor

# Verify the installation
python -c "import omni.isaac.core; print('Isaac Sim OK')"

Creating a robotic environment for RL training

from omni.isaac.core import World
from omni.isaac.core.objects import DynamicCuboid
from omni.isaac.core.robots import Robot
from omni.isaac.gym.vec_envs import VecEnvBase
import numpy as np

class ManipulatorEnv(VecEnvBase):
    """Pick-and-place среда для обучения манипулятора"""

    def __init__(self, headless: bool = True):
        super().__init__("/World", enable_livestream=not headless)
        self.world = World(stage_units_in_meters=1.0)

    def setup_scene(self):
        # Load the robot's URDF/USD (Franka Panda)
        self.robot = self.world.scene.add(
            Robot(
                prim_path="/World/Franka",
                name="franka",
                usd_path="/Isaac/Robots/Franka/franka.usd"
            )
        )

        # Object to grasp
        self.world.scene.add_default_ground_plane()
        self.cube = self.world.scene.add(
            DynamicCuboid(
                prim_path="/World/Cube",
                position=np.array([0.5, 0.0, 0.1]),
                size=np.array([0.05, 0.05, 0.05])
            )
        )

    def get_observations(self) -> dict:
        robot_obs = self.robot.get_joint_positions()
        cube_pos = self.cube.get_world_pose()[0]
        ee_pos = self.robot.get_world_pose()[0]  # base pose; use the end-effector prim in practice

        return {
            "joint_positions": robot_obs,
            "cube_position": cube_pos,
            "end_effector_position": ee_pos,
            "distance_to_cube": np.linalg.norm(ee_pos - cube_pos)
        }

    def compute_reward(self) -> float:
        obs = self.get_observations()
        distance = obs["distance_to_cube"]

        reward = -distance  # Distance penalty
        if distance < 0.02:  # Successful grasp
            reward += 10.0
        return reward
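
The shaping logic in `compute_reward` can be factored into a pure function and unit-tested without launching the simulator; a minimal sketch, with the threshold and bonus mirroring the values above:

```python
import numpy as np

def pick_reward(ee_pos, cube_pos, success_threshold=0.02, bonus=10.0):
    # Dense distance penalty plus a sparse bonus on a successful grasp
    distance = float(np.linalg.norm(np.asarray(ee_pos) - np.asarray(cube_pos)))
    reward = -distance
    if distance < success_threshold:
        reward += bonus
    return reward
```

Keeping the reward as a pure function of observations makes it easy to tune the shaping terms offline before paying for simulator time.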

Synthetic Data Generation (Replicator)

Isaac Replicator generates synthetic training data with automatic labeling:

import omni.replicator.core as rep

# Generate 10,000 images for training an object detector
with rep.new_layer():
    # Ground surface to scatter objects onto
    ground = rep.create.plane(scale=10)

    # Random objects
    objects = rep.create.from_usd("/Isaac/Props/YCB/Filtered/")

    # Random lighting
    lights = rep.create.light(
        light_type="sphere",
        color=rep.distribution.uniform((0.5, 0.5, 0.5), (1, 1, 1)),
        position=rep.distribution.uniform((-5, -5, 5), (5, 5, 10))
    )

    # Random backgrounds (domain randomization)
    with rep.trigger.on_frame(num_frames=10000):
        with objects:
            rep.randomizer.scatter_2d(surface_prims=[ground])
            rep.randomizer.texture(
                textures=rep.utils.get_usd_files("/Isaac/Environments/")
            )

    # Camera with a render product
    camera = rep.create.camera(position=(0, 0, 3))
    render_product = rep.create.render_product(camera, (1280, 720))

    # Annotations: RGB frames and tight 2D bounding boxes
    rep.annotators.get("rgb").attach(render_product)
    rep.annotators.get("bounding_box_2d_tight").attach(render_product)

rep.orchestrator.run()
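
The attached annotators emit per-frame bounding-box arrays (typically via a writer such as `BasicWriter`), and downstream training usually wants them normalized. A hedged sketch converting pixel-space `(x_min, y_min, x_max, y_max)` boxes — this layout is an assumption about the export format — to YOLO-style rows:

```python
import numpy as np

def boxes_to_yolo(boxes, img_w, img_h):
    """Convert (x_min, y_min, x_max, y_max) pixel boxes to
    YOLO (cx, cy, w, h) rows normalized to [0, 1]."""
    boxes = np.asarray(boxes, dtype=float).reshape(-1, 4)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0 / img_w
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0 / img_h
    w = (boxes[:, 2] - boxes[:, 0]) / img_w
    h = (boxes[:, 3] - boxes[:, 1]) / img_h
    return np.stack([cx, cy, w, h], axis=1)
```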

Sim-to-Real Transfer

The key technique is Domain Randomization: training with randomized physical parameters (object mass, friction, lighting, sensor noise). This makes the policy robust to real-world variations.

# Randomize simulation parameters at each episode reset
# (plain np.random here: rep.distribution values are graph nodes, not floats)
cube.set_mass(np.random.uniform(0.05, 0.5))  # vary object mass (kg)
cube.get_applied_physics_material().set_dynamic_friction(
    np.random.uniform(0.3, 0.9)              # vary dynamic friction
)
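
Per-episode domain randomization is often organized as a single sampled parameter dictionary applied at reset. A sketch — the mass and friction ranges mirror the snippet above, while the remaining parameter names and ranges are illustrative:

```python
import numpy as np

def sample_physics_params(rng=None):
    """Draw one set of randomized parameters per episode reset."""
    rng = rng if rng is not None else np.random.default_rng()
    return {
        "mass_kg": rng.uniform(0.05, 0.5),          # object mass
        "dynamic_friction": rng.uniform(0.3, 0.9),  # contact friction
        "light_intensity": rng.uniform(0.5, 1.5),   # lighting scale (illustrative)
        "obs_noise_std": rng.uniform(0.0, 0.01),    # sensor noise (illustrative)
    }

# At reset: params = sample_physics_params()
# then apply, e.g., cube.set_mass(params["mass_kg"])
```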

A typical result: a policy trained for 50M steps in Isaac Sim (about 24 hours on 8x A100) achieves an 85-90% success rate on pick-and-place when transferred to a real robot, with no additional training on real hardware.