AI Service Robot Navigation System (SLAM)

We design and deploy artificial intelligence systems, from prototypes to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not just in the lab, but in real business settings.

AI System for Service Robot Navigation (SLAM)

Simultaneous Localization and Mapping (SLAM) is a key technological problem for any mobile robot operating without GPS in enclosed spaces. Modern SLAM combined with deep reinforcement learning lets a robot build a map of its environment, localize itself on that map with 2-5 cm accuracy, and make navigation decisions in dynamic environments with people, carts, and changing object arrangements.

SLAM System Architecture

Modern implementations use a factor-graph approach. Two main branches have emerged:

LiDAR-based SLAM

  • SLAM algorithm: Cartographer (Google) or LOAM/LeGO-LOAM for 3D sensors
  • Sensors: Velodyne VLP-16, Ouster OS1, Livox Mid-360
  • Map update frequency: 10-20 Hz
  • Localization accuracy: 2-5 cm in static environment

Visual SLAM (vSLAM)

  • ORB-SLAM3, OpenVINS for stereo/monocular cameras
  • Intel RealSense D435i, Zed 2 as main platforms
  • Fusion with IMU via EKF (Extended Kalman Filter)
  • Works when LiDAR fails (smoke, bright light)
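
The IMU fusion step above can be sketched as a minimal EKF over a planar pose; the noise covariances here are illustrative assumptions, not tuned values:

```python
# Minimal EKF sketch for fusing IMU/odometry motion with vSLAM pose fixes.
# Q and R are illustrative assumptions; real values come from sensor calibration.
import numpy as np

class PoseEKF:
    def __init__(self):
        self.x = np.zeros(3)                    # state: [x, y, theta]
        self.P = np.eye(3) * 0.1                # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])    # process noise (assumed)
        self.R = np.diag([0.05, 0.05, 0.02])    # vSLAM measurement noise (assumed)

    def predict(self, v, w, dt):
        """Propagate the state with velocities integrated from IMU/odometry."""
        theta = self.x[2]
        self.x += np.array([v * np.cos(theta) * dt,
                            v * np.sin(theta) * dt,
                            w * dt])
        F = np.array([[1, 0, -v * np.sin(theta) * dt],
                      [0, 1,  v * np.cos(theta) * dt],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        """Correct the state with an absolute pose from the vSLAM front end."""
        H = np.eye(3)                           # vSLAM observes the full pose
        y = z - self.x                          # innovation
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi   # wrap angle difference
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P
```

Each vSLAM fix shrinks the covariance, while IMU-only stretches between fixes let it grow, which is exactly the behavior that bridges LiDAR dropouts.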

In practice, a hybrid approach is common: LiDAR SLAM as the primary source, with vSLAM as a backup and for visual verification.

Deep RL for Navigation

Classical planners (A*, Dijkstra, RRT) work well in static environments. The problem is dynamic obstacles: people, moving carts, and fellow robots. This is where RL comes in.

Task formalization:

  • State: 64×64 local occupancy grid around the robot + current velocity + vector to the goal
  • Actions: linear velocity [0, 0.8 m/s], angular velocity [-1.0, 1.0 rad/s]
  • Reward: progress toward the goal, minus penalties for proximity to obstacles and for stopping
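
This formalization can be sketched in Python; the reward weights and observation layout below are illustrative assumptions, not tuned values:

```python
# Sketch of the state packing and reward shaping described above.
# All weights and thresholds are illustrative assumptions.
import numpy as np

def pack_observation(occupancy_grid, v, w, goal_vec):
    """Flatten the 64x64 local grid and append velocity and goal vector."""
    assert occupancy_grid.shape == (64, 64)
    return np.concatenate([occupancy_grid.ravel(),
                           [v, w],
                           goal_vec]).astype(np.float32)

def navigation_reward(dist_to_goal_prev, dist_to_goal, min_obstacle_dist, speed):
    r = 2.0 * (dist_to_goal_prev - dist_to_goal)   # progress toward the goal
    if min_obstacle_dist < 0.5:                    # proximity penalty zone
        r -= 0.5 * (0.5 - min_obstacle_dist)
    if speed < 0.05:                               # discourage freezing in place
        r -= 0.1
    return r
```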

Algorithm: SAC (Soft Actor-Critic), which offers the best exploration/exploitation balance for continuous action spaces. Training runs in a Gazebo or Isaac Sim simulation; sim-to-real transfer relies on domain randomization.
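
Two core ingredients of SAC can be sketched in plain numpy: the Polyak soft update of the target critics and the entropy-regularized Bellman target. Purely illustrative; tau, gamma, and alpha are assumed defaults, not tuned values:

```python
# Core ingredients of SAC sketched in numpy. Not a full agent: the actor,
# critics, and replay buffer are omitted for brevity.
import numpy as np

def polyak_update(target, online, tau=0.005):
    """Soft update for SAC target critics: target <- tau*online + (1-tau)*target."""
    return tau * online + (1 - tau) * target

def soft_value_target(reward, next_q, next_logp, gamma=0.99, alpha=0.2):
    """Bellman target with entropy bonus: r + gamma * (Q' - alpha * log pi)."""
    return reward + gamma * (next_q - alpha * next_logp)
```

The entropy term (`-alpha * next_logp`) is what keeps the policy exploring during the millions of simulation steps mentioned later in the pipeline.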

Technology Stack

| Level      | Components                                      |
|------------|-------------------------------------------------|
| Hardware   | Husarion ROSbot, Clearpath Husky, custom platform |
| Middleware | ROS2 Humble, Nav2                               |
| SLAM       | Cartographer / ORB-SLAM3                        |
| Planning   | Nav2 + RL policy for local planning             |
| Inference  | NVIDIA Jetson AGX Orin / x86 + GPU              |
| Fleet      | ROS2 Fleet Management, Formant                  |

Work in Dynamic Environment

The key task is predicting human trajectories for socially acceptable navigation; common choices are the Social Force Model and its neural-network extensions, Social LSTM and DESIRE.

Social navigation metrics:

  • Personal Space Intrusion (PSI): fraction of time spent closer than 0.5 m to a person
  • Path Efficiency: ratio of the actual path length to the optimal one
  • Freeze Ratio: fraction of time the robot spends frozen in front of people
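
Given a logged trajectory, these metrics can be computed in a few lines; the array shapes and thresholds below are assumptions for illustration:

```python
# Computing the social-navigation metrics above from a logged trajectory.
# Shapes and thresholds are illustrative assumptions.
import numpy as np

def social_metrics(robot_xy, people_xy, path_len_optimal, speeds,
                   psi_radius=0.5, freeze_speed=0.05):
    """robot_xy: (T, 2); people_xy: (T, P, 2); speeds: (T,)."""
    # Distance from the robot to every tracked person at each timestep.
    dists = np.linalg.norm(people_xy - robot_xy[:, None, :], axis=2)  # (T, P)
    psi = np.mean(dists.min(axis=1) < psi_radius)
    # Actual path length vs. the optimal one (ratio >= 1, lower is better).
    steps = np.diff(robot_xy, axis=0)
    actual_len = np.linalg.norm(steps, axis=1).sum()
    path_efficiency = actual_len / max(path_len_optimal, 1e-9)
    freeze_ratio = np.mean(speeds < freeze_speed)
    return {"PSI": psi, "path_efficiency": path_efficiency,
            "freeze_ratio": freeze_ratio}
```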

For a service robot in a restaurant or hotel, PSI should be below 1%; otherwise users perceive the robot as aggressive.

Multi-robot Coordination

Multiple robots operating in the same area create deadlock situations. Solutions:

  • Centralized: planning server (CBS — Conflict-Based Search) + ROS2 Nav2 Multi-robot
  • Decentralized: ORCA (Optimal Reciprocal Collision Avoidance) — each robot independently resolves conflicts
  • Hybrid: zone division + local ORCA
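
The reciprocal idea behind ORCA can be illustrated with a drastically simplified sketch; a production system would use a full implementation such as the RVO2 library. Here each robot independently applies half of the avoidance effort when another robot is within a conflict radius:

```python
# Drastically simplified sketch of the reciprocal principle behind ORCA:
# when two robots are in conflict, each contributes half of the correction,
# so neither has to yield completely. Not a full ORCA implementation.
import numpy as np

def reciprocal_avoid(pos_a, vel_a, pos_b, safe_dist=1.0, gain=0.5):
    """Return an adjusted velocity for robot A; robot B runs the same rule."""
    offset = pos_a - pos_b
    dist = np.linalg.norm(offset)
    if dist == 0 or dist >= safe_dist:
        return vel_a                      # no conflict: keep preferred velocity
    away = offset / dist                  # unit vector pointing away from B
    # gain=0.5 encodes the "reciprocal" half-share of the avoidance effort.
    correction = gain * (safe_dist - dist) * away
    return vel_a + correction
```

Because both robots apply the same symmetric rule, their corrections are complementary, which is what prevents the oscillation that plagues purely egocentric avoidance.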

For a warehouse with 10-20 robots, centralized CBS is recommended; for open shopping areas, decentralized ORCA with soft priority zones.

Development Pipeline

Phase 1 (weeks 1-6): selection and tuning of the SLAM algorithm for the specific sensor package; mapping the test premises and evaluating localization accuracy.

Phase 2 (weeks 7-14): creating a simulation environment in Isaac Sim with real CAD models of the premises; training the RL navigation agent over 20-50M simulation steps.

Phase 3 (weeks 15-20): sim-to-real transfer on the physical robot. Domain randomization: random sensor delays, odometry noise, and random furniture placement.
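
The randomization knobs from Phase 3 can be sketched as a parameter sampler drawn once per simulation episode; the ranges are illustrative assumptions, not values used on a specific robot:

```python
# Sketch of per-episode domain randomization for sim-to-real transfer.
# Ranges are illustrative assumptions and would be tuned per deployment.
import random

def sample_sim_params(rng=random):
    return {
        "sensor_delay_s": rng.uniform(0.00, 0.12),    # random sensor latency
        "odom_noise_std": rng.uniform(0.005, 0.05),   # odometry noise level
        "furniture_shift_m": rng.uniform(0.0, 1.5),   # furniture displacement
        "lidar_dropout_p": rng.uniform(0.0, 0.05),    # fraction of dropped beams
    }
```

Resampling these parameters every episode forces the policy to stay robust across the whole range rather than overfitting to one simulator configuration.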

Phase 4 (weeks 21-26): Fleet management, monitoring, integration with operations systems (PMS for hotels, WMS for warehouses).

Final production system metrics: mission success rate above 97%, average movement speed of 0.4-0.6 m/s in crowded places, and 8-12 hours of autonomous operation per charge.