AI Robotics System

We design and deploy artificial intelligence systems, from prototype to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not just in the lab but in real business operations.

AI system for robotics

An industrial manipulator with classical trajectory planning performs a single pre-programmed task in 0.3 seconds, reproducibly. The same arm with an ML-based perception and grasp-planning system picks a random, unfamiliar object out of a bin with a 91% first-attempt success rate. That is the gap between programmable automation and AI robotics.

Perception: What the robot sees

6DoF Pose Estimation

To grasp an object, the manipulator must know its precise position and orientation (6 degrees of freedom). Hardware: an RGB-D camera (Intel RealSense D435, Azure Kinect) plus an RGB-D dataset of the specific parts. Methods:

  • FoundationPose (NVIDIA): a universal model that works from a single reference image or CAD model without additional training. Accuracy: <5 mm translation, <5° rotation on the YCB-V dataset.
  • Training from scratch: DOPE (Deep Object Pose Estimation) or GDR-Net, which are more accurate on specific parts but require a synthetic dataset with domain randomization (BlenderProc).

The domain gap is the main problem: the model is trained on synthetic data but deployed under real factory lighting. Domain randomization (random textures, lighting, backgrounds) plus light fine-tuning on 200–500 annotated real frames closes the gap.
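The randomization idea can be sketched in a few lines. The ranges below are illustrative placeholders, and a production pipeline would randomize at render time (e.g. inside BlenderProc) rather than post hoc on finished images:

```python
import numpy as np

def randomize_frame(rgb: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple domain randomization to a synthetic RGB frame (H, W, 3 uint8).

    Randomizes global brightness, per-channel color balance, and sensor noise,
    so the pose model cannot overfit to the renderer's fixed lighting.
    """
    img = rgb.astype(np.float32)
    img *= rng.uniform(0.5, 1.5)                 # random global brightness
    img *= rng.uniform(0.8, 1.2, size=3)         # random color balance per channel
    img += rng.normal(0.0, 5.0, size=img.shape)  # additive sensor noise
    return np.clip(img, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = np.full((64, 64, 3), 128, dtype=np.uint8)  # stand-in for a rendered frame
augmented = randomize_frame(frame, rng)
```

Each training sample gets a fresh draw, so no two frames share the same lighting statistics.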

Bin Picking with 3D point cloud

Grasping parts from an unordered bin: Open3D + PointNet++ segment individual parts in the point cloud. Grasp prediction: GraspNet-1Billion or Contact-GraspNet predicts 6DoF grasp poses with antipodal-constraint checking via a collision graph. On steel parts (shiny surfaces, sensor noise), the point cloud is additionally cleaned with Statistical Outlier Removal and normal estimation.
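The cleaning step can be sketched in plain numpy, mirroring what Open3D's statistical outlier removal and PCA-based normal estimation do. Brute-force distances, so this is only for small clouds; the k and std_ratio values are illustrative:

```python
import numpy as np

def statistical_outlier_removal(points, k=8, std_ratio=2.0):
    """Drop points whose mean k-NN distance is an outlier (brute-force sketch)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, 1:k + 1]          # skip the zero self-distance
    mean_knn = knn.mean(axis=1)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

def estimate_normal(points):
    """Surface normal of a local patch: eigenvector of the covariance matrix
    with the smallest eigenvalue (classic PCA normal estimation)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return eigvecs[:, 0]

# Noisy planar patch plus one far-away sensor artifact
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50),
                         rng.normal(0, 0.001, 50)])
cloud = np.vstack([plane, [[5.0, 5.0, 5.0]]])
clean = statistical_outlier_removal(cloud)        # artifact point is removed
normal = estimate_normal(clean)                   # close to ±(0, 0, 1) for a flat patch
```

On a real cell this runs per segmented part, after which the grasp network consumes the cleaned, normal-equipped cloud.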

Motion Planning with ML

Learning from Demonstration (LfD)

The operator demonstrates the task once, guiding the manipulator's arm manually (kinesthetic teaching) or via a VR interface. The algorithm records the trajectories and generalizes them using a Gaussian Mixture Model (GMM) + Gaussian Mixture Regression (GMR), or imitation learning (BC, GAIL). Playback then adapts to variations: small changes in the part's position require no reprogramming.
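The regression half of GMM + GMR can be illustrated assuming the mixture has already been fitted (via EM) over joint (time, position) pairs. The two components below are hand-tuned for a 1-D trajectory rising from 0 to 1, not learned from real demonstrations:

```python
import numpy as np

def gmr_predict(t, weights, means, covs):
    """Gaussian Mixture Regression: E[x | t] for a GMM over joint (t, x) pairs.

    means is (K, 2) with (t, x) centers, covs is (K, 2, 2). Component fitting
    is assumed to have happened elsewhere; this is only the conditioning step.
    """
    mu_t, mu_x = means[:, 0], means[:, 1]
    s_tt, s_xt = covs[:, 0, 0], covs[:, 1, 0]
    # responsibility of each component for this t (1-D Gaussian density)
    h = weights * np.exp(-0.5 * (t - mu_t) ** 2 / s_tt) / np.sqrt(s_tt)
    h /= h.sum()
    # per-component conditional mean, blended by responsibility
    cond = mu_x + s_xt / s_tt * (t - mu_t)
    return float(h @ cond)

weights = np.array([0.5, 0.5])
means = np.array([[0.25, 0.0], [0.75, 1.0]])      # (t, x) component centers
covs = np.tile(np.array([[0.02, 0.0], [0.0, 0.02]]), (2, 1, 1))
x_mid = gmr_predict(0.5, weights, means, covs)    # = 0.5 by symmetry
```

Querying `gmr_predict` along a dense time grid reproduces a smooth trajectory that blends the demonstrated motions.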

Reinforcement Learning for Complex Manipulations

Tasks where trajectory planning doesn't work: inserting a connector (peg-in-hole, 0.1 mm tolerance), screwing without stripping the threads, and moving fragile objects. Sim-to-Real: training in Isaac Gym (NVIDIA) or MuJoCo with randomized friction, mass, and geometry. Transfer to a real robot via domain randomization and minor real-world fine-tuning.
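The randomization side of sim-to-real amounts to each training episode sampling its own physics parameters. A minimal sketch; the ranges are illustrative placeholders, not measured tolerances:

```python
import random

def sample_physics_params(rng: random.Random) -> dict:
    """Draw one episode's randomized physics for peg-in-hole training.

    Ranges are illustrative; in practice they come from measured part and
    fixture tolerances, and the simulator applies them before each reset.
    """
    return {
        "friction": rng.uniform(0.4, 1.2),        # surface friction coefficient
        "peg_mass_kg": rng.uniform(0.05, 0.15),   # part-to-part mass variation
        "hole_offset_mm": rng.uniform(-0.5, 0.5), # fixture placement error
    }

rng = random.Random(0)
params = sample_physics_params(rng)
```

A policy that succeeds across all of these draws has a much better chance of transferring to the one real-world setting.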

On the industrial connector insertion task, SAC (Soft Actor-Critic) achieves a 95% success rate after 2M simulation steps + 2 hours of real-world training.

Force/Torque control

A force-torque sensor (ATI Mini45, Robotiq FT300) combined with ML detects assembly anomalies in real time: if the insertion force deviates from the expected profile, the part is likely misoriented, and motion stops before damage occurs.

An LSTM over the time series of Fx, Fy, Fz, Tx, Ty, Tz signals classifies each insert as "normal" / "skewed" / "wrong part". Anomaly recall: 0.97, latency: 8 ms, fast enough to stop motion before damage.
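As a simpler stand-in for the LSTM classifier, the stop-before-damage logic can be illustrated with a force-envelope check: the expected profile (mean ± n·std) is learned from good inserts, and any excursion triggers a stop. All numbers below are illustrative:

```python
import numpy as np

def check_insertion(fz_trace, mean_profile, std_profile, n_sigma=3.0):
    """Return True (anomaly) when the measured Fz trace leaves the envelope
    mean ± n_sigma * std learned from known-good insertions.

    A threshold baseline with the same stop-before-damage role as the LSTM,
    but without learned temporal features.
    """
    upper = mean_profile + n_sigma * std_profile
    lower = mean_profile - n_sigma * std_profile
    return bool(((fz_trace > upper) | (fz_trace < lower)).any())

# Envelope learned from normal inserts (illustrative numbers)
t = np.linspace(0, 1, 100)
mean_profile = 10.0 * t                    # force ramps up during insertion
std_profile = np.full(100, 0.5)

normal_trace = 10.0 * t + 0.3              # stays inside the 3-sigma envelope
skewed_trace = 10.0 * t + 5.0 * (t > 0.5)  # force spike: part is jammed

normal_flag = check_insertion(normal_trace, mean_profile, std_profile)
skew_flag = check_insertion(skewed_trace, mean_profile, std_profile)
```

The LSTM replaces the fixed envelope with a classifier that also distinguishes *why* the profile deviated (skewed vs. wrong part).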

Mobile Robotics and AMR

SLAM and navigation

AMR (Autonomous Mobile Robot): LiDAR SLAM (Cartographer, RTAB-Map) for mapping and localization. ML component: dynamic obstacle prediction (people, forklifts) via object detection (YOLOv8 on fisheye cameras) and velocity estimation.
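The velocity-estimation part can be sketched with a constant-velocity model over two consecutive detections; in practice a Kalman filter smooths these estimates, and the coordinates below are made up for illustration:

```python
import numpy as np

def predict_position(p_prev, p_curr, dt, horizon):
    """Constant-velocity prediction for a detected obstacle.

    p_prev / p_curr: (x, y) detection centroids from two consecutive frames,
    dt: time between frames in seconds, horizon: how far ahead to extrapolate.
    """
    v = (np.asarray(p_curr) - np.asarray(p_prev)) / dt   # estimated velocity
    return np.asarray(p_curr) + v * horizon              # extrapolated position

# Forklift detected at (0, 0), then (0.5, 0) one frame (0.1 s) later:
# estimated velocity is 5 m/s along x, so the 1 s prediction lands at (5.5, 0)
future = predict_position((0.0, 0.0), (0.5, 0.0), dt=0.1, horizon=1.0)
```

The planner treats the predicted position as a moving keep-out zone when scoring candidate paths.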

Fleet Management

A fleet of 30 AMRs: task-assignment optimization via multi-agent RL (MAPPO, Multi-Agent PPO) or MILP dispatching. Throughput of the RL-based system vs. a rule-based one: +14% on the same infrastructure.
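For a handful of robots, dispatching reduces to an assignment problem small enough to brute-force. This toy sketch (travel times are illustrative) shows the objective that a MILP solver or learned policy optimizes at fleet scale:

```python
from itertools import permutations

def assign_tasks(cost):
    """Optimal robot-to-task assignment by exhaustive search over permutations.

    cost[i][j] is e.g. the travel time for robot i to reach task j. Fine for a
    few robots; at 30+ AMRs a MILP solver or multi-agent policy takes over.
    """
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# Travel times (s) for 3 AMRs to 3 pickup stations
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = assign_tasks(cost)   # robot 0 -> task 1, 1 -> 0, 2 -> 2
```

Greedy dispatch (each robot grabs its cheapest task in turn) can miss this optimum, which is exactly the gap RL-based dispatching closes at scale.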

Stack and integrations

Simulation: Isaac Sim, MuJoCo, Gazebo
Perception: ROS 2, Open3D, PyTorch3D
ML framework: PyTorch, JAX
Motion planning: MoveIt 2, OMPL
Robot OS: ROS 2 (Humble/Iron)
Communication: EtherCAT, PROFINET, OPC-UA
Fleet orchestration: Fleet Management System, MQTT

Development timeline: 4–8 months for perception + grasp planning on a specific part/task. A full system with RL-trained manipulations and fleet management: 10–18 months.