AI System for Service Robots (HoReCa, Retail)
Service robots for restaurants, hotels, and shops operate in constant contact with people. This imposes strict requirements on social behavior, movement predictability, and staff interaction. Technically, the stack is SLAM + social navigation + task planning, united by a single control system.
Task Typology by Vertical
Restaurants and cafes:
- Dish delivery from kitchen to tables (Butler, BellaBot-style)
- Collecting dirty dishes from tables
- Guest greeting and seating (hostess robot)
Hotels:
- Delivery of amenities (towels, toothbrushes) to rooms
- Room service order delivery
- Front desk assistant: answering questions, card key distribution
Retail:
- Shelf inventory (Simbe Tally-style)
- Store floor cleaning (Avidbots ARIA)
- Customer assistant: store navigation, product search
Each scenario requires a different balance of autonomy and predictability.
Social Navigation
The key problem is not technological but legal and psychological: people must trust the robot. For that, its movements must be understandable and predictable.
Movement models:
- Social Force Model (Helbing, 1995) — classic, fast baseline
- ORCA with social weights — real-time, scales well
- LSTM-based trajectory prediction (Social LSTM, CIDNN) — best result, requires GPU
Practical approach: ORCA for reactive avoidance + an LSTM predictor for proactive maneuvering (the robot begins its maneuver 3-5 seconds in advance, rather than 0.5 seconds as with a purely reactive scheme).
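The proactive side of this hybrid can be sketched without the learned model: substitute a constant-velocity extrapolation for the LSTM predictor and check whether a person's predicted position a few seconds out encroaches on the robot's planned path. All names and the 0.6 m clearance threshold here are illustrative, not values prescribed by a specific library.

```python
import math

def predict_position(pos, vel, horizon_s):
    """Constant-velocity stand-in for a learned (e.g. Social LSTM) predictor."""
    return (pos[0] + vel[0] * horizon_s, pos[1] + vel[1] * horizon_s)

def needs_proactive_maneuver(robot_pos, robot_goal, person_pos, person_vel,
                             horizon_s=4.0, clearance_m=0.6):
    """True if the person's predicted position at the horizon comes closer than
    clearance_m to the robot's straight-line path to the goal."""
    px, py = predict_position(person_pos, person_vel, horizon_s)
    ax, ay = robot_pos
    bx, by = robot_goal
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    # Project the predicted point onto the path segment, clamped to its endpoints
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy) < clearance_m
```

When this check fires, the planner has 3-5 seconds to shift the path sideways; the reactive ORCA layer remains the safety net if the prediction is wrong.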
Social navigation parameters:
- Minimum distance to person: 0.6 m (intimate zone boundary)
- Maximum speed in crowded places: 0.5-0.8 m/s
- Stop when < 0.4 m to any object
- Priority rules: children > elderly > adults
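The distance and speed limits above reduce to a small clamping function in the velocity controller. A minimal sketch, using the thresholds from the list (function and parameter names are assumptions):

```python
def command_speed(nominal_speed, min_obstacle_dist, people_nearby,
                  stop_dist=0.4, crowd_speed=0.5, max_speed=0.8):
    """Clamp the commanded speed using the social-navigation parameters:
    hard stop under stop_dist, crowd_speed cap when people are nearby."""
    if min_obstacle_dist < stop_dist:
        return 0.0                      # hard stop when < 0.4 m to any object
    limit = crowd_speed if people_nearby else max_speed
    return min(nominal_speed, limit)
```

The priority rules (children > elderly > adults) belong one level up, in how the perception stack weights predicted trajectories, not in this clamp.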
Task Management System
Robots in HoReCa receive tasks from operations systems:
- Restaurant: POS system (iiko, r_keeper, Square) → REST API → Task Queue → Robot
- Hotel: PMS (Opera, Protel) → Middleware → Robot Fleet Controller
- Retail: WMS / ERP → Event Stream → Robot Scheduler
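At the robot end, all three pipelines converge on the same structure: a webhook handler normalizes the operations-system event into a task and pushes it into a priority queue. A minimal sketch; the event payload shape and priority mapping are hypothetical, not a real POS/PMS schema:

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                        # lower = more urgent
    seq: int                             # tie-breaker: FIFO within a priority
    task_type: str = field(compare=False)
    target: str = field(compare=False)   # table, room, or aisle identifier

class TaskQueue:
    """Priority queue fed by POS/PMS/WMS webhooks."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def push_pos_event(self, event: dict):
        # e.g. {"type": "order_ready", "table": "12"} from a POS webhook
        priority = 0 if event["type"] == "order_ready" else 1
        heapq.heappush(self._heap, Task(priority, next(self._seq),
                                        event["type"], event.get("table", "")))

    def pop(self) -> Task:
        return heapq.heappop(self._heap)
```

The `seq` counter keeps ordering stable among equal-priority tasks, which matters when several orders for the same kitchen window land within a second.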
Task planning uses multi-criteria optimization: distance + task priority + battery level + current zone congestion. Algorithm: modified nearest neighbor with a look-ahead of 3-5 tasks.
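The look-ahead variant can be sketched as scoring short orderings of pending tasks and committing only to the first task of the cheapest ordering. The cost weights below are illustrative placeholders, not tuned values from the text:

```python
from itertools import permutations

def task_cost(robot_pos, task, battery_frac, congestion):
    """Weighted multi-criteria cost: distance + priority + battery + congestion.
    Weights (1.0 / 5.0 / 2.0 / 3.0) are illustrative and need per-site tuning."""
    dist = abs(robot_pos[0] - task["pos"][0]) + abs(robot_pos[1] - task["pos"][1])
    return (1.0 * dist + 5.0 * task["priority"]
            + 2.0 * (1.0 - battery_frac)
            + 3.0 * congestion.get(task["zone"], 0.0))

def pick_next_task(robot_pos, tasks, battery_frac, congestion, lookahead=3):
    """Modified nearest neighbor: evaluate orderings of the next few tasks,
    commit only to the first task of the cheapest ordering, then replan."""
    if not tasks:
        return None
    best_first, best_cost = None, float("inf")
    for order in permutations(tasks, min(lookahead, len(tasks))):
        pos, total = robot_pos, 0.0
        for t in order:
            total += task_cost(pos, t, battery_frac, congestion)
            pos = t["pos"]
        if total < best_cost:
            best_cost, best_first = total, order[0]
    return best_first
```

With a look-ahead of 3-5 the permutation count stays small, so the scheduler can replan after every completed task.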
Human-Machine Interaction (HRI)
Screen, lighting, and sound are the key communication channels:
| Situation | Indication |
|---|---|
| Movement to goal | Green lighting, eye gaze direction |
| Request to move aside | Sound signal, "hand gesture" animation |
| Waiting for elevator | Blinking blue lighting |
| Low battery | Voice message, yellow lighting |
| Delivery completed | Animation, sound, compartment opening |
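In practice the table above becomes a lookup that the behavior layer drives whenever the robot changes state. A sketch with hypothetical state and channel names:

```python
# Indication table from above as a state -> channels lookup (names illustrative).
INDICATIONS = {
    "moving_to_goal":   {"light": "green",      "anim": "eye_gaze",     "sound": None},
    "request_pass":     {"light": None,         "anim": "hand_gesture", "sound": "chime"},
    "waiting_elevator": {"light": "blue_blink", "anim": None,           "sound": None},
    "low_battery":      {"light": "yellow",     "anim": None,           "sound": "voice_low_battery"},
    "delivery_done":    {"light": None,         "anim": "celebrate",    "sound": "chime",
                         "action": "open_compartment"},
}

def indicate(state: str) -> dict:
    """Return the channel settings for a state; neutral defaults otherwise."""
    return INDICATIONS.get(state, {"light": "white", "anim": None, "sound": None})
```

Keeping the mapping in data rather than scattered `if` branches makes it easy for operations staff to retune signals per venue.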
Voice interface: Whisper for speech-to-text, a local LLM (quantized Llama 3 8B) for command interpretation, and TTS for responses. All NLU runs on-device for privacy.
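The STT → NLU → TTS chain can be wired as a thin pipeline that takes the three engines as injected callables, so the Whisper, LLM, and TTS backends stay swappable. This is a structural sketch only; the class and the JSON-intent convention are assumptions, not an existing API:

```python
from typing import Callable

class VoicePipeline:
    """On-device voice stack: speech-to-text -> intent parsing -> text-to-speech.
    The three callables stand in for Whisper, a quantized local LLM, and a TTS
    engine; the intent dict with a "reply" field is an assumed convention."""
    def __init__(self, stt: Callable[[bytes], str],
                 nlu: Callable[[str], dict],
                 tts: Callable[[str], bytes]):
        self.stt, self.nlu, self.tts = stt, nlu, tts

    def handle_utterance(self, audio: bytes) -> tuple[dict, bytes]:
        text = self.stt(audio)      # e.g. a local Whisper model
        intent = self.nlu(text)     # LLM prompted to emit a structured intent
        reply = intent.get("reply", "OK")
        return intent, self.tts(reply)
```

Constraining the LLM to a fixed intent schema (rather than free text) is what makes the downstream task dispatch reliable.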
Elevator and Door Integration
Vertical navigation is a separate challenge. Integration with the elevator management system:
- KONE API / Otis Compass: standard IoT interfaces for lift calling
- Fire doors: Wiegand/OSDP protocol for temporary opening
- Automatic doors: additional IR sensor or BLE trigger
For hotels: integration with room management system for access authorization via BLE or RFID.
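From the robot's side, an elevator ride is a call-then-poll loop regardless of vendor. The sketch below wraps whatever vendor API is in use behind two injected callables; the endpoint semantics and the "doors_open" status string are hypothetical, not the actual KONE or Otis interface:

```python
import time

class ElevatorClient:
    """Generic wrapper over a building elevator API. call_fn issues the
    landing call and returns a call id; status_fn reports its state.
    Both are vendor-specific adapters; names here are illustrative."""
    def __init__(self, call_fn, status_fn, poll_s=1.0, timeout_s=90.0):
        self.call_fn, self.status_fn = call_fn, status_fn
        self.poll_s, self.timeout_s = poll_s, timeout_s

    def ride(self, from_floor: int, to_floor: int) -> bool:
        call_id = self.call_fn(from_floor, to_floor)
        deadline = time.monotonic() + self.timeout_s
        while time.monotonic() < deadline:
            if self.status_fn(call_id) == "doors_open":
                return True             # safe to drive into the cabin
            time.sleep(self.poll_s)
        return False                    # escalate: retry or alert an operator
```

The timeout path matters operationally: a robot blocking an elevator lobby while waiting indefinitely is itself a social-navigation failure.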
Monitoring and Analytics
Operations dashboard:
- Heatmaps of highest activity zones
- Heatmap of wait time by tables/rooms
- KPI: tasks per hour, % rejected tasks, average completion time
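The three KPIs reduce to a small aggregation over the task log. A minimal sketch; the log record shape (`status`, `duration_s`) is an assumption:

```python
from statistics import mean

def fleet_kpis(task_log: list, hours: float) -> dict:
    """Dashboard KPIs from a task log: tasks/hour, % rejected,
    average completion time. Record fields are illustrative."""
    done = [t for t in task_log if t["status"] == "done"]
    rejected = [t for t in task_log if t["status"] == "rejected"]
    return {
        "tasks_per_hour": len(done) / hours if hours else 0.0,
        "rejected_pct": 100.0 * len(rejected) / len(task_log) if task_log else 0.0,
        "avg_completion_s": mean(t["duration_s"] for t in done) if done else 0.0,
    }
```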
Learning from production data: all incidents (manual interventions, collisions, stuck events) are logged and used to fine-tune the navigation policy every 2-4 weeks.
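For that fine-tuning loop to work, incidents need a consistent on-disk format. One common choice is append-only JSON Lines, sketched below; the field names are an assumption:

```python
import json
import time

def log_incident(kind: str, pose, context: dict, path="incidents.jsonl"):
    """Append one navigation incident (manual_intervention, collision, stuck)
    as a JSON Lines record for the periodic policy fine-tuning run."""
    record = {"ts": time.time(), "kind": kind,
              "pose": list(pose), "context": context}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

JSON Lines keeps writes atomic per record and lets the training pipeline stream the file without loading it whole.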
Development timeline: an MVP for one scenario (e.g., dish delivery in a restaurant) with 1-2 robots takes 3-4 months. Scaling to a fleet, several scenarios, and POS/PMS integration takes 6-9 months.







