tags: [vr-ar]
Enemy AI programming for 3D space navigation in games
A NavMeshAgent stuck in a corner. An enemy circling an obstacle it could bypass in a second. An agent teleporting through a wall on a scene change. AI navigation problems in 3D are rarely a case of "something just went wrong": they are the result of specific architectural decisions that either account for NavMesh limitations or don't.
In VR the stakes are higher: the player is physically present in the game space, moves on their own, and enemies must react to that movement immediately. An enemy that "thinks" for 200 ms before turning is noticeable in VR like nowhere else.
Typical NavMeshAgent failures and causes
Stuck at a NavMeshSurface seam. If a scene uses multiple NavMeshSurfaces (different areas, different floors), agents get stuck at the transitions. The reason: NavMesh seams are not stitched automatically. Either bake a single surface and bridge gaps with NavMeshLink, or explicitly add NavMeshLink components at the transition points.
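A minimal sketch of an explicit link at a seam, assuming the AI Navigation package (`Unity.AI.Navigation`); the `SeamBridge` component name and its fields are illustrative:

```csharp
using UnityEngine;
using Unity.AI.Navigation; // NavMeshLink lives in the AI Navigation package

// Hypothetical helper: place on an empty GameObject at the seam between two
// NavMeshSurfaces; it creates a bidirectional link agents can traverse.
public class SeamBridge : MonoBehaviour
{
    public Transform startPoint; // a point on the first surface
    public Transform endPoint;   // a point on the second surface
    public float width = 1f;

    void Awake()
    {
        var link = gameObject.AddComponent<NavMeshLink>();
        // NavMeshLink endpoints are local to the link's own transform.
        link.startPoint = transform.InverseTransformPoint(startPoint.position);
        link.endPoint   = transform.InverseTransformPoint(endPoint.position);
        link.width = width;
        link.bidirectional = true;
    }
}
```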
The agent doesn't update its path when the player moves. SetDestination is called once on player detection, so the agent walks to where the player was. The path needs periodic updates: every 0.2–0.5 seconds, call SetDestination with the target's current position. Too often, and you pay CPU overhead for constant recalculation; too rarely, and the enemy visibly lags. A good compromise: update only if the target has moved more than destinationChangeThreshold meters.
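A sketch of throttled repathing under those two conditions (enough time elapsed and the target actually moved); `repathInterval` and `destinationChangeThreshold` are illustrative names:

```csharp
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class ChaseTarget : MonoBehaviour
{
    public Transform target;
    public float repathInterval = 0.3f;             // 0.2–0.5 s is a sane range
    public float destinationChangeThreshold = 0.5f; // meters

    NavMeshAgent agent;
    Vector3 lastDestination;
    float nextRepathTime;

    void Awake() => agent = GetComponent<NavMeshAgent>();

    void Update()
    {
        if (Time.time < nextRepathTime) return;
        // Skip recalculation if the target barely moved since the last path.
        if ((target.position - lastDestination).sqrMagnitude <
            destinationChangeThreshold * destinationChangeThreshold) return;

        agent.SetDestination(target.position);
        lastDestination = target.position;
        nextRepathTime = Time.time + repathInterval;
    }
}
```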
Agents walk through each other. NavMeshAgent.radius is the avoidance radius, but it only affects interaction with NavMeshObstacles and other agents. An agent with lower priority (a higher avoidancePriority number) yields and can "sink" into objects. Set priorities explicitly: rank-and-file enemies at 50, bosses at 20, the player at 10 (a lower number means higher priority, so everyone else yields).
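One way to keep those numbers from drifting across prefabs is to centralize them; the enum values below just mirror the scheme suggested above:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Illustrative priority scheme: lower number = higher priority = others yield.
public enum AvoidanceRank { Player = 10, Boss = 20, Grunt = 50 }

[RequireComponent(typeof(NavMeshAgent))]
public class SetAvoidancePriority : MonoBehaviour
{
    public AvoidanceRank rank = AvoidanceRank.Grunt;

    void Awake()
    {
        GetComponent<NavMeshAgent>().avoidancePriority = (int)rank;
    }
}
```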
Steering Behaviour over NavMesh
NavMesh gives a global path from A to B, but moving along that path is a separate task. The standard NavMeshAgent.steeringTarget (the next point on the path) can produce angular movement: the agent makes a sharp 90° turn instead of a smooth arc.
The fix: disable NavMeshAgent.updateRotation and rotate the agent yourself via Quaternion.RotateTowards, with an angular velocity that matches the character. A slow zombie turns at 60°/sec; a fast fighter at 180°/sec. It immediately looks more convincing.
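A minimal sketch of manual rotation: the agent still computes its velocity, but the body turns at a capped angular speed:

```csharp
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class SmoothTurn : MonoBehaviour
{
    public float turnSpeed = 120f; // 60 for a slow zombie, 180 for a fast fighter

    NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.updateRotation = false; // we own rotation now
    }

    void Update()
    {
        Vector3 dir = agent.desiredVelocity;
        dir.y = 0f;
        if (dir.sqrMagnitude < 0.01f) return; // standing still, keep facing

        Quaternion look = Quaternion.LookRotation(dir);
        transform.rotation = Quaternion.RotateTowards(
            transform.rotation, look, turnSpeed * Time.deltaTime);
    }
}
```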
For more complex behavior, layer Steering Behaviours on top of NavMesh: Seek (toward a target), Flee (away from it), Separation (spread out from peers). Separation matters especially for an enemy group in VR: without it they all converge on one point, which looks unrealistic. Separation is implemented as an extra force added to the desired velocity: take all agents within separationRadius, sum the "away from each" vectors, normalize, and add the result as an offset to NavMeshAgent.velocity.
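A sketch of that separation force, assuming enemies live on their own physics layer; `separationRadius` and `separationStrength` are illustrative parameters, and here the offset is applied via NavMeshAgent.Move rather than writing velocity directly:

```csharp
using UnityEngine;
using UnityEngine.AI;

[RequireComponent(typeof(NavMeshAgent))]
public class Separation : MonoBehaviour
{
    public float separationRadius = 1.5f;
    public float separationStrength = 2f;
    public LayerMask enemyLayer;

    static readonly Collider[] hits = new Collider[16];
    NavMeshAgent agent;

    void Awake() => agent = GetComponent<NavMeshAgent>();

    void Update()
    {
        int count = Physics.OverlapSphereNonAlloc(
            transform.position, separationRadius, hits, enemyLayer);

        Vector3 push = Vector3.zero;
        for (int i = 0; i < count; i++)
        {
            if (hits[i].transform == transform) continue;
            Vector3 away = transform.position - hits[i].transform.position;
            away.y = 0f;
            // Closer neighbours push harder (inverse to distance).
            push += away.normalized / Mathf.Max(away.magnitude, 0.1f);
        }

        if (push != Vector3.zero)
            agent.Move(push.normalized * separationStrength * Time.deltaTime);
    }
}
```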
Behaviour Tree vs State Machine for VR enemies
A State Machine (via an Animator Controller with parameters, or in code) is simpler but scales poorly: with 5+ states the transitions turn into spaghetti.
A Behaviour Tree (BT) is a hierarchical task structure. For an enemy: Selector → [Attack if distance < 2 m → Chase if the player is visible → Patrol otherwise]. Each node is a Sequence (all children must succeed), a Selector (one success is enough), or a Leaf (a concrete action). Unity has no built-in BT, but there is the official Behavior package (Unity, 2024+) and open-source options such as NPBehave and Fluid Behaviour Tree.
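The node semantics above can be hand-rolled in a few dozen lines; this is a bare sketch with no package dependency (real BT libraries add a Running state, decorators, and blackboards):

```csharp
using System;

public enum BtStatus { Success, Failure }

public abstract class BtNode { public abstract BtStatus Tick(); }

// Leaf: wraps a concrete action or condition.
public class Leaf : BtNode
{
    readonly Func<BtStatus> action;
    public Leaf(Func<BtStatus> action) => this.action = action;
    public override BtStatus Tick() => action();
}

// Selector: first child that succeeds wins.
public class Selector : BtNode
{
    readonly BtNode[] children;
    public Selector(params BtNode[] children) => this.children = children;
    public override BtStatus Tick()
    {
        foreach (var c in children)
            if (c.Tick() == BtStatus.Success) return BtStatus.Success;
        return BtStatus.Failure;
    }
}

// Sequence: fails on the first child that fails.
public class Sequence : BtNode
{
    readonly BtNode[] children;
    public Sequence(params BtNode[] children) => this.children = children;
    public override BtStatus Tick()
    {
        foreach (var c in children)
            if (c.Tick() == BtStatus.Failure) return BtStatus.Failure;
        return BtStatus.Success;
    }
}

// Usage sketch (IsPlayerClose, Attack, etc. are hypothetical methods):
// var root = new Selector(
//     new Sequence(new Leaf(IsPlayerClose), new Leaf(Attack)),
//     new Sequence(new Leaf(CanSeePlayer), new Leaf(Chase)),
//     new Leaf(Patrol));
```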
Important for VR: the BT should update at different rates depending on distance to the player. An enemy 30 m away updates its BT once per second; an enemy 3 m away, every 100 ms. An LOD-driven AI update rate, via an AIDirector or a simple DistanceBasedUpdateRate component, cuts the CPU load of many active agents.
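A sketch of that DistanceBasedUpdateRate idea, interpolating the tick interval between the near and far distances named above (the `TickAI` hook is hypothetical):

```csharp
using UnityEngine;

public class DistanceBasedUpdateRate : MonoBehaviour
{
    public Transform player;
    public float nearDistance = 3f,  nearInterval = 0.1f; // 100 ms up close
    public float farDistance  = 30f, farInterval  = 1f;   // 1 s far away

    float nextTick;

    void Update()
    {
        if (Time.time < nextTick) return;

        float d = Vector3.Distance(transform.position, player.position);
        float t = Mathf.InverseLerp(nearDistance, farDistance, d);
        nextTick = Time.time + Mathf.Lerp(nearInterval, farInterval, t);

        TickAI(); // hypothetical hook: run one BT update here
    }

    protected virtual void TickAI() { }
}
```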
Spatial awareness: hearing and sight
A vision cone via Physics.OverlapSphere + an angle check + a Linecast for visibility. The standard recipe: collect all potential targets within sightRange, filter by Vector3.Angle < fieldOfView / 2, then Linecast to check for obstacles; the first unblocked candidate is the detected target.
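That recipe as a sketch; `sightRange`, `fieldOfView`, and the layer masks follow the article's naming, the rest is illustrative:

```csharp
using UnityEngine;

public class VisionCone : MonoBehaviour
{
    public float sightRange = 15f;
    public float fieldOfView = 110f; // full cone angle, degrees
    public LayerMask targetLayer;
    public LayerMask obstacleLayer;
    public Transform eyes; // raycast origin, e.g. the head bone

    public Transform FindVisibleTarget()
    {
        foreach (var hit in Physics.OverlapSphere(
                     transform.position, sightRange, targetLayer))
        {
            Vector3 toTarget = hit.transform.position - eyes.position;
            if (Vector3.Angle(transform.forward, toTarget) > fieldOfView / 2f)
                continue; // outside the cone

            // Linecast returns true when something blocks the segment.
            if (!Physics.Linecast(eyes.position, hit.transform.position,
                                  obstacleLayer))
                return hit.transform; // unobstructed: target detected
        }
        return null;
    }
}
```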
For VR games, add sound stimuli: when the player shoots or runs, create a SoundStimulus event with a position and intensity, and notify all agents within a radius of intensity * attenuationFactor. This can be implemented via Unity Events or a simple Physics.OverlapSphere from the sound's origin.
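A minimal broadcast-style sketch of that event, following the article's terminology; `SoundListenerAI`, `OnHeard`, and `AttenuationFactor` are illustrative names:

```csharp
using UnityEngine;

public static class SoundStimulus
{
    public const float AttenuationFactor = 10f; // meters per intensity unit

    // Notify every listener whose distance is within the stimulus radius.
    public static void Emit(Vector3 position, float intensity)
    {
        float radius = intensity * AttenuationFactor;
        foreach (var listener in Object.FindObjectsOfType<SoundListenerAI>())
        {
            if (Vector3.Distance(listener.transform.position, position) <= radius)
                listener.OnHeard(position, intensity);
        }
    }
}

public class SoundListenerAI : MonoBehaviour
{
    public Vector3 LastHeardPosition { get; private set; }

    public void OnHeard(Vector3 position, float intensity)
    {
        LastHeardPosition = position; // e.g. feed into the BT as "investigate"
    }
}
```

For many agents, a cached listener registry (or a Physics.OverlapSphere over an agent layer) would avoid the per-emit FindObjectsOfType scan.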
| AI complexity | Estimated timeline |
|---|---|
| Basic navigation (NavMeshAgent + chase/patrol) | 1–2 weeks |
| Behaviour Tree + enemy groups + Steering | 3–6 weeks |
| Full AI system with perception and LOD | 2–4 months |
Cost is calculated after analyzing the behavior requirements for the enemies and the number of simultaneously active agents.





