tags: [vr-ar]
Testing AR tracking stability under various lighting conditions
AR tracking breaks not because of bad code, but because of bad light. This is the kind of problem that is nearly impossible to reproduce at a developer's desk in a comfortable office. Sunlight through a window at a 15-degree angle, fluorescent lamps flickering at 100 Hz, a dark room with a single spotlight: each scenario behaves differently, and ARCore responds differently than ARKit does.
Why tracking loses anchors where you don't expect it
The visual-inertial odometry algorithms that ARCore and ARKit are built on use feature points (distinctive points in the frame) to calculate the camera's position in space. Below roughly 50 lux, the number of reliable feature points drops sharply and the system starts compensating for the lost data with the IMU. That is fine until the IMU accumulates drift: in practice, after 3–4 seconds in a poorly lit zone a virtual object has "swum" 5–8 centimeters. In a gaming context this is a disaster: the player looks at the table expecting the figure where they placed it, and it hovers in mid-air.
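That drift is exactly the metric worth measuring in tests. A minimal sketch (plain Swift, no ARKit dependency; the `AnchorSample` type, the `maxDrift` helper, and the synthetic data are this example's own, not an existing API) that computes the maximum displacement of an anchor from its initial position over a recording:

```swift
import Foundation

// A recorded anchor position sample: timestamp plus a 3D point in meters.
// simd types are avoided so the sketch also runs outside Apple frameworks.
struct AnchorSample {
    let time: TimeInterval
    let x: Double, y: Double, z: Double
}

// Maximum Euclidean distance from the first sample: the
// "maximum drift over 60 seconds" metric used in the test matrix.
func maxDrift(_ samples: [AnchorSample]) -> Double {
    guard let origin = samples.first else { return 0 }
    return samples.map { s in
        let dx = s.x - origin.x, dy = s.y - origin.y, dz = s.z - origin.z
        return (dx * dx + dy * dy + dz * dz).squareRoot()
    }.max() ?? 0
}

// Synthetic example: an anchor slides 6 cm along x over 4 seconds,
// roughly matching the 5–8 cm drift observed in poorly lit zones.
let samples = (0...40).map { i in
    AnchorSample(time: Double(i) * 0.1, x: Double(i) * 0.0015, y: 0, z: 0)
}
print(String(format: "%.3f m", maxDrift(samples)))  // prints "0.060 m"
```

In a real harness the samples would come from the anchor's transform on every frame update; the aggregation stays the same.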
Overexposure is a separate story. Direct sunlight or a powerful spotlight creates zones of saturated pixels where feature extraction does not work at all. ARKit handles this slightly better: ARCamera.TrackingState reports .limited with the reason .insufficientFeatures, which at least lets you warn the user. ARCore reports TrackingFailureReason.INSUFFICIENT_LIGHT and INSUFFICIENT_FEATURES in a similar way, but the thresholds differ.
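Those states are what makes warning the user possible at all. A sketch of the mapping (the real type is ARKit's ARCamera.TrackingState.Reason; the local enum below is a stand-in so the example is self-contained, and the message strings are illustrative):

```swift
import Foundation

// Stand-in mirroring ARKit's ARCamera.TrackingState.Reason cases;
// the real type lives in ARKit, this local copy keeps the sketch runnable.
enum TrackingLimitedReason {
    case insufficientFeatures   // too few feature points: dark or saturated frame
    case excessiveMotion
    case initializing
    case relocalizing
}

// Map a limited-tracking reason to a user-facing hint, the kind of
// warning these tracking states at least make possible.
func userHint(for reason: TrackingLimitedReason) -> String {
    switch reason {
    case .insufficientFeatures:
        return "Not enough detail in view. Add light or aim at a textured surface."
    case .excessiveMotion:
        return "Move the device more slowly."
    case .initializing, .relocalizing:
        return "Hold still for a moment."
    }
}

print(userHint(for: .insufficientFeatures))
```

On Android the equivalent switch would run over ARCore's TrackingFailureReason values.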
Fluorescent lamps are a pain class of their own. Driven by 50/60 Hz mains, they flicker at 100/120 Hz, which the camera sensor records as regular exposure changes. Visually it is almost unnoticeable, but the tracking algorithm sees feature points "breathing" between frames and interprets that as movement.
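During log analysis, flicker can be spotted directly from per-frame brightness. A heuristic sketch (the function name, thresholds, and synthetic data are this example's own; real detection would use frame-mean luminance sampled by the test overlay):

```swift
import Foundation

// Heuristic flicker check over a series of per-frame mean luminance values
// (0–255). Mains flicker at 100/120 Hz beats against the camera's frame
// rate and shows up as a regular oscillation of frame brightness.
// Thresholds here are illustrative, not tuned values.
func looksLikeFlicker(frameMeans: [Double],
                      minAmplitude: Double = 2.0,
                      minOscillationRatio: Double = 0.5) -> Bool {
    guard frameMeans.count > 2 else { return false }
    let mean = frameMeans.reduce(0, +) / Double(frameMeans.count)
    let variance = frameMeans.map { ($0 - mean) * ($0 - mean) }
        .reduce(0, +) / Double(frameMeans.count)
    // Count direction reversals between consecutive frames: a regular
    // oscillation reverses direction on most steps, noise does not.
    var reversals = 0
    for i in 2..<frameMeans.count {
        let d1 = frameMeans[i - 1] - frameMeans[i - 2]
        let d2 = frameMeans[i] - frameMeans[i - 1]
        if d1 * d2 < 0 { reversals += 1 }
    }
    let ratio = Double(reversals) / Double(frameMeans.count - 2)
    return variance.squareRoot() >= minAmplitude && ratio >= minOscillationRatio
}

// Synthetic series: brightness bouncing between 118 and 124 every frame
// versus a perfectly steady scene.
let flickering = (0..<60).map { i in i % 2 == 0 ? 118.0 : 124.0 }
let steady = [Double](repeating: 121.0, count: 60)
print(looksLikeFlicker(frameMeans: flickering))  // prints "true"
print(looksLikeFlicker(frameMeans: steady))      // prints "false"
```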
How the testing methodology is built
A standard test-scenario matrix for AR tracking spans several axes:

- By illumination level: dark room (~10–30 lux), artificial office light (~300–500 lux), overcast street (~1,000–5,000 lux), direct sunlight (>50,000 lux).
- By source type: point (LED bulb), linear (fluorescent/LED panel), diffuse (overcast sky), mixed (window plus ceiling light).
- By dynamics: static lighting, passing shadows, day/night changes via shutters, a flickering source.
For each scenario, record: time to tracking loss, maximum ARAnchor drift over 60 seconds, number of .limited events in ARCamera.TrackingState, and behavior on return to normal conditions (recovery time).
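The per-scenario metrics above can be captured in a small record with a pass/fail rule attached. A sketch (field names and the `passes` criteria are this example's own, not an existing API; the tolerances come from the client requirements discussed later):

```swift
import Foundation

// Per-scenario result record for the metrics listed above.
struct TrackingScenarioResult {
    let scenario: String              // e.g. "dark room, ~20 lux, point source"
    let timeToLoss: TimeInterval?     // nil if tracking never dropped
    let maxAnchorDriftMeters: Double  // max ARAnchor drift over the 60 s window
    let limitedEventCount: Int        // .limited transitions during the run
    let recoveryTime: TimeInterval?   // time to regain normal tracking, if lost

    // A run passes if drift stays inside the agreed tolerance and
    // tracking either never dropped or recovered quickly enough.
    func passes(driftToleranceMeters: Double,
                maxRecovery: TimeInterval) -> Bool {
        guard maxAnchorDriftMeters <= driftToleranceMeters else { return false }
        if timeToLoss != nil, let rec = recoveryTime, rec > maxRecovery {
            return false
        }
        return true
    }
}

let run = TrackingScenarioResult(scenario: "office light, ~400 lux, static",
                                 timeToLoss: nil,
                                 maxAnchorDriftMeters: 0.012,
                                 limitedEventCount: 1,
                                 recoveryTime: nil)
print(run.passes(driftToleranceMeters: 0.02, maxRecovery: 3.0))  // prints "true"
```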
Instrumentation: a custom overlay in Unity AR Foundation that displays ARSession.state, ARCamera.currentTrackingMode, and numberOfTrackedImages in real time on the test device, plus screen recording via ReplayKit (iOS) or MediaProjection (Android) for later analysis.
What to do with the results
If tracking is unstable in a specific illumination range, that is not just a bug for the report; it is a UX design task. Several standard solutions:
Enabling ARWorldTrackingConfiguration.environmentTexturing helps ARKit understand the environment better, but it costs memory; on iPhone 12 and later it is an acceptable compromise.
For poor-light scenarios, force a switch to plane detection (ARPlaneDetectionMode) with anchors tied to planes rather than feature points. This is much more stable, but it requires flat surfaces in the frame.
On Android, setting Config.FocusMode.FIXED instead of AUTO on the ArSession reduces blurry frames during fast movement in low light.
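On the ARKit side, the two iOS mitigations above combine into one session configuration. A sketch only (requires ARKit on a device, so it will not compile outside an iOS project; the function name is this example's own):

```swift
import ARKit

// Session setup combining the mitigations above: environment texturing
// (iPhone 12-class devices and up) plus plane-based anchoring as a
// fallback for poorly lit scenes.
func makeLowLightFriendlyConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    config.environmentTexturing = .automatic          // better scene understanding, costs memory
    config.planeDetection = [.horizontal, .vertical]  // anchor to planes, not raw feature points
    return config
}

// Usage: session.run(makeLowLightFriendlyConfiguration(),
//                    options: [.resetTracking, .removeExistingAnchors])
```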
| Lighting condition | Expected tracking behavior | Typical time to loss |
|---|---|---|
| < 50 lux | Frequent .limited (insufficientFeatures) | 5–10 sec |
| 50–300 lux | Unstable; depends on surface texture | On camera movement |
| 300–5,000 lux | Working range | Loss on overexposure |
| > 20,000 lux (direct sun) | Saturated frame, total loss | Immediate |
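The table above can double as an oracle when triaging logs. A sketch mapping a lux reading to the expected behavior (the enum, function, and boundary choices are this example's own; the article's table leaves the 5,000–20,000 lux band unspecified, and this sketch treats it as still workable):

```swift
import Foundation

// Expected tracking behavior by illumination, mirroring the table above.
// Boundaries are the article's rough thresholds, not API constants.
enum ExpectedTracking: String {
    case frequentLimited = "frequent .limited (insufficientFeatures)"
    case unstable = "unstable, texture-dependent"
    case working = "working range"
    case saturated = "saturated frame, total loss"
}

func expectedTracking(atLux lux: Double) -> ExpectedTracking {
    switch lux {
    case ..<50:      return .frequentLimited
    case ..<300:     return .unstable
    case ..<20_000:  return .working   // the table's 5,000–20,000 band assumed workable
    default:         return .saturated
    }
}

print(expectedTracking(atLux: 400).rawValue)     // prints "working range"
print(expectedTracking(atLux: 60_000).rawValue)  // prints "saturated frame, total loss"
```

A run whose observed behavior is worse than the expected bucket for its measured lux is what goes into the report as a deviation.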
Work process
Collect requirements: target platforms (iOS/Android, OS versions, specific models), typical usage conditions (indoor/outdoor), and acceptable anchor drift in millimeters.
Prepare the test environment: dimmers, blinds, and a set of targets with different surface textures. If image tracking is being tested, add a separate matrix by marker contrast.
Run the scenario matrix, record the metrics, and prepare a report with specific thresholds and recommendations. If needed, fix the AR session configuration or add UX handling for critical tracking states.
Timeline: from 2–3 days for one platform with basic scenarios to 2–3 weeks for full coverage of both platforms with iterative fixes.
Cost is calculated individually after analyzing the requirements and the composition of the test matrix.





