Designing 3D Diegetic UI for AR/VR Games
Screen-space UI in VR is painful. A Canvas in Overlay or Screen Space – Camera mode renders on top of everything, doesn't participate properly in stereo projection, and creates depth conflicts for the user. A classic HUD with an HP bar in a screen corner reads in VR like a sticker glued to the headset lens and destroys immersion within seconds.
Diegetic UI solves this radically: the interface becomes part of the world. Health is shown on a display built into the weapon, the map lives on a wrist device, dialogue options appear as holograms in front of an NPC. All of these are objects in world space with position, depth, and physics.
Technical problems when moving to a World Space Canvas
In Unity the standard path is to switch the Canvas to World Space mode and place it in the scene. It sounds simple; in practice there are several pitfalls.
Text readability under scaling. A World Space Canvas with TextMeshPro needs calibration: the wrong Font Size to Canvas Scale ratio makes text either blurry (rasterized at too low an effective resolution) or demands a huge polygon budget for SDF rendering across the area. In VR, text also reads differently depending on IPD, so it must be tested in a real headset, not in Play Mode.
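One common calibration convention is to scale the canvas so that 1 canvas unit equals 1 mm, which makes font sizes map directly to physical text height. A minimal sketch of that idea (the component name and fields here are illustrative, not a Unity API):

```csharp
using TMPro;
using UnityEngine;

// Sketch: calibrate a World Space Canvas so 1 canvas unit = 1 mm.
// "WorldCanvasCalibrator" is an illustrative name, not a Unity type.
public class WorldCanvasCalibrator : MonoBehaviour
{
    [SerializeField] Canvas canvas;
    [SerializeField] TMP_Text label;

    // Desired physical text height in meters (0.005 = 5 mm).
    [SerializeField] float textHeightMeters = 0.005f;

    void Start()
    {
        // At 0.001 scale, a 300 x 200 canvas becomes a 0.3 x 0.2 m panel.
        canvas.transform.localScale = Vector3.one * 0.001f;

        // With 1 unit = 1 mm, the font size in canvas units
        // equals the text height in millimeters.
        label.fontSize = textHeightMeters * 1000f;
        label.enableAutoSizing = false; // avoid per-frame re-layout in VR
    }
}
```

With this convention, panel dimensions designed in millimeters transfer to the canvas without mental unit conversion.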
Occlusion and depth fighting. A UI panel positioned in space conflicts with scene geometry. Either put it on a separate layer with a custom depth write, or use a transparent shader with a correctly set Render Queue. A common error: the UI disappears behind walls or flickers where it meets a surface.
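The Render Queue route can be done from code by bumping the UI materials into a late queue so they draw after opaque geometry. A hedged sketch (component name is illustrative; whether UI should ever draw over walls is a per-game design decision):

```csharp
using UnityEngine;

// Sketch: push a world-space UI panel's materials into a late render
// queue so they draw after opaque geometry and don't z-fight with
// surfaces the panel sits flush against.
public class UIRenderQueueFix : MonoBehaviour
{
    // Transparent geometry renders at queue 3000; UI that must win
    // against nearby surfaces is often placed slightly later.
    [SerializeField] int renderQueue = 3100;

    void Start()
    {
        foreach (var graphic in GetComponentsInChildren<UnityEngine.UI.Graphic>())
        {
            // Copy the material so the change doesn't leak into
            // the shared asset used by other canvases.
            var mat = new Material(graphic.materialForRendering);
            mat.renderQueue = renderQueue;
            graphic.material = mat;
        }
    }
}
```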
Interactivity through the XR Interaction Toolkit. The standard Event System doesn't work with VR controllers out of the box. You need TrackedDeviceGraphicRaycaster instead of GraphicRaycaster, an XRUIInputModule configured on the EventSystem, and hover/select logic for each interactive element. In XR Interaction Toolkit 2.x this is handled through the IXRInteractable interface and XRSimpleInteractable components on UI elements, but it requires explicit interaction layer setup.
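The raycaster swap described above is normally done in the Inspector, but a script form makes the required pieces explicit. A minimal sketch, assuming the XR Interaction Toolkit 2.x package is installed:

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit.UI;

// Sketch: make a world-space canvas reachable by XR ray interactors.
// Attach to the Canvas GameObject; component name is illustrative.
public class XRCanvasSetup : MonoBehaviour
{
    void Awake()
    {
        var canvas = GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        // Replace the default GraphicRaycaster with the XR-aware one
        // so TrackedDevice (controller ray) events can hit this canvas.
        var standard = GetComponent<UnityEngine.UI.GraphicRaycaster>();
        if (standard != null) Destroy(standard);
        gameObject.AddComponent<TrackedDeviceGraphicRaycaster>();

        // The scene additionally needs one EventSystem carrying an
        // XRUIInputModule instead of StandaloneInputModule.
    }
}
```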
How 3D interface design is done
First comes art direction. A diegetic UI must fit organically into the game world: a sci-fi shooter, a fantasy RPG, and a medical VR simulator require fundamentally different visual logic.
Design starts with information architecture: what the player always sees, what is shown on request, and what appears only in a specific context. This determines where the UI lives in space: always-visible elements are attached to the hand or weapon, contextual ones appear in the world when the player interacts with an object.
Then a greybox model is built in a 3D editor (Blender or Maya) with exact sizes in meters. In VR, measurement units matter: a 0.3 × 0.2 m panel at 0.5 m from the player is one perception; the same panel at 1.5 m is a completely different one.
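The perceptual difference is easy to quantify as angular size, which is worth checking before committing to panel dimensions. A small Unity-free helper (the class name is my own, not part of any API):

```csharp
using System;

// Sketch: angular size of a world-space panel, used to sanity-check
// UI dimensions before building assets. Pure math, no Unity types.
public static class AngularSize
{
    // Angle in degrees subtended by an object of size `meters`
    // viewed from `distance` meters away.
    public static double Degrees(double meters, double distance)
    {
        return 2.0 * Math.Atan(meters / (2.0 * distance)) * 180.0 / Math.PI;
    }
}
```

A 0.3 m wide panel at 0.5 m subtends roughly 33°; moved back to 1.5 m it shrinks to about 11°, which is why the same asset reads so differently at the two distances.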
Final assets are exported to FBX or glTF for engine import. Interactive panels get separate components for hover states, press animation, and feedback visualization.
In AR (via AR Foundation + ARKit/ARCore) the logic is slightly different: the UI must anchor correctly to AR planes or objects via ARAnchor, account for occlusion by the real world (if depth occlusion is enabled), and behave sanely under unstable tracking.
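Anchoring a panel to a detected plane typically goes through ARRaycastManager and ARAnchorManager. A minimal sketch, assuming those managers exist in the scene (`panelPrefab` and the component name are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: place a world-space UI panel on a detected AR plane and
// attach it to an anchor so it stays put as tracking refines.
public class ARPanelPlacer : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    [SerializeField] ARAnchorManager anchorManager;
    [SerializeField] GameObject panelPrefab; // illustrative prefab reference

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    public void PlaceAt(Vector2 screenPoint)
    {
        if (!raycastManager.Raycast(screenPoint, hits, TrackableType.PlaneWithinPolygon))
            return;

        var hit = hits[0];
        var plane = hit.trackable as ARPlane;
        if (plane == null) return;

        // Attaching the anchor to the plane (rather than free space)
        // keeps the panel glued to the surface during tracking updates.
        var anchor = anchorManager.AttachAnchor(plane, hit.pose);
        if (anchor != null)
            Instantiate(panelPrefab, anchor.transform);
    }
}
```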
Typical design errors in diegetic UI
- Font too small. In VR the minimum comfortable size is around 0.5° of angular height, which at 0.5 m distance is about 4–5 mm of symbol height in world units.
- Elements too close to the camera (< 0.3 m). This causes discomfort from the vergence-accommodation conflict.
- UI without physical feedback. In VR a button press with no tactile or visual response feels broken, even when it technically works.
- No state machine for elements. A button needs explicit states: Normal, Highlighted, Pressed, Disabled, with animated transitions between them.
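The last point can be sketched as a small component with explicit states and a smooth color transition, driven from an interactor's hover/select events. Everything here (names, colors, the lerp-based animation) is illustrative, not a Unity or XRI API:

```csharp
using UnityEngine;

// Sketch: explicit button states with animated transitions.
// Wire the public methods to XRSimpleInteractable's hover/select events.
public class DiegeticButton : MonoBehaviour
{
    public enum State { Normal, Highlighted, Pressed, Disabled }

    [SerializeField] Renderer visual;
    [SerializeField] Color normalColor = Color.white;
    [SerializeField] Color highlightColor = Color.cyan;
    [SerializeField] Color pressedColor = Color.blue;
    [SerializeField] Color disabledColor = Color.gray;
    [SerializeField] float transitionSpeed = 10f;

    State state = State.Normal;
    Color target = Color.white;

    public void OnHoverEnter() { if (state != State.Disabled) Set(State.Highlighted); }
    public void OnHoverExit()  { if (state != State.Disabled) Set(State.Normal); }
    public void OnSelect()     { if (state != State.Disabled) Set(State.Pressed); }
    public void OnDeselect()   { if (state != State.Disabled) Set(State.Highlighted); }

    void Set(State next)
    {
        state = next;
        target = next switch
        {
            State.Highlighted => highlightColor,
            State.Pressed     => pressedColor,
            State.Disabled    => disabledColor,
            _                 => normalColor,
        };
    }

    void Update()
    {
        // Animated transition instead of an instant swap: even this
        // minimal visual response makes the press feel less "broken".
        visual.material.color = Color.Lerp(
            visual.material.color, target, Time.deltaTime * transitionSpeed);
    }
}
```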
| Task | Timeline |
|---|---|
| Design one UI block (wrist interface / panel) | 3–6 work days |
| Complete Diegetic UI system for one scene | 2–4 weeks |
| AR Foundation anchor integration and occlusion | 1–2 weeks |
Cost is estimated individually after an analysis of requirements and the target platform.