Virtual Object Occlusion by Real-World Objects in AR


Implementing occlusion of virtual objects by real objects in AR

Without occlusion, AR looks like a sticker pasted over reality. A virtual character stands behind a real chair, but the chair doesn't cover the character, and the illusion crumbles. The user instantly reads this as an image overlay, not as an object present in the space.

AR Foundation 5.x provides AROcclusionManager – a component that uses a depth map from the device (LiDAR on iPhone Pro / iPad Pro, ML depth estimation on devices without LiDAR) to mask virtual objects behind real ones. But "just add the component" doesn't work.
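As a starting point, a minimal setup sketch of the component (the class name `OcclusionSetup` is ours; the AR Foundation API calls are real, but exact behavior depends on device support):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Attach next to the ARCameraManager on the AR camera.
public class OcclusionSetup : MonoBehaviour
{
    [SerializeField] AROcclusionManager occlusionManager;

    void Start()
    {
        // Request the best environment depth the device can provide:
        // LiDAR depth on Pro iPhones/iPads, ML-estimated depth elsewhere.
        occlusionManager.requestedEnvironmentDepthMode = EnvironmentDepthMode.Best;

        // Occlude using the environment depth map rather than the human stencil.
        occlusionManager.requestedOcclusionPreferenceMode =
            OcclusionPreferenceMode.PreferEnvironmentOcclusion;

        // Temporal smoothing of the depth map (AR Foundation 5.x) reduces edge noise.
        occlusionManager.environmentDepthTemporalSmoothingRequested = true;
    }

    void Update()
    {
        // The per-frame depth map that a custom occlusion pass can consume.
        Texture2D depth = occlusionManager.environmentDepthTexture;
        if (depth == null) return; // depth not yet available or unsupported device
    }
}
```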

Where occlusion breaks

On devices with LiDAR, the depth map (environmentDepthTexture) updates in real time at about 30 fps – an honest depth buffer of the real world. On Android, and on iPhones without LiDAR, AR Foundation falls back to ML-estimated depth (AROcclusionManager.requestedEnvironmentDepthMode = EnvironmentDepthMode.Best), which produces significantly more edge artifacts.

The main problem is the gap between real-world depth and the Unity Z-buffer. Virtual objects render in the standard pipeline with normal depth testing, while the "real world" arrives as a 2D texture. Occlusion must be implemented explicitly: render the real surfaces into the depth buffer via a depth-writing shader before the virtual objects are drawn.

Concretely, in URP: create a ScriptableRendererFeature with two render passes. The first pass takes environmentDepthTexture from AROcclusionManager, converts it to depth-buffer values (accounting for the difference in near/far clip planes between the AR camera and the Unity camera) and writes the result into the depth buffer backing _CameraDepthTexture. The second pass is the standard scene render; virtual objects then automatically get a correct depth test against the real world.
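A skeleton of that renderer feature might look like the following. This is a sketch, not a drop-in implementation: the fullscreen material (`envDepthMaterial`) is assumed to use a hypothetical shader that samples `_EnvDepth` (linear meters), converts it to device depth and outputs it via SV_Depth with ColorMask 0, and the wiring of the scene's AROcclusionManager into the feature asset is simplified:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;
using UnityEngine.XR.ARFoundation;

public class AREnvironmentDepthFeature : ScriptableRendererFeature
{
    public Material envDepthMaterial;        // hypothetical depth-writing fullscreen shader
    public AROcclusionManager occlusionManager;

    class EnvDepthPass : ScriptableRenderPass
    {
        readonly AREnvironmentDepthFeature feature;

        public EnvDepthPass(AREnvironmentDepthFeature f)
        {
            feature = f;
            // Run before opaques so virtual geometry depth-tests against real depth.
            renderPassEvent = RenderPassEvent.BeforeRenderingOpaques;
        }

        public override void Execute(ScriptableRenderContext context,
                                     ref RenderingData renderingData)
        {
            var depth = feature.occlusionManager != null
                ? feature.occlusionManager.environmentDepthTexture : null;
            if (depth == null || feature.envDepthMaterial == null) return;

            var cmd = CommandBufferPool.Get("AR env depth prepass");
            cmd.SetGlobalTexture("_EnvDepth", depth);
            // Fullscreen triangle; the shader writes only depth, so the subsequent
            // opaque pass depth-tests virtual objects against the real world.
            cmd.DrawProcedural(Matrix4x4.identity, feature.envDepthMaterial, 0,
                               MeshTopology.Triangles, 3);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    EnvDepthPass pass;
    public override void Create() => pass = new EnvDepthPass(this);
    public override void AddRenderPasses(ScriptableRenderer renderer,
                                         ref RenderingData renderingData)
        => renderer.EnqueuePass(pass);
}
```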

It sounds simple, but there is a nuance: depth conversion. AR Foundation returns linear depth in meters, while Unity's depth buffer is nonlinear (a hyperbolic 1/z mapping, reversed-Z on most modern platforms). A wrong conversion produces "flickering" at occlusion borders, or no occlusion effect at all.
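The core of the conversion fits in one function. A sketch for a standard perspective projection (`n`, `f` are the camera's near/far clip planes; the formula maps the near plane to 0 and the far plane to 1, and reversed-Z simply flips it):

```csharp
// Convert AR linear depth d (eye-space meters) to a device depth-buffer value.
static float LinearToDeviceDepth(float d, float n, float f, bool reversedZ)
{
    // Hyperbolic mapping of a perspective projection: 0 at near, 1 at far.
    float z = f * (d - n) / (d * (f - n));
    return reversedZ ? 1f - z : z;
}
```

Note that `n` and `f` must be the Unity camera's clip planes, not assumptions about the AR camera; mismatched planes are exactly what causes the border flickering described above.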

Soft edges and alpha blending

Hard occlusion – an object is either fully visible or fully hidden – looks rough. The depth map is always inaccurate at the edges of real objects: blur, ML-estimation artifacts. The virtual object gets cut along a sharp edge, and a jagged border results.

The correct solution is soft occlusion: smooth the depth comparison instead of using a hard cutoff. AROcclusionManager in AR Foundation 5.x supports OcclusionPreferenceMode.PreferEnvironmentOcclusion, and environment depth can additionally be temporally smoothed (environmentDepthTemporalSmoothingRequested) to reduce edge noise. But in URP this must be wired up manually via a custom render pass; it works automatically only in the Built-in RP.

In practice we do the following: in the virtual object's shader, add an occlusion stage with a smoothstep comparing real-world depth against the object's depth. The blend range is 2–5 cm in world coordinates. This gives a smooth transition at edges without obvious artifacts.
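The fragment-stage logic can be sketched as follows. `SampleEnvDepthMeters` is a hypothetical helper that samples the AR depth map at the current screen UV and returns meters; `objDepthMeters` is the fragment's own eye-space depth:

```hlsl
// Soft occlusion in the virtual object's fragment shader (sketch).
const float kBlend = 0.03; // 3 cm blend range; tune within roughly 0.02-0.05

float realDepthMeters = SampleEnvDepthMeters(screenUV); // hypothetical helper
// objDepthMeters: this fragment's eye-space depth, e.g. from the vertex stage.

// 1 when the real surface is clearly behind the object (fully visible),
// 0 when it is clearly in front (fully occluded), smooth in between.
float visibility = smoothstep(objDepthMeters - kBlend,
                              objDepthMeters + kBlend,
                              realDepthMeters);
color.a *= visibility;
```

Alpha blending the visibility factor (rather than clipping) is what turns the jagged depth-map edge into a soft gradient.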

Case: character behind furniture

The project: a mobile AR game in which a character interacts with a real room. The task: the character must "hide" behind real furniture.

Without LiDAR (the main audience was mid-range Android), we had to use ML depth estimation. The problem: ML depth is poor on uniform surfaces (a white wall, a smooth tabletop) – the depth there is unstable, so the character "fell" through the table or popped in front of it chaotically.

The solution was a hybrid approach. For large detected planes (ARPlane) we use the exact geometry from ARPlaneManager (we know its position in space) and render the planes as 3D meshes into the depth buffer. ML depth is used only for non-planar objects: chairs, people, items on the table. This removed about 80% of the artifacts in typical interior scenes.
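The plane half of the hybrid can be sketched like this. `depthOnlyMaterial` stands for a hypothetical material whose shader writes depth but no color (ZWrite On, ColorMask 0); the AR Foundation plane API calls are real:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Hybrid occlusion sketch: detected planes get an exact depth-only material,
// so large flat surfaces occlude reliably even where ML depth is noisy.
public class PlaneDepthOccluder : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;
    [SerializeField] Material depthOnlyMaterial; // hypothetical depth-only shader

    void OnEnable()  => planeManager.planesChanged += OnPlanesChanged;
    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        foreach (var plane in args.added)
        {
            // ARPlaneMeshVisualizer keeps the MeshFilter in sync with the plane
            // geometry; we only swap the material so the plane renders depth alone.
            var renderer = plane.GetComponent<MeshRenderer>();
            if (renderer != null) renderer.sharedMaterial = depthOnlyMaterial;
        }
    }
}
```

ML depth then only has to cover what the planes do not: chairs, people, objects sitting on the table.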

Timeline and process

Estimation starts with an audit of the target devices and the current render pipeline. URP and HDRP require different approaches. HDRP is practically unused in mobile AR – it is too heavy – while Magic Leap and HoloLens have their own specifics.

| Scenario | Timeline |
| --- | --- |
| Basic occlusion via AROcclusionManager (URP) | 3–7 days |
| Soft occlusion with a custom render pass | 1–2 weeks |
| Hybrid approach (planes + ML depth) | 2–4 weeks |
| HoloLens / Magic Leap (separate pipeline) | from 3 weeks |

Cost is quoted after analysis of the project and the target platforms. Key questions: which render pipeline is in use, whether the target audience has LiDAR devices, and whether support for Android devices below ARCore 1.24 is needed.