AR Foundation Interaction Mechanics Development

Our video game development company runs independent projects, jointly creates games with clients, and provides additional operational services. Our team's expertise covers all gaming platforms, allowing us to develop products that match the customer's vision and players' preferences.

Developing augmented reality interaction mechanics (AR Foundation)

AR Foundation is an abstraction layer over ARKit (iOS) and ARCore (Android) that lets a single codebase target both platforms. The abstraction is incomplete, however: underneath it live different physics, different plane-tracking quality, and different depth-map performance. Interaction mechanics therefore have to be designed with both platforms in mind and tested on real devices, not in the editor.

Placement: placing objects in real space

The basic mechanic of any AR app is placing a virtual object on a real surface. AR Foundation does this by raycasting against AR planes: ARRaycastManager.Raycast() casts a ray from the screen tap point and returns a list of ARRaycastHit entries with the hit position and plane normal.

A standard mistake is calling Raycast every frame in Update without debouncing. It works, but on complex scenes ARCore spends 2–4 ms per raycast, which eats into the frame budget. The right approach is to raycast only when the finger position changes, with a minimum movement threshold of 5–10 pixels.
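A minimal sketch of this pattern, assuming the legacy Input Manager; the prefab field and the 8-pixel threshold are illustrative choices, not AR Foundation defaults:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Places a prefab on a detected plane, raycasting only when the finger
// has moved more than a pixel threshold since the last raycast.
public class TapToPlace : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    [SerializeField] GameObject placedPrefab;   // hypothetical prefab reference

    const float MoveThresholdPx = 8f;           // illustrative threshold
    Vector2 lastRaycastPos;
    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);

        // Debounce: skip the raycast if the finger hasn't moved far enough.
        if (touch.phase == TouchPhase.Moved &&
            (touch.position - lastRaycastPos).magnitude < MoveThresholdPx)
            return;

        lastRaycastPos = touch.position;

        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            // Hits are sorted by distance; take the closest plane.
            Pose pose = hits[0].pose;
            Instantiate(placedPrefab, pose.position, pose.rotation);
        }
    }
}
```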

Another problem: the plane may not be detected yet. ARCore and ARKit take 2–5 seconds to detect a horizontal plane on a well-textured surface; on a monotone white table they may never succeed. You need feedback UI: a visual search indicator and an instruction such as "move the camera over a surface."
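One way to drive that feedback, sketched here under the assumption of AR Foundation 4.x/5.x (where ARPlaneManager exposes a planesChanged event; in 6.x it was replaced by trackablesChanged); scanHintUI is a hypothetical UI object:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Shows a "move the camera over a surface" hint until a plane is tracked.
public class PlaneSearchIndicator : MonoBehaviour
{
    [SerializeField] ARPlaneManager planeManager;
    [SerializeField] GameObject scanHintUI;   // hypothetical hint overlay

    void OnEnable()  => planeManager.planesChanged += OnPlanesChanged;
    void OnDisable() => planeManager.planesChanged -= OnPlanesChanged;

    void OnPlanesChanged(ARPlanesChangedEventArgs args)
    {
        // Hide the hint as soon as at least one plane is being tracked.
        if (planeManager.trackables.count > 0)
            scanHintUI.SetActive(false);
    }
}
```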

ARAnchor: why Transform.position alone is not enough

If you place an object in AR and simply remember its worldPosition, the object "floats" relative to the real world as the device moves. ARKit and ARCore periodically recalculate world coordinates as tracking improves, and an object without an anchor drifts.

An ARAnchor is a point the AR system is obligated to track and auto-correct. The object must be a child of the ARAnchor, not simply placed at world coordinates.

In AR Foundation, ARAnchorManager.TryAddAnchorAsync(pose) returns an ARAnchor to attach content to. For reopening a scene (persistent AR), use ARCore Cloud Anchors or an ARKit WorldMap to save anchors between sessions.
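A sketch of anchored placement, assuming a recent AR Foundation (5.1+/6.x) where TryAddAnchorAsync returns an awaitable Result&lt;ARAnchor&gt;; on older versions you would instead add an ARAnchor component to a GameObject at the pose:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Anchors placed content so the AR system keeps correcting its pose.
public class AnchoredPlacement : MonoBehaviour
{
    [SerializeField] ARAnchorManager anchorManager;

    public async void PlaceAnchored(GameObject content, Pose pose)
    {
        var result = await anchorManager.TryAddAnchorAsync(pose);
        if (result.status.IsSuccess())
        {
            // Parent the content to the anchor so tracking corrections
            // applied to the anchor propagate to the object.
            content.transform.SetParent(result.value.transform,
                                        worldPositionStays: false);
        }
    }
}
```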

Occlusion: real world hides virtual object

Without occlusion, a virtual object renders on top of everything: a vase on the table shows through your hand. This instantly destroys the illusion.

AROcclusionManager in AR Foundation provides a depth texture from the sensor (LiDAR on iPhone Pro, algorithmic depth estimation from the camera on Android). A URP shader must test against the depth texture before writing the fragment; this is Environment Depth Occlusion.

Setup: AROcclusionManager.environmentDepthMode = EnvironmentDepthMode.Best (maximum quality), occlusionPreferenceMode = OcclusionPreferenceMode.PreferEnvironmentOcclusion. On Android without LiDAR, depth is estimated, so occlusion works roughly, with artifacts at edges. On iPhone Pro with LiDAR it is precise.
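As a sketch, the setup above in component form; note that current AR Foundation versions expose these as requestedEnvironmentDepthMode and requestedOcclusionPreferenceMode (the plain setters are deprecated):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Requests the best available environment depth and prefers environment
// occlusion; unsupported devices simply ignore the request.
public class OcclusionSetup : MonoBehaviour
{
    [SerializeField] AROcclusionManager occlusionManager;

    void Start()
    {
        occlusionManager.requestedEnvironmentDepthMode = EnvironmentDepthMode.Best;
        occlusionManager.requestedOcclusionPreferenceMode =
            OcclusionPreferenceMode.PreferEnvironmentOcclusion;
    }
}
```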

The standard URP Lit shader does not use the AR depth texture. You need either a Shader Graph with the occlusion node from the AR Foundation shader framework, or custom HLSL with a manual depth comparison.

Interaction: tap, drag, scale

AR has no physical controllers, so interaction happens through the touchscreen. The standard set for AR objects:

Tap to select: ARRaycastManager plus Physics.Raycast against the object's collider. On a hit, highlight the object via a material change or an outline.
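A minimal selection sketch, assuming placed objects carry colliders and the legacy Input Manager; the highlight itself (material swap or outline) is left as a comment:

```csharp
using UnityEngine;

// Selects a placed object with a physics raycast from the tap point.
public class TapToSelect : MonoBehaviour
{
    [SerializeField] Camera arCamera;   // the camera on the AR rig

    void Update()
    {
        if (Input.touchCount != 1) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        Ray ray = arCamera.ScreenPointToRay(touch.position);
        if (Physics.Raycast(ray, out RaycastHit hit, 20f))
        {
            // Material change or outline activation would go here.
            Debug.Log($"Selected {hit.collider.name}");
        }
    }
}
```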

Drag (translate): as the finger moves, a new AR raycast returns a new plane position and the object interpolates toward it. Without interpolation the object jumps; speed = 15f in MoveTowards gives smooth following without lag.
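A sketch of that smoothing, where "selected" is a hypothetical reference set by your tap-to-select logic and 15f is the speed discussed above:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Drags the selected object along detected planes with MoveTowards smoothing.
public class DragOnPlane : MonoBehaviour
{
    [SerializeField] ARRaycastManager raycastManager;
    public Transform selected;   // assigned by selection logic (hypothetical)

    static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();
    Vector3 target;
    bool hasTarget;

    void Update()
    {
        if (selected == null) return;

        if (Input.touchCount == 1 &&
            Input.GetTouch(0).phase == TouchPhase.Moved &&
            raycastManager.Raycast(Input.GetTouch(0).position, hits,
                                   TrackableType.PlaneWithinPolygon))
        {
            target = hits[0].pose.position;
            hasTarget = true;
        }

        // Smooth follow: without this the object snaps to each new hit.
        if (hasTarget)
            selected.position = Vector3.MoveTowards(
                selected.position, target, 15f * Time.deltaTime);
    }
}
```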

Pinch to scale: Touch[0] and Touch[1] are the two fingers. The current distance between them divided by the previous distance gives the scale delta: transform.localScale *= scaleDelta, clamped between Vector3.one * minScale and Vector3.one * maxScale.
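A pinch-scaling sketch under the same assumptions; MinScale and MaxScale are illustrative values:

```csharp
using UnityEngine;

// Two-finger pinch scaling with clamping, using finger deltaPosition
// to reconstruct the previous frame's distance.
public class PinchToScale : MonoBehaviour
{
    public Transform selected;   // assigned by selection logic (hypothetical)
    const float MinScale = 0.2f, MaxScale = 3f;   // illustrative limits

    void Update()
    {
        if (selected == null || Input.touchCount != 2) return;

        Touch t0 = Input.GetTouch(0), t1 = Input.GetTouch(1);
        float current  = (t0.position - t1.position).magnitude;
        float previous = ((t0.position - t0.deltaPosition) -
                          (t1.position - t1.deltaPosition)).magnitude;
        if (previous <= Mathf.Epsilon) return;   // avoid divide-by-zero

        float scaleDelta = current / previous;
        float s = Mathf.Clamp(selected.localScale.x * scaleDelta,
                              MinScale, MaxScale);
        selected.localScale = Vector3.one * s;
    }
}
```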

Rotation: dragging one finger horizontally rotates the object around the world-up axis; a two-finger twist also rotates. For AR we recommend limiting rotation to the Y axis only: rotation around X or Z breaks the feel of an object sitting on a surface.
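The one-finger variant can be sketched as follows; DegreesPerPixel is an illustrative tuning value:

```csharp
using UnityEngine;

// One-finger horizontal drag rotates the object around world up only,
// preserving the "object on a surface" feel.
public class DragToRotate : MonoBehaviour
{
    public Transform selected;   // assigned by selection logic (hypothetical)
    const float DegreesPerPixel = 0.25f;   // illustrative sensitivity

    void Update()
    {
        if (selected == null || Input.touchCount != 1) return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Moved) return;

        // Negative sign so dragging right turns the object clockwise
        // when viewed from above.
        selected.Rotate(Vector3.up,
                        -touch.deltaPosition.x * DegreesPerPixel,
                        Space.World);
    }
}
```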

Timeline: basic placement plus interaction (tap/drag/scale) takes 3–5 business days; a full AR experience with anchors, occlusion, and persistent Cloud Anchors takes 2–4 weeks. Cost is calculated individually after requirements analysis.