Setting up facial rig and character expressions for games

The face is where the player reads emotion. An imprecise facial expression destroys immersion faster than any other technical flaw. Yet the facial rig is one of the most complex pipeline tasks: skeletal animation, blend shapes, procedural constraints, and engine performance requirements all intersect here.

Two Approaches: Bones vs Blend Shapes

In a game context, the choice between bones and morph targets (Blend Shapes / Shape Keys) is determined by the engine and target platform.

Bones (joint-based rig) work in any engine through a standard skinned mesh. They support any Avatar type, integrate into an Animator Controller, and can be driven through Animation Rigging. The downside: for fine deformations around the eyes and lips, bones produce angular results, and many bones are needed for organic movement.

Blend Shapes (Morph Targets) store each face pose as a separate set of mesh offsets. The deformation is linear and easily predictable, which makes it ideal for facial expressions and lip sync. Unity exposes Blend Shapes through SkinnedMeshRenderer.SetBlendShapeWeight(index, weight). The limitation: linear blending between extreme poses can produce a "melting" effect in complex combinations (smile + squint + raised brow).

The correct answer for AAA characters is a combination of both: bones for the eyes, brows, and jaw (precise rotations with clear control), and Blend Shapes for complex deformations such as the cheeks, nasolabial folds, and smile wrinkles.
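The linear morph-target math described above can be sketched in a few lines. This is an illustrative model, not a Unity API: each shape stores per-vertex offsets from the base mesh, and the final vertex is the base plus the weighted sum of all active offsets. That linearity is exactly what makes the result predictable, and also what produces the "melting" look when several extreme poses are stacked.

```python
# Minimal sketch of linear blend-shape (morph target) evaluation.
# A morph target stores per-vertex offsets (deltas) from the base mesh;
# the final vertex is the base position plus the weighted sum of deltas.

def evaluate_blend_shapes(base_vertices, shapes, weights):
    """base_vertices: list of (x, y, z); shapes: {name: list of deltas};
    weights: {name: 0.0..1.0}. Returns the deformed vertex list."""
    result = []
    for i, (x, y, z) in enumerate(base_vertices):
        dx = dy = dz = 0.0
        for name, deltas in shapes.items():
            w = weights.get(name, 0.0)
            ox, oy, oz = deltas[i]
            dx += w * ox
            dy += w * oy
            dz += w * oz
        result.append((x + dx, y + dy, z + dz))
    return result
```

Because every shape simply adds its scaled delta, combining smile + squint + raised brow sums three offsets per vertex with no awareness of each other, which is where corrective shapes come in on high-end rigs.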

Facial Rig Structure on Bones

A minimal set for an expressive character:

  • Jaw: the lower jaw, 1 bone with Limit Rotation. Downward rotation on X: 0–20° for speech, up to 30° for a scream. Also drives a small lower-lip displacement through a constraint
  • UpperLid_L/R, LowerLid_L/R: eyelids, 4 bones with a limited range. Copy Rotation with a minimal Eye-bone contribution, so eye movement slightly drags the lid (the corneal bulge effect)
  • Eye_L/R: eyeballs. Track To a shared Eye_Target, so the animator moves one target instead of each eye separately. Limit: ±30° on X and Y
  • Brow_Inner_L/R, Brow_Mid_L/R, Brow_Outer_L/R: eyebrows, 6 bones for shape control, driven by FK control bones
  • LipCorner_L/R: mouth corners, the key points for a smile or grimace
  • UpperLip_C, LowerLip_C: lip centers for kiss and surprise poses

Total: 16–20 bones for a basic facial rig. A dialogue-heavy game with detailed lip sync needs 30–40.
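Two of the constraint behaviours from the list above, the jaw's Limit Rotation and the lid's low-influence Copy Rotation, reduce to very simple arithmetic. The sketch below shows that logic with illustrative limit values taken from the list; function names and the follow fraction are assumptions, not an engine API:

```python
# Sketch of two facial-rig constraints from the bone list above:
# a Limit Rotation on the jaw, and a low-influence Copy Rotation that
# makes the eyelid slightly follow the eyeball (corneal bulge effect).

JAW_SPEECH_MAX = 20.0   # degrees of downward X rotation for speech
JAW_SCREAM_MAX = 30.0   # degrees allowed for extreme poses (scream)
LID_FOLLOW = 0.1        # fraction of eye rotation copied to the lid (assumed)

def clamp_jaw(angle_deg, screaming=False):
    """Clamp the jaw's X rotation to its allowed range."""
    limit = JAW_SCREAM_MAX if screaming else JAW_SPEECH_MAX
    return max(0.0, min(angle_deg, limit))

def lid_rotation(eye_angle_deg, blink_angle_deg):
    """Eyelid pose = its own blink rotation plus a small copy of the eye."""
    return blink_angle_deg + LID_FOLLOW * eye_angle_deg
```

In a real rig these clamps live in the constraint stack (Limit Rotation, Copy Rotation with an influence slider), but the arithmetic the animator relies on is the same.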

Blend Shapes for Expressions: Naming and ARKit

ARKit's set is the de facto standard for Blend Shape naming in face tracking: 52 named morphs such as eyeBlinkLeft, jawOpen, mouthSmileLeft, and browInnerUp. If you plan live face tracking or want to import ready-made animations, ARKit-standard naming is mandatory; otherwise the mapping requires manual work.

The VRM format for avatars uses a different set: A, I, U, E, O for vowels, plus Joy, Angry, Sorrow, Fun, Blink, BlinkLeft, and BlinkRight. If the project targets VTuber platforms or VRChat, VRM naming is the standard.

In Unity with an Animator, Blend Shape weights are updated through a C# script or an Animation Clip; animated Blend Shapes in FBX are the standard path. When exporting FBX from Blender, enable Shape Keys and make sure they are Relative, not Absolute.
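The manual mapping work mentioned above is essentially a name-translation table. The sketch below retargets ARKit-named weights onto a rig with VRM-style names; the table is a tiny illustrative subset (the pairings are our assumption, a real character needs entries for all 52 ARKit morphs):

```python
# Sketch of retargeting ARKit-named blend shape weights onto a mesh whose
# shape keys use different (here, VRM-style) names. The mapping table is
# an illustrative subset, not a complete or canonical correspondence.

ARKIT_TO_RIG = {
    "eyeBlinkLeft": "BlinkLeft",
    "eyeBlinkRight": "BlinkRight",
    "jawOpen": "A",            # open-mouth vowel shape (assumed pairing)
    "mouthSmileLeft": "Joy",   # assumed pairing
}

def retarget_weights(arkit_weights):
    """Translate {arkit_name: weight} into {rig_name: weight},
    dropping morphs the rig does not have. If two source morphs map
    to one target, keep the stronger weight."""
    out = {}
    for name, w in arkit_weights.items():
        target = ARKIT_TO_RIG.get(name)
        if target is not None:
            out[target] = max(out.get(target, 0.0), w)
    return out
```

With ARKit-standard naming on the mesh itself, this table becomes the identity and the whole step disappears, which is why the naming convention matters.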

Lip Sync: Tools and Integration

For automatic lip sync there are two main options:

Oculus OVR LipSync SDK: a phoneme-based system that works on the audio thread in real time. It outputs weights for 15 visemes, mapped to Blend Shapes through the OVRLipSyncContext component. Supports Unity.

SALSA LipSync Suite (Asset Store): broader mapping, Timeline support for cutscenes, and offline audio processing. Works with both Blend Shapes and joint-based rigs.

Manual lip sync by keyframes remains the option for cutscenes and important dialogue scenes where automation does not reach the needed quality. The process: place the waveform in the Timeline, then keyframe Blend Shape weights phoneme by phoneme at the right positions.
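Whichever tool produces them, automatic lip sync ultimately hands you a per-frame array of viseme weights that must be turned into blend shape weights. The sketch below shows that consumption step; the viseme names and the viseme-to-shape mapping are illustrative assumptions, not the actual OVR LipSync viseme set:

```python
# Sketch of consuming a per-frame viseme weight array (the general shape
# of output an automatic lip-sync system produces) and converting it to
# blend shape weights. Names and mappings are illustrative only.

VISEME_NAMES = ["sil", "PP", "FF", "aa", "E", "O"]
VISEME_TO_SHAPE = {
    "aa": "jawOpen",
    "E": "mouthStretch",
    "O": "mouthPucker",
    "PP": "mouthClose",
    "FF": "mouthFunnel",
}

def visemes_to_blend_weights(frame_weights, threshold=0.05):
    """frame_weights: list aligned with VISEME_NAMES, values 0..1.
    Returns {blend_shape: weight}, ignoring silence and tiny weights
    so the mouth settles to rest between words."""
    out = {}
    for name, w in zip(VISEME_NAMES, frame_weights):
        if name == "sil" or w < threshold:
            continue
        shape = VISEME_TO_SHAPE.get(name)
        if shape:
            out[shape] = max(out.get(shape, 0.0), w)
    return out
```

The threshold is a practical detail: without it, residual near-zero viseme weights keep the mouth faintly twitching during silence.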

Face Skinning Weights

The face is skin-weighted separately from the body, with exceptionally fine weights and many control points. Special zones:

Eye corners. They deform with the lids and must not follow the Eye bone: Eye-bone weight 0, UpperLid 0.5, LowerLid 0.5.

Nasolabial fold. On a smile driven through the LipCorner bone, the fold should deepen; this is better done through a Blend Shape than through bones. If Blend Shapes are unavailable, use a separate Cheek bone with Stretch To.

Forehead. Usually skinned entirely to the Head bone without secondary influences and deformed only through Blend Shapes or the Brow bones.

Timeline

Estimated duration by task:

  • Basic facial rig on bones (16–20 bones): 1–2 days
  • Blend Shapes set (30–52 morphs): 2–4 days
  • Combined rig, bones + Blend Shapes: 3–5 days
  • Lip Sync SDK integration into Unity: 1–2 days

The facial rig depends most of all on the quality of the source face geometry: topology with edge loops around the mouth and eyes simplifies the work considerably. Cost is calculated after assessing the mesh and the expression requirements.