Development of AI System for Virtual Makeup Try-On
Virtual makeup try-on allows users to "try on" cosmetics in real time through a camera feed or an uploaded photo. It is used in beauty brand apps, cosmetics marketplaces, and AR filters.
Implementation Approaches
Approach 1: Landmark-based (fast, for real-time)
- Detection of 468 facial key points (MediaPipe Face Mesh)
- Makeup rendering via mesh overlay
- < 5 ms, works in browser via WebGL
Approach 2: AI-generative (high quality, offline)
- Stable Diffusion inpainting with application area mask
- Realistic texture and reflections
- 5–15 sec, requires GPU
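A key input for the inpainting approach is the application-area mask: the region to repaint (lips, lids, cheeks) is rendered as a white polygon on a black single-channel image, and the inpainting model restricts its edits to the white area. A minimal sketch with PIL; the polygon coordinates are illustrative placeholders, not real landmark-detector output:

```python
from PIL import Image, ImageDraw

def make_inpaint_mask(size, polygon):
    """White-on-black mask: the model may only repaint the white region."""
    mask = Image.new("L", size, 0)                    # single-channel, all black
    ImageDraw.Draw(mask).polygon(polygon, fill=255)   # application area in white
    return mask

# Illustrative lip polygon in pixel coordinates (placeholder values)
mask = make_inpaint_mask((512, 512), [(200, 300), (310, 295), (320, 340), (210, 345)])
```

In practice the polygon would come from the same facial landmarks used in Approach 1, so the two pipelines can share a detection stage.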
MediaPipe approach (real-time)
import mediapipe as mp
import cv2
import numpy as np
from PIL import Image
class RealTimeMakeupAR:
    def __init__(self):
        self.face_mesh = mp.solutions.face_mesh.FaceMesh(
            static_image_mode=False,       # video mode: track landmarks across frames
            max_num_faces=1,
            min_detection_confidence=0.5,
            min_tracking_confidence=0.5
        )

    # MediaPipe Face Mesh point indices for facial regions
    LIPS_INDICES = [61, 185, 40, 39, 37, 0, 267, 269, 270, 409, 291, 375, 321, 405, 314, 17, 84, 181, 91, 146]
    UPPER_LID_L = [362, 382, 381, 380, 374, 373, 390, 249, 263, 466, 388, 387, 386, 385, 384, 398]
    CHEEKS_L = [36, 31, 228, 229, 230, 231, 232, 233, 244, 245, 188, 174, 177, 215, 213, 192]

    def apply_lipstick(self, frame, color, opacity=0.6):
        """Tint the lips. `frame` is a BGR OpenCV image, `color` an (R, G, B) tuple."""
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # MediaPipe expects RGB
        results = self.face_mesh.process(rgb)
        if not results.multi_face_landmarks:
            return frame                               # no face detected: frame unchanged
        landmarks = results.multi_face_landmarks[0]
        h, w = frame.shape[:2]
        # Landmarks are normalized to [0, 1]; scale to pixel coordinates
        lip_points = np.array(
            [[int(landmarks.landmark[i].x * w), int(landmarks.landmark[i].y * h)]
             for i in self.LIPS_INDICES],
            dtype=np.int32
        )
        overlay = frame.copy()
        cv2.fillPoly(overlay, [lip_points], color[::-1])   # RGB -> BGR for OpenCV
        # Alpha-blend the filled overlay onto the original frame
        result = cv2.addWeighted(frame, 1 - opacity, overlay, opacity, 0)
        return result
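The final blending step is a per-pixel convex combination: cv2.addWeighted(frame, 1 - opacity, overlay, opacity, 0) computes (1 - opacity) * frame + opacity * overlay. A minimal NumPy equivalent, useful for testing lipstick colors and opacities without OpenCV:

```python
import numpy as np

def alpha_blend(frame, overlay, opacity):
    """Per-pixel convex combination, matching
    cv2.addWeighted(frame, 1 - opacity, overlay, opacity, 0)."""
    out = (1 - opacity) * frame.astype(np.float32) + opacity * overlay.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

# A pure-blue pixel blended with a pure-red "lipstick" overlay at 60% opacity
frame = np.full((1, 1, 3), (255, 0, 0), dtype=np.uint8)    # BGR blue
overlay = np.full((1, 1, 3), (0, 0, 255), dtype=np.uint8)  # BGR red
blended = alpha_blend(frame, overlay, 0.6)  # -> [[[102, 0, 153]]]
```

An opacity around 0.5–0.7 keeps skin texture visible through the color layer, which is why the method defaults to 0.6 rather than painting the polygon opaquely.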
Timeline
- Browser-based real-time AR makeup (MediaPipe): 3–4 weeks
- AI photo try-on with API: 1–2 weeks
- Full mobile app with catalog: 3–4 months