AI Virtual Makeup Try-On Implementation

We design and deploy artificial intelligence systems, from prototype to production-ready solution. Our team combines expertise in machine learning, data engineering, and MLOps to take AI out of the lab and into real business.

Development of AI System for Virtual Makeup Try-On

Virtual makeup try-on allows users to "try on" cosmetics in real-time through a camera or uploaded photo. Used in beauty brand apps, cosmetics marketplaces, and AR filters.

Implementation Approaches

Approach 1: Landmark-based (fast, for real-time)

  • Detection of 468 facial key points (MediaPipe Face Mesh)
  • Makeup rendering via mesh overlay
  • Under 5 ms per frame; runs in the browser via WebGL

Approach 2: AI-generative (high quality, offline)

  • Stable Diffusion inpainting with application area mask
  • Realistic texture and reflections
  • 5–15 sec, requires GPU
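The generative path can be sketched as: build a mask over the region to repaint, then pass the photo and mask to an inpainting pipeline. The sketch below builds the mask with PIL; the polygon coordinates, model name, and prompt are illustrative assumptions (in production the polygon would come from face landmarks), and the Stable Diffusion call itself is shown commented out because it requires a GPU and downloaded weights.

```python
import numpy as np
from PIL import Image, ImageDraw

def build_region_mask(size, polygon):
    """White-on-black mask marking the area to repaint (lips, lids, cheeks)."""
    mask = Image.new("L", size, 0)
    ImageDraw.Draw(mask).polygon(polygon, fill=255)
    return mask

# Illustrative values; real coordinates would come from a landmark detector.
photo_size = (512, 512)
lip_polygon = [(200, 330), (312, 330), (290, 380), (222, 380)]
mask = build_region_mask(photo_size, lip_polygon)

# Hypothetical pipeline call (diffusers; needs a GPU and model weights):
# from diffusers import StableDiffusionInpaintPipeline
# import torch
# pipe = StableDiffusionInpaintPipeline.from_pretrained(
#     "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
# ).to("cuda")
# result = pipe(prompt="matte red lipstick, realistic texture",
#               image=photo, mask_image=mask).images[0]
```

The mask convention (white = repaint, black = keep) matches what inpainting pipelines generally expect.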

MediaPipe approach (real-time)

import mediapipe as mp
import cv2
import numpy as np

class RealTimeMakeupAR:
    # MediaPipe Face Mesh landmark indices for facial regions
    LIPS_INDICES = [61, 185, 40, 39, 37, 0, 267, 269, 270, 409, 291, 375, 321, 405, 314, 17, 84, 181, 91, 146]
    UPPER_LID_L = [362, 382, 381, 380, 374, 373, 390, 249, 263, 466, 388, 387, 386, 385, 384, 398]
    CHEEKS_L = [36, 31, 228, 229, 230, 231, 232, 233, 244, 245, 188, 174, 177, 215, 213, 192]

    def __init__(self):
        self.face_mesh = mp.solutions.face_mesh.FaceMesh(
            static_image_mode=False,   # video stream: track landmarks between frames
            max_num_faces=1,
            min_detection_confidence=0.5,
            min_tracking_confidence=0.5
        )

    def apply_lipstick(self, frame, color, opacity=0.6):
        """Blend a lipstick color onto the lips. `frame` is BGR, `color` is an RGB tuple."""
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input
        results = self.face_mesh.process(rgb)
        if not results.multi_face_landmarks:
            return frame  # no face detected: return the frame unchanged
        landmarks = results.multi_face_landmarks[0]
        h, w = frame.shape[:2]
        # Landmarks are normalized to [0, 1]; scale to pixel coordinates
        lip_points = np.array(
            [[int(landmarks.landmark[i].x * w), int(landmarks.landmark[i].y * h)]
             for i in self.LIPS_INDICES],
            dtype=np.int32,
        )
        # Fill the lip polygon on a copy, then alpha-blend with the original frame
        overlay = frame.copy()
        cv2.fillPoly(overlay, [lip_points], color[::-1])  # RGB -> BGR
        return cv2.addWeighted(frame, 1 - opacity, overlay, opacity, 0)
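The cv2.addWeighted call above computes a per-pixel convex blend, base * (1 - opacity) + overlay * opacity. A minimal NumPy equivalent with illustrative values makes the arithmetic explicit:

```python
import numpy as np

def alpha_blend(base, overlay, opacity):
    """Per-pixel blend, equivalent to cv2.addWeighted(base, 1-opacity, overlay, opacity, 0)."""
    out = base.astype(np.float32) * (1 - opacity) + overlay.astype(np.float32) * opacity
    return np.clip(out, 0, 255).astype(np.uint8)

base = np.full((2, 2, 3), 100, dtype=np.uint8)     # gray "skin" patch
overlay = np.full((2, 2, 3), 200, dtype=np.uint8)  # solid lipstick color
blended = alpha_blend(base, overlay, 0.6)          # 100 * 0.4 + 200 * 0.6 = 160
```

At opacity 0.6 the lip color dominates while enough of the underlying skin texture shows through to keep the result natural.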

Timeline:

  • Browser-based real-time AR makeup (MediaPipe): 3–4 weeks
  • AI photo try-on via API: 1–2 weeks
  • Full mobile app with catalog: 3–4 months