# Development of Face Verification Systems (1:1)
Face verification answers the question "Is this the same person?" Unlike identification (a 1:N database search), verification compares two specific images and returns a binary decision. Main applications: identity verification during online registration, matching a document photo with a selfie, and mobile app authentication.
## Verification Algorithm
```python
import numpy as np
from insightface.app import FaceAnalysis

class FaceVerifier:
    def __init__(self, threshold: float = 0.5):
        self.app = FaceAnalysis(
            providers=['CUDAExecutionProvider', 'CPUExecutionProvider']
        )
        self.app.prepare(ctx_id=0, det_size=(640, 640))
        self.threshold = threshold  # adjusted to FAR/FRR requirements

    def verify(self, image1: np.ndarray,
               image2: np.ndarray) -> dict:
        face1 = self._extract_face(image1)
        face2 = self._extract_face(image2)
        if face1 is None or face2 is None:
            return {'verified': False, 'reason': 'face_not_detected'}
        # Cosine similarity between ArcFace embeddings
        similarity = self._cosine_similarity(face1.embedding, face2.embedding)
        return {
            'verified': similarity >= self.threshold,
            'similarity': float(similarity),
            'threshold': self.threshold
        }

    def _extract_face(self, image: np.ndarray):
        # Take the largest detected face; None if nothing is found
        faces = self.app.get(image)
        if not faces:
            return None
        return max(faces, key=lambda f: (f.bbox[2] - f.bbox[0]) * (f.bbox[3] - f.bbox[1]))

    def _cosine_similarity(self, a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```
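Note that ArcFace embeddings are usually compared after L2 normalization, in which case cosine similarity reduces to a plain dot product. A quick numpy sanity check on random vectors (not real embeddings) illustrates this:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
a, b = rng.normal(size=512), rng.normal(size=512)
a_unit, b_unit = a / np.linalg.norm(a), b / np.linalg.norm(b)

# for unit-norm vectors, cosine similarity equals the dot product
assert abs(cosine_similarity(a, b) - float(np.dot(a_unit, b_unit))) < 1e-9
```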
## Threshold Tuning: FAR vs FRR
FAR (False Accept Rate) is the probability of accepting an impostor as a legitimate user; it is critical for security. FRR (False Reject Rate) is the probability of rejecting a legitimate user; it affects UX.
These metrics trade off against each other: lowering FAR raises FRR. The threshold choice depends on the application:
| Application | Priority | Typical FAR |
|---|---|---|
| Mobile authentication | UX > Security | 0.1–1% |
| Online banking, KYC | Security > UX | 0.01–0.1% |
| Border control | Maximum security | < 0.001% |
| Physical access (office) | Balance | 0.01–0.1% |
EER (Equal Error Rate) is the operating point where FAR = FRR. For ArcFace on LFW, EER ≈ 0.17%.
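The FAR/FRR trade-off can be illustrated by sweeping a threshold over genuine and impostor score distributions. The scores below are synthetic, generated for illustration only, not from a real model:

```python
import numpy as np

def far_frr(genuine: np.ndarray, impostor: np.ndarray, threshold: float):
    far = float(np.mean(impostor >= threshold))  # impostors wrongly accepted
    frr = float(np.mean(genuine < threshold))    # genuine users wrongly rejected
    return far, frr

rng = np.random.default_rng(42)
genuine = rng.normal(0.75, 0.08, 10_000)   # same-person similarity scores
impostor = rng.normal(0.15, 0.10, 10_000)  # different-person scores

thresholds = np.linspace(0.0, 1.0, 1001)
rates = [far_frr(genuine, impostor, t) for t in thresholds]
eer_idx = int(np.argmin([abs(far - frr) for far, frr in rates]))
print(f"EER threshold ~ {thresholds[eer_idx]:.3f}")
```

In production the sweep is run on a labeled validation set of genuine/impostor pairs, and the threshold is then fixed at the target FAR rather than at the EER point.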
## Liveness Detection
Without anti-spoofing, verification is vulnerable to presentation attacks: a printed photo, a photo replayed on a screen, a 3D mask. Required components:
**Passive liveness check**: analyze skin texture (LBP, FrequentNet) and detect screen artifacts:
```python
import numpy as np
from silent_face_anti_spoofing import AntiSpoof

anti_spoof = AntiSpoof(model_path='2.7_80x80_MiniFASNetV2.pth')

def check_liveness(face_crop: np.ndarray) -> dict:
    prediction = anti_spoof.predict(face_crop)
    return {
        'is_real': prediction['label'] == 1,  # label 1 = live face
        'score': prediction['probability']
    }
```
**Active liveness check**: the user performs a random action (blink, head turn, speaking a digit) and the resulting sequence of frames is validated.
## Document Verification (KYC)
For Know Your Customer (KYC) tasks, the document photo (passport, driver's license) is compared with a selfie.
Specifics: document photos are often low quality, may be scans with watermarks, and are taken under different shooting conditions. Document photo preprocessing: detect the photo zone, correct perspective, normalize brightness.
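The perspective-correction step can be sketched as a direct four-point homography; in practice OpenCV's `cv2.getPerspectiveTransform` / `cv2.warpPerspective` do the same job. The corner coordinates below are hypothetical:

```python
import numpy as np

def homography(src, dst):
    # solve for the 3x3 perspective matrix H mapping src -> dst (4 point pairs)
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# hypothetical corners of the photo zone detected on a document scan
src = [(105, 210), (290, 215), (300, 460), (98, 455)]
dst = [(0, 0), (240, 0), (240, 320), (0, 320)]  # upright 240x320 crop
H = homography(src, dst)
```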
ArcFace verification accuracy for document↔selfie matching is typically 94–97% TAR@FAR=0.1%.
## Metrics and Benchmarks
- **LFW** (Labeled Faces in the Wild): the academic standard. ArcFace accuracy: 99.83%
- **IJB-B / IJB-C**: harder datasets with video frames. ArcFace TAR@FAR=1e-4: 94.0% / 96.5%
- **MegaFace Challenge**: 1M distractors. ArcFace Rank-1 identification: 98.35%
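A TAR@FAR figure like the ones above is computed by picking the threshold from the impostor-score quantile at the target FAR, then measuring the accepted fraction of genuine pairs (synthetic scores here, for illustration only):

```python
import numpy as np

def tar_at_far(genuine: np.ndarray, impostor: np.ndarray, far: float) -> float:
    # threshold = (1 - far) quantile of impostor scores;
    # TAR is the fraction of genuine pairs accepted at that threshold
    threshold = float(np.quantile(impostor, 1.0 - far))
    return float(np.mean(genuine >= threshold))

rng = np.random.default_rng(7)
genuine = rng.normal(0.75, 0.08, 50_000)
impostor = rng.normal(0.15, 0.10, 50_000)
print(f"TAR@FAR=1e-3: {tar_at_far(genuine, impostor, 1e-3):.4f}")
```

Note that estimating TAR@FAR=1e-4 reliably requires at least tens of thousands of impostor pairs, which is why the large IJB-C protocol is used for that operating point.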
## Development Timelines
| Application | Timeline |
|---|---|
| Mobile app verification | 2–3 weeks |
| KYC verification with documents | 3–5 weeks |
| High-reliability verification + liveness | 4–7 weeks |