AI System for Product Recognition on Store Shelves
The task of recognizing products on retail shelves is a combination of detection (where is the product) and identification (what product, which specific SKU). Complexity in scale: a large retailer has 10,000–50,000 unique SKUs, and packaging changes regularly.
Product Detection: Fine-tuning on Shelf Photos
from ultralytics import YOLO
import yaml
from pathlib import Path
def prepare_retail_dataset_config(
data_dir: str,
class_names: list[str]
) -> str:
"""
Dataset config for YOLOv8.
For retail shelves we recommend imgsz=1280 — packaging details matter.
"""
config = {
'path': data_dir,
'train': 'images/train',
'val': 'images/val',
'test': 'images/test',
'nc': len(class_names),
'names': class_names
}
config_path = Path(data_dir) / 'dataset.yaml'
with open(config_path, 'w') as f:
yaml.dump(config, f, allow_unicode=True)
return str(config_path)
# Train product detector
model = YOLO('yolov8l.pt')
model.train(
data='retail_dataset.yaml',
imgsz=1280, # important: small price tags and text require resolution
batch=8, # lower batch at 1280
epochs=200,
device='0',
augment=True,
mosaic=0.5, # reduce mosaic — don't want to change product scale
copy_paste=0.3, # useful for retail
rect=False # rectangular batches impair small object detection
)
SKU Identification: Embedding + kNN
With 10,000+ SKUs, softmax classifier doesn't scale: adding a new product requires retraining the entire model. Embedding approach (metric learning) solves this: new SKU = add its embedding to the index without retraining.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
import timm
import faiss
import numpy as np
class SKUEmbeddingModel(nn.Module):
"""
ArcFace-like metric learning for product identification.
Train on product crops → 512-dim embedding.
"""
def __init__(self, num_skus: int, embedding_dim: int = 512):
super().__init__()
self.backbone = timm.create_model(
'efficientnet_b4',
pretrained=True,
num_classes=0
)
self.embedding = nn.Sequential(
nn.Linear(self.backbone.num_features, embedding_dim),
nn.BatchNorm1d(embedding_dim)
)
# ArcFace head for training
self.arcface = ArcFaceHead(embedding_dim, num_skus)
def forward(self, x: torch.Tensor, labels: torch.Tensor = None):
feat = self.backbone(x)
emb = F.normalize(self.embedding(feat), dim=1)
if labels is not None:
return self.arcface(emb, labels)
return emb
class ArcFaceHead(nn.Module):
def __init__(self, dim: int, num_classes: int,
margin: float = 0.3, scale: float = 32.0):
super().__init__()
self.weight = nn.Parameter(torch.randn(num_classes, dim))
self.margin = margin
self.scale = scale
def forward(self, emb: torch.Tensor, labels: torch.Tensor):
import math
W = F.normalize(self.weight, dim=1)
cosine = F.linear(emb, W)
# Apply margin only to correct class
one_hot = torch.zeros_like(cosine)
one_hot.scatter_(1, labels.unsqueeze(1), 1)
phi = cosine - self.margin
output = (one_hot * phi + (1 - one_hot) * cosine) * self.scale
return F.cross_entropy(output, labels)
class SKUFAISSIndex:
"""FAISS index for fast SKU similarity search"""
def __init__(self, embedding_dim: int = 512):
self.index = faiss.IndexFlatIP(embedding_dim) # inner product = cosine when normalized
self.sku_ids = []
def add_sku(self, sku_id: str, embedding: np.ndarray) -> None:
emb_norm = embedding / (np.linalg.norm(embedding) + 1e-8)
self.index.add(emb_norm.reshape(1, -1).astype(np.float32))
self.sku_ids.append(sku_id)
def search(
self, query_embedding: np.ndarray, top_k: int = 5
) -> list[dict]:
q = (query_embedding / (np.linalg.norm(query_embedding) + 1e-8)
).reshape(1, -1).astype(np.float32)
scores, indices = self.index.search(q, top_k)
return [
{'sku_id': self.sku_ids[idx], 'score': float(scores[0][i])}
for i, idx in enumerate(indices[0])
if idx < len(self.sku_ids)
]
Handling Package Changes
The main operational challenge in retail is packaging refresh. Brands change design annually, and the model starts making mistakes on new packages.
Solution: online index update. For the embedding approach, it's enough to photograph the new packaging and add the embedding to the FAISS index. The old embedding can be deleted or kept (model automatically prefers closer one).
def update_sku_appearance(
sku_index: SKUFAISSIndex,
model: SKUEmbeddingModel,
sku_id: str,
new_product_images: list,
keep_old: bool = False # False = replace, True = add variant
) -> None:
model.eval()
embeddings = []
with torch.no_grad():
for img in new_product_images:
emb = model(img.unsqueeze(0).cuda()).cpu().numpy()
embeddings.append(emb.squeeze())
# Average over multiple angles
mean_emb = np.mean(embeddings, axis=0)
if not keep_old:
# Remove old entries (requires IndexIDMap for FAISS)
pass
sku_index.add_sku(sku_id, mean_emb)
print(f'Updated SKU {sku_id} with {len(new_product_images)} images')
Accuracy on Real Retail Data
| SKU Base | Method | Top-1 Accuracy | Top-5 Accuracy | Update Time |
|---|---|---|---|---|
| 1,000 SKU | Softmax | 91.4% | 98.2% | Retraining (days) |
| 1,000 SKU | CLIP zero-shot | 78.3% | 91.7% | Instant |
| 1,000 SKU | ArcFace + FAISS | 95.8% | 99.1% | Seconds |
| 10,000 SKU | ArcFace + FAISS | 92.3% | 97.8% | Seconds |
| 50,000 SKU | ArcFace + FAISS | 87.1% | 95.4% | Seconds |
Timeline
| Task | Timeline |
|---|---|
| Detector + identifier for pilot (500 SKU) | 4–6 weeks |
| Industrial system (10,000+ SKU) | 8–14 weeks |
| Integration with SAP/1C + mobile app | 12–20 weeks |







