Active Learning Implementation for Labeling Optimization

Implementing Active Learning to Optimize Labeling

Data labeling is often the most expensive part of an ML project. Active learning lets the model itself choose which examples to label next, focusing on the most uncertain or informative ones. The result: labeling costs can drop 5-10x at the same accuracy.

Example selection strategies

Uncertainty Sampling, the classic approach:

import numpy as np
from sklearn.base import BaseEstimator

class UncertaintySampler:
    def __init__(self, model: BaseEstimator, strategy: str = 'entropy'):
        self.model = model
        self.strategy = strategy

    def query(self, X_unlabeled: np.ndarray, n_instances: int = 10) -> np.ndarray:
        """
        Select the n most uncertain examples for labeling.
        """
        proba = self.model.predict_proba(X_unlabeled)

        if self.strategy == 'entropy':
            # Maximum entropy: the model is maximally unsure
            scores = -np.sum(proba * np.log(proba + 1e-10), axis=1)

        elif self.strategy == 'margin':
            # Smallest margin between the top-2 classes
            sorted_proba = np.sort(proba, axis=1)
            scores = 1 - (sorted_proba[:, -1] - sorted_proba[:, -2])

        elif self.strategy == 'least_confident':
            # Low probability of the most likely class = low confidence
            scores = 1 - proba.max(axis=1)

        else:
            raise ValueError(f"Unknown strategy: {self.strategy}")

        # Indices of the n most uncertain examples
        return np.argsort(scores)[-n_instances:]
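As a quick sanity check, the entropy strategy can be exercised on synthetic data. This is a minimal sketch assuming scikit-learn is installed; the entropy query is written inline (equivalent to `UncertaintySampler(strategy='entropy')`) so the snippet runs standalone:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic pool: 1000 samples, of which only the first 50 start out labeled
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_labeled, y_labeled = X[:50], y[:50]
X_unlabeled = X[50:]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_labeled, y_labeled)

# Inline entropy query, same scoring as the 'entropy' branch above
proba = model.predict_proba(X_unlabeled)
scores = -np.sum(proba * np.log(proba + 1e-10), axis=1)
query_idx = np.argsort(scores)[-10:]  # the 10 most uncertain samples
```

The returned indices point into `X_unlabeled`; in a real loop they would be sent to an annotator and then removed from the pool.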

Query by Committee, based on ensemble disagreement:

import numpy as np
from sklearn.base import clone

class CommitteeSampler:
    def __init__(self, base_estimator, n_members: int = 5):
        self.committee = [clone(base_estimator) for _ in range(n_members)]

    def fit_committee(self, X_labeled: np.ndarray, y_labeled: np.ndarray):
        """
        Each committee member is trained on a bootstrap sample.
        """
        n = len(X_labeled)
        for member in self.committee:
            bootstrap_idx = np.random.choice(n, n, replace=True)
            member.fit(X_labeled[bootstrap_idx], y_labeled[bootstrap_idx])

    def query(self, X_unlabeled: np.ndarray, n_instances: int = 10) -> np.ndarray:
        """
        Disagreement = vote entropy: the more the members disagree,
        the more valuable the example.
        """
        predictions = np.array([
            member.predict(X_unlabeled) for member in self.committee
        ])  # (n_members, n_samples)

        vote_entropy = []
        for sample_idx in range(X_unlabeled.shape[0]):
            votes = predictions[:, sample_idx]
            unique, counts = np.unique(votes, return_counts=True)
            probs = counts / len(votes)
            entropy = -np.sum(probs * np.log(probs + 1e-10))
            vote_entropy.append(entropy)

        return np.argsort(vote_entropy)[-n_instances:]
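A small demonstration of the same vote-entropy idea, written inline so it runs standalone (the bootstrap step and entropy computation mirror `fit_committee` and `query` above; the shallow decision trees are an arbitrary choice of base estimator):

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_lab, y_lab, X_pool = X[:40], y[:40], X[40:]

# Bootstrap committee of 5 shallow trees, as in fit_committee
committee = [clone(DecisionTreeClassifier(max_depth=3, random_state=i))
             for i in range(5)]
n = len(X_lab)
for member in committee:
    idx = rng.choice(n, n, replace=True)
    member.fit(X_lab[idx], y_lab[idx])

# Vote entropy over the pool, as in query
preds = np.array([m.predict(X_pool) for m in committee])  # (5, n_pool)
vote_entropy = np.zeros(preds.shape[1])
for j in range(preds.shape[1]):
    _, counts = np.unique(preds[:, j], return_counts=True)
    p = counts / preds.shape[0]
    vote_entropy[j] = -np.sum(p * np.log(p + 1e-10))

query_idx = np.argsort(vote_entropy)[-10:]
```

Samples where all five trees agree score near zero; the selected ten are where the committee splits most evenly.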

Core-Set Sampling for Diversity

Geometric coverage of the feature space:

import numpy as np
from sklearn.metrics import pairwise_distances

def core_set_selection(X_labeled: np.ndarray,
                       X_unlabeled: np.ndarray,
                       n_instances: int) -> np.ndarray:
    """
    Core-Set: pick the points farthest from everything already labeled.
    Ensures diversity: we avoid picking many similar uncertain examples.
    """
    selected_indices = []
    labeled_pool = X_labeled.copy()

    for _ in range(n_instances):
        # Distance from each unlabeled point to its nearest labeled point
        distances = pairwise_distances(X_unlabeled, labeled_pool)
        min_distances = distances.min(axis=1)

        # Pick the point with the largest distance to its nearest labeled point
        best_idx = np.argmax(min_distances)
        selected_indices.append(best_idx)

        # Add the chosen point to the labeled pool; its own distance drops
        # to zero, so it will not be chosen again
        labeled_pool = np.vstack([labeled_pool, X_unlabeled[best_idx]])

    return np.array(selected_indices)
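A quick property check on random 2D data (the function body is repeated inline so the snippet runs standalone): because a selected point's distance to the labeled pool becomes zero, greedy farthest-first selection never picks the same pool point twice on continuous data.

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def core_set_selection(X_labeled, X_unlabeled, n_instances):
    # Same greedy farthest-first loop as above
    selected_indices = []
    labeled_pool = X_labeled.copy()
    for _ in range(n_instances):
        min_distances = pairwise_distances(X_unlabeled, labeled_pool).min(axis=1)
        best_idx = np.argmax(min_distances)
        selected_indices.append(best_idx)
        labeled_pool = np.vstack([labeled_pool, X_unlabeled[best_idx]])
    return np.array(selected_indices)

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(10, 2))    # already-labeled points
X_pool = rng.normal(size=(300, 2))  # unlabeled pool

selected = core_set_selection(X_lab, X_pool, n_instances=20)
```

In practice core-set selection is often combined with an uncertainty score, e.g. by restricting the pool to the top-k most uncertain examples first.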

Active Learning for NLP (Sequence Labeling)

Token-level uncertainty for NER:

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

def ner_uncertainty_sampling(texts: list,
                             model, tokenizer,
                             n_instances: int = 20) -> list:
    """
    For NER: uncertainty at the token level.
    Aggregation: the maximum token entropy within a sentence.
    """
    model.eval()
    sentence_uncertainties = []

    for i, text in enumerate(texts):
        inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)

        with torch.no_grad():
            outputs = model(**inputs)

        # Softmax probabilities for each token
        probs = torch.softmax(outputs.logits, dim=-1).squeeze(0)  # (seq_len, n_labels)

        # Entropy of each token
        token_entropy = -(probs * torch.log(probs + 1e-10)).sum(dim=-1)

        # Aggregation: the most uncertain token in the sentence
        sentence_uncertainty = token_entropy.max().item()
        sentence_uncertainties.append((i, sentence_uncertainty))

    # Top-N most uncertain sentences
    sentence_uncertainties.sort(key=lambda x: x[1], reverse=True)
    return [idx for idx, _ in sentence_uncertainties[:n_instances]]
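The choice of aggregation matters: max entropy (used above) surfaces a sentence with even a single ambiguous token, while mean entropy dilutes that signal across the sentence. A torch-free sketch on synthetic per-token probabilities (the `sentence_uncertainty` helper and the numbers are illustrative, not part of the pipeline above):

```python
import numpy as np

def sentence_uncertainty(token_probs: np.ndarray, agg: str = 'max') -> float:
    """token_probs: (seq_len, n_labels), each row sums to 1."""
    token_entropy = -(token_probs * np.log(token_probs + 1e-10)).sum(axis=1)
    return float(token_entropy.max() if agg == 'max' else token_entropy.mean())

# Sentence A: uniformly confident tokens; sentence B: one ambiguous token
confident = np.tile([0.98, 0.01, 0.01], (8, 1))
one_hard_token = confident.copy()
one_hard_token[3] = [0.4, 0.35, 0.25]  # a single hard token

# Under max aggregation, B scores far above A and gets labeled first
gap = sentence_uncertainty(one_hard_token) - sentence_uncertainty(confident)
```

For entity-sparse corpora the max is usually the safer default, since most tokens in a sentence are confidently tagged as "O".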

Active Learning Loop

Full cycle with integration into the labeling tool:

import numpy as np

class ActiveLearningPipeline:
    def __init__(self, model, sampler, labeling_budget: int):
        self.model = model
        self.sampler = sampler
        self.budget = labeling_budget
        self.labeled_count = 0
        self.performance_history = []

    def run(self, X_initial: np.ndarray, y_initial: np.ndarray,
            X_pool: np.ndarray, batch_size: int = 20):
        """
        The loop:
        1. Train on the labeled data
        2. Select the most informative examples from the pool
        3. Send them out for labeling
        4. Add them to the labeled set
        5. Repeat
        """
        X_labeled, y_labeled = X_initial.copy(), y_initial.copy()
        X_unlabeled = X_pool.copy()

        while self.labeled_count < self.budget and len(X_unlabeled) > 0:
            # Train
            self.model.fit(X_labeled, y_labeled)

            # Track progress (in practice, evaluate on a held-out
            # validation set, not the training data)
            current_metric = self.evaluate(X_labeled, y_labeled)
            self.performance_history.append({
                'n_labeled': len(X_labeled),
                'metric': current_metric
            })

            # Select the next batch
            query_idx = self.sampler.query(X_unlabeled, n_instances=batch_size)

            # Simulated labeling (in production: the annotator's interface)
            new_y = get_labels_from_annotator(X_unlabeled[query_idx])

            X_labeled = np.vstack([X_labeled, X_unlabeled[query_idx]])
            y_labeled = np.concatenate([y_labeled, new_y])
            X_unlabeled = np.delete(X_unlabeled, query_idx, axis=0)
            self.labeled_count += len(query_idx)

        return self.performance_history
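The pipeline above relies on hooks left open (`self.evaluate`, `get_labels_from_annotator`). A minimal standalone sketch of the same cycle, where looking up the known labels `y_pool[q]` stands in for a human annotator and entropy sampling is inlined:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=1)
X_lab, y_lab = X[:30].copy(), y[:30].copy()   # small initial seed set
X_pool, y_pool = X[30:], y[30:]               # pool; y_pool plays the oracle

model = LogisticRegression(max_iter=1000)
budget, batch, labeled = 200, 20, 0
history = []

while labeled < budget and len(X_pool) > 0:
    model.fit(X_lab, y_lab)                   # 1. train

    # 2. entropy-based query, as in UncertaintySampler
    proba = model.predict_proba(X_pool)
    scores = -np.sum(proba * np.log(proba + 1e-10), axis=1)
    q = np.argsort(scores)[-batch:]

    # 3.-4. "annotate" the batch and move it into the labeled set
    X_lab = np.vstack([X_lab, X_pool[q]])
    y_lab = np.concatenate([y_lab, y_pool[q]])
    X_pool = np.delete(X_pool, q, axis=0)
    y_pool = np.delete(y_pool, q)

    labeled += len(q)
    history.append(len(X_lab))                # 5. repeat until budget spent
```

With budget 200 and batch 20 this runs ten rounds and ends with 230 labeled examples; swapping the oracle lookup for calls to a labeling platform gives the production loop.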

Typical result: on a text classification task with a dataset of 50,000 examples, active learning reaches roughly 90% of the quality achieved by random sampling while using only 15-20% of the labeling volume. Integration with labeling platforms: Label Studio, Prodigy, Scale AI.

Timeframe: uncertainty sampling plus a basic AL loop and Label Studio integration takes 2-3 weeks; committee sampling, Core-Set, NLP/NER active learning, and a cold-start strategy take 6-8 weeks.