Setting up AI model response quality monitoring
Monitoring LLM response quality in production means detecting degradation before users start complaining. We rely on automatic metrics: response length, toxicity, relevance, and hallucination signals.
Quality metrics for monitoring
Proxy metrics (computed automatically, no LLM call):
- Response length: a sharp decrease → likely prompt regression
- Refusal rate: rising share of refusals to answer
- Incomplete response rate: responses truncated by max_tokens
- Repetition rate: loops in generation
LLM-as-judge metrics (more expensive, optional):
- Relevance to query
- Factual consistency
- Helpfulness score
- Toxicity / safety
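A minimal sketch of the LLM-as-judge path, leaving the actual model call out: `build_judge_prompt` and `parse_judge_verdict` are hypothetical helpers (not from the original), showing one way to ask a judge model for a structured verdict and to parse its reply defensively.

```python
import json
import re

def build_judge_prompt(request: str, response: str) -> str:
    """Prompt asking a judge model for a JSON verdict (hypothetical helper)."""
    return (
        "Rate the assistant response on relevance, factual consistency and "
        "helpfulness, each 1-5, and flag toxicity.\n"
        f"User request: {request}\n"
        f"Assistant response: {response}\n"
        'Reply with JSON only: {"relevance": n, "factuality": n, '
        '"helpfulness": n, "toxic": true/false}'
    )

def parse_judge_verdict(raw: str) -> dict:
    """Extract the JSON object from the judge reply, tolerating extra text."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON verdict found in judge reply")
    return json.loads(match.group(0))
```

The defensive regex matters in practice: judge models often wrap the JSON in pleasantries even when told not to.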
The proxy signals are computed without any model call:

```python
from dataclasses import dataclass

@dataclass
class QualitySignals:
    response_length_tokens: int = 0
    is_very_short: bool = False
    is_refusal: bool = False
    is_truncated: bool = False
    repetition_score: float = 0.0
    has_loops: bool = False

class ResponseQualityMonitor:
    def analyze_response(self, request: str, response: str) -> QualitySignals:
        signals = QualitySignals()
        # Length (count_tokens: a tokenizer helper, e.g. tiktoken-based)
        signals.response_length_tokens = count_tokens(response)
        signals.is_very_short = signals.response_length_tokens < 20
        # Refusal to answer (English and Russian patterns)
        refusal_patterns = ["i cannot", "i'm unable", "i don't have access",
                            "не могу", "не в состоянии", "отказываюсь"]
        signals.is_refusal = any(p in response.lower() for p in refusal_patterns)
        # Truncation
        signals.is_truncated = response.endswith(("...", "—", "–"))
        # Repetition (generation loops): share of duplicate bigrams
        words = response.split()
        if len(words) > 20:
            unique_bigrams = len(set(zip(words, words[1:])))
            total_bigrams = len(words) - 1
            signals.repetition_score = 1 - unique_bigrams / total_bigrams
            signals.has_loops = signals.repetition_score > 0.4
        return signals
```
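Per-response booleans only become alertable once aggregated over a window. A sketch of one way to do that, assuming a hypothetical `QualityWindow` aggregator (not from the original) that keeps the last N signals and exposes rates:

```python
from collections import deque

class QualityWindow:
    """Rolling window of per-response signals -> rates (hypothetical aggregator)."""

    def __init__(self, maxlen: int = 500):
        self.refusals: deque[bool] = deque(maxlen=maxlen)
        self.truncations: deque[bool] = deque(maxlen=maxlen)

    def record(self, is_refusal: bool, is_truncated: bool) -> None:
        # Appending beyond maxlen evicts the oldest observation
        self.refusals.append(is_refusal)
        self.truncations.append(is_truncated)

    @property
    def refusal_rate(self) -> float:
        return sum(self.refusals) / len(self.refusals) if self.refusals else 0.0

    @property
    def truncation_rate(self) -> float:
        return sum(self.truncations) / len(self.truncations) if self.truncations else 0.0
```

A bounded `deque` keeps memory constant and makes the rate a true sliding-window estimate rather than an all-time average that old traffic would dominate.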
Quality drift over time
Problem: LLM API providers update models without notice, so yesterday's gpt-4o isn't today's. We track a moving window of quality metrics against a historical baseline:
```python
import numpy as np
from scipy.stats import ks_2samp
from dataclasses import dataclass

@dataclass
class DriftDetection:
    metric: str
    is_drifted: bool
    relative_change: float
    direction: str
    severity: str

def detect_quality_drift(
    metric: str,
    recent_values: list[float],    # last N requests
    baseline_values: list[float],  # historical baseline
) -> DriftDetection:
    # Kolmogorov-Smirnov test for a shift in the distribution
    statistic, p_value = ks_2samp(baseline_values, recent_values)
    # Mean shift relative to the baseline
    recent_mean = np.mean(recent_values)
    baseline_mean = np.mean(baseline_values)
    relative_change = (recent_mean - baseline_mean) / baseline_mean
    return DriftDetection(
        metric=metric,
        is_drifted=p_value < 0.05,
        # Direction assumes higher is better for this metric
        direction="improvement" if relative_change > 0 else "degradation",
        relative_change=relative_change,
        severity=("high" if abs(relative_change) > 0.10
                  else "medium" if abs(relative_change) > 0.05
                  else "low"),
    )
```
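To see the KS test catch a silent model swap, here is a self-contained run on synthetic response-length data (the distributions and sample sizes are made up for illustration):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Baseline: response lengths centered around 200 tokens
baseline = rng.normal(200, 30, size=1000)
# Recent: the provider silently shipped a model that answers more tersely
recent = rng.normal(150, 30, size=200)

statistic, p_value = ks_2samp(baseline, recent)
relative_change = (recent.mean() - baseline.mean()) / baseline.mean()
print(f"p={p_value:.2e}, shift={relative_change:+.1%}")  # p well below 0.05: drift
```

With a 25% mean shift and these sample sizes the p-value is far below 0.05, so the detector would flag high-severity degradation; the KS test also catches distribution changes that leave the mean intact, such as responses splitting into very short and very long clusters.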
Alerts and dashboards
The monitor publishes Prometheus metrics:
```python
from prometheus_client import Counter, Gauge, Histogram

RESPONSE_LENGTH = Histogram(
    "llm_response_length_tokens", "Response length distribution",
    buckets=[10, 50, 100, 200, 500, 1000, 2000],
)
REFUSAL_COUNT = Counter("llm_refusal_total", "Refusal responses")
QUALITY_SCORE = Gauge("llm_quality_score", "Rolling quality score", ["model"])

# Alert rules:
# - ALERT if refusal_rate over 15 minutes > 5%
# - ALERT if mean response length < 50 tokens (was > 150)
# - ALERT if quality_score < baseline - 0.1
```
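The first two alert comments can be expressed as Prometheus alerting rules. A sketch under assumptions: `llm_requests_total` is an assumed companion request counter not shown above, and the `_sum`/`_count` series are the ones `prometheus_client` exposes for the `llm_response_length_tokens` histogram.

```yaml
groups:
  - name: llm_quality
    rules:
      - alert: LLMHighRefusalRate
        # llm_requests_total is an assumed total-request counter
        expr: >
          rate(llm_refusal_total[15m]) / rate(llm_requests_total[15m]) > 0.05
        for: 15m
        labels:
          severity: warning
      - alert: LLMShortResponses
        # Mean response length from the histogram's _sum / _count series
        expr: >
          rate(llm_response_length_tokens_sum[15m])
          / rate(llm_response_length_tokens_count[15m]) < 50
        for: 15m
        labels:
          severity: warning
```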