Fine-Tuned Model Quality Evaluation (Benchmarks, BLEU, ROUGE, Perplexity)


Quality evaluation is a mandatory step after each fine-tuning iteration. Without a structured system of metrics, it is impossible to tell whether the model improved after fine-tuning, where exactly it makes mistakes, and when to stop training. Proper evaluation saves time on unnecessary iterations and prevents deploying a degraded model.

Metric evaluation hierarchy

Level 1: Automatic metrics. Fast, cheap, computed without human involvement; they give only a rough estimate.

Level 2: LLM-as-judge. A strong model (GPT-4o, Claude 3.5 Sonnet) scores the answers of the model under test. With a well-designed prompt it correlates well with human judgment.

Level 3: Human evaluation. The gold standard, but expensive. Use it for final validation and for calibrating the two lower levels.
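The three levels can be wired into a simple triage loop: run cheap automatic metrics on everything, escalate only ambiguous cases to the LLM judge, and reserve humans for a final sample. A minimal sketch (thresholds and function names are illustrative, not from the source):

```python
def route_example(auto_score: float,
                  low: float = 0.3, high: float = 0.8) -> str:
    """Decide which evaluation level an example needs next."""
    if auto_score >= high:
        return "accept"          # automatic metrics are confident
    if auto_score <= low:
        return "reject"          # clearly degraded output
    return "llm_judge"           # ambiguous: escalate to level 2

def triage(scored_examples: list[tuple[str, float]]) -> dict[str, list[str]]:
    """Group examples by the evaluation level they require."""
    buckets: dict[str, list[str]] = {"accept": [], "reject": [], "llm_judge": []}
    for example_id, score in scored_examples:
        buckets[route_example(score)].append(example_id)
    return buckets
```

With such routing, only the middle band of uncertain examples incurs LLM-judge cost, which is usually a small fraction of the evaluation set.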

Metrics for text generation tasks

BLEU (Bilingual Evaluation Understudy):

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[ref.split()] for ref in reference_list]
hypotheses = [hyp.split() for hyp in hypothesis_list]

bleu_4 = corpus_bleu(
    references, hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1
)

BLEU measures n-gram overlap between the generated text and a reference. It ranges from 0 to 1 (often reported as 0–100). It works well for translation, summarization, and structured generation, but poorly for open-ended generation, where many different answers can be correct.
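The weakness on open-ended generation is easy to demonstrate with a hand-rolled clipped unigram precision (the 1-gram component of BLEU); the legal-style sentences below are made-up examples:

```python
from collections import Counter

def unigram_precision(reference: str, hypothesis: str) -> float:
    """Clipped unigram precision: the 1-gram component of BLEU."""
    ref_counts = Counter(reference.split())
    hyp_tokens = hypothesis.split()
    # Each hypothesis token counts only up to its frequency in the reference
    matched = sum(min(count, ref_counts[tok])
                  for tok, count in Counter(hyp_tokens).items())
    return matched / len(hyp_tokens)

reference = "the contract was terminated by the buyer"
paraphrase = "the purchaser cancelled the agreement"
print(unigram_precision(reference, paraphrase))  # 0.4: only "the" overlaps
```

The paraphrase is semantically equivalent to the reference but scores only 0.4, because n-gram metrics reward surface overlap, not meaning.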

ROUGE (Recall-Oriented Understudy for Gisting Evaluation):

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)

scores = scorer.score(reference, hypothesis)
# scores['rouge1'].fmeasure, scores['rouge2'].fmeasure, scores['rougeL'].fmeasure
  • ROUGE-1: unigram overlap
  • ROUGE-2: bigram overlap
  • ROUGE-L: longest common subsequence (considers order)

ROUGE is better than BLEU for summarization tasks.
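To make the order sensitivity of ROUGE-L concrete, here is a minimal pure-Python sketch of its LCS-based F-measure (for illustration only; use the `rouge_score` package in practice):

```python
def rouge_l_f1(reference: str, hypothesis: str) -> float:
    """ROUGE-L F-measure from the longest common subsequence of tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = LCS length of ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref, 1):
        for j, h in enumerate(hyp, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if r == h else max(dp[i-1][j], dp[i][j-1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hyp), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

Reversing the word order of an otherwise identical sentence collapses the LCS, and the score drops accordingly, which is exactly the property unigram overlap lacks.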

METEOR accounts for stems and synonyms, which makes it better suited than BLEU for morphologically rich languages such as Russian:

from nltk.translate.meteor_score import meteor_score
# requires tokenized input and NLTK's WordNet data: nltk.download('wordnet')
score = meteor_score([reference.split()], hypothesis.split())

Perplexity: model confidence metric

Perplexity measures how "surprised" the model is by test data:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def compute_perplexity(model, tokenizer, texts: list[str]) -> float:
    """Token-weighted perplexity: exp of the average cross-entropy loss."""
    total_loss = 0.0
    total_tokens = 0

    model.eval()
    with torch.no_grad():
        for text in texts:
            encodings = tokenizer(text, return_tensors="pt").to(model.device)
            # With labels == input_ids the model returns the mean
            # cross-entropy loss over the sequence
            outputs = model(**encodings, labels=encodings["input_ids"])
            n_tokens = encodings["input_ids"].shape[1]
            total_loss += outputs.loss.item() * n_tokens
            total_tokens += n_tokens

    avg_loss = total_loss / total_tokens
    return torch.exp(torch.tensor(avg_loss)).item()

# Application
ppl = compute_perplexity(model, tokenizer, test_texts)
print(f"Perplexity: {ppl:.2f}")

A drop in perplexity on the domain test set after fine-tuning means the model "understands" the target domain better. A rise in perplexity on a general benchmark is a sign of catastrophic forgetting.
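The forgetting check described above reduces to a small helper; the 5% tolerance below is an illustrative assumption, not a standard value:

```python
import math

def perplexity_from_losses(token_losses: list[float]) -> float:
    """Perplexity is exp of the mean per-token cross-entropy loss."""
    return math.exp(sum(token_losses) / len(token_losses))

def forgetting_flag(ppl_general_base: float, ppl_general_tuned: float,
                    tolerance: float = 0.05) -> bool:
    """Flag catastrophic forgetting when perplexity on a general
    benchmark rises by more than `tolerance` (relative)."""
    return (ppl_general_tuned - ppl_general_base) / ppl_general_base > tolerance
```

For example, a move from 8.2 to 9.1 on a general benchmark (about +11%) would trip this flag, while a move from 8.2 to 8.3 would not.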

Metrics for classification and extraction tasks

from sklearn.metrics import classification_report, f1_score
import json

def evaluate_classification(model_outputs: list[str], ground_truth: list[str]) -> dict:
    """Evaluate an LLM that classifies via structured JSON output."""
    predictions = []
    for output in model_outputs:
        try:
            # The model is expected to return JSON with a "category" field
            pred = json.loads(output)["category"]
        except (json.JSONDecodeError, KeyError, TypeError):
            pred = "parse_error"
        predictions.append(pred)

    report = classification_report(ground_truth, predictions,
                                   output_dict=True, zero_division=0)
    return {
        "macro_f1": report["macro avg"]["f1-score"],
        "weighted_f1": report["weighted avg"]["f1-score"],
        "accuracy": report["accuracy"],
        "per_class": {k: v for k, v in report.items() if isinstance(v, dict) and k not in ["macro avg", "weighted avg"]}
    }

LLM-as-judge: practical implementation

import json

from openai import OpenAI

JUDGE_PROMPT = """You are a strict expert evaluating the quality of AI assistant responses.

Question: {question}

Assistant answer: {answer}

Reference answer: {reference}

Evaluate the answer by criteria (each 1–5):
1. Factual accuracy
2. Topic coverage completeness
3. Structure
4. Style compliance

Return JSON: {{"accuracy": X, "completeness": X, "structure": X, "style": X, "overall": X, "reasoning": "..."}}"""

def llm_judge(question: str, answer: str, reference: str, client: OpenAI) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, answer=answer, reference=reference)
        }],
        response_format={"type": "json_object"},
        temperature=0.1
    )
    return json.loads(response.choices[0].message.content)
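Per-example judge verdicts are only useful in aggregate. A minimal sketch of summarizing them across an evaluation set (criterion names match the judge prompt above):

```python
import statistics

def aggregate_judge_scores(verdicts: list[dict]) -> dict:
    """Mean and spread per criterion across LLM-judge verdicts,
    each a dict of 1-5 scores as returned by llm_judge()."""
    criteria = ["accuracy", "completeness", "structure", "style", "overall"]
    summary = {}
    for c in criteria:
        values = [v[c] for v in verdicts]
        summary[c] = {
            "mean": statistics.fmean(values),
            "stdev": statistics.pstdev(values),
        }
    return summary
```

Reporting the spread alongside the mean matters: a 4.3 average with high variance usually hides a cluster of outright failures worth inspecting by hand.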

Practical example: comprehensive fine-tuned model evaluation

Base model: Llama 3.1 8B Instruct. Fine-tuned model: QLoRA r=16, 2000 legal document examples.

Metric              | Base model | Fine-tuned | Change
ROUGE-L             | 0.41       | 0.67       | +63%
BLEU-4              | 0.18       | 0.39       | +117%
Perplexity (domain) | 24.3       | 11.8       | -51%
Perplexity (MMLU)   | 8.2        | 9.1        | +11% (forgetting)
LLM-judge overall   | 3.1        | 4.3        | +39%
F1 (NER categories) | 0.61       | 0.89       | +46%

Perplexity on MMLU increased by 11%: moderate catastrophic forgetting, which is acceptable for a narrowly specialized use case.

Post-deployment monitoring

import mlflow

# Automatic logging for each production request. Note: this creates a
# separate MLflow run per request; under high traffic, batch the metrics
# or log to a single long-lived run instead.
def log_inference_quality(prompt, response, user_feedback):
    with mlflow.start_run(run_name="production-monitoring"):
        mlflow.log_metrics({
            "response_length": len(response.split()),
            # crude refusal heuristic; tune the marker list per model
            "refusal_detected": int("cannot" in response.lower()),
            "user_rating": user_feedback.get("rating", -1),
        })
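A concrete alerting rule can be layered on top of such logging. The sketch below tracks a rolling refusal rate; the marker list, window size, and threshold are illustrative assumptions:

```python
from collections import deque

class RefusalRateMonitor:
    """Rolling refusal rate over the last `window` responses; a spike
    after deployment often signals a regression in the fine-tune."""
    MARKERS = ("cannot", "i'm unable", "as an ai")  # illustrative list

    def __init__(self, window: int = 500, threshold: float = 0.15):
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def record(self, response: str) -> bool:
        """Record a response; return True if the alert should fire."""
        refused = any(m in response.lower() for m in self.MARKERS)
        self.recent.append(refused)
        rate = sum(self.recent) / len(self.recent)
        # alert only once the window is full, to avoid noisy early readings
        return len(self.recent) == self.recent.maxlen and rate > self.threshold
```

A call to `record()` per inference keeps the check O(window) with no external dependencies; in production the same rule could feed an MLflow metric or a pager alert.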

Evaluation timeline

  • Evaluation pipeline development: 3–5 days
  • Automatic evaluation (all metrics): several hours
  • LLM-as-judge (1000 examples): 1–2 days (cost ~$5–20)
  • Human evaluation (200 examples): 1 week
  • Total evaluation time per iteration: 1–2 weeks