AI Waybill and Act Data Extraction

We design and deploy artificial intelligence systems, from prototype to production-ready solution. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work in real business settings, not just in the lab.

AI Data Extraction from Waybills and Acts

Transport waybills (TTN, CMR), goods delivery notes (TORG-12), and completion reports are documents with a rigid structure but high variability in how they are filled in: handwritten fields, stamps over text, low-quality scans, and mixed entry (some fields typewritten, others handwritten).

NER Task on LayoutLM for Waybills

A waybill is a tabular document: a header with party details, a goods table, and signatures. LayoutLMv3 handles all of this via token classification that takes text coordinates into account.
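
LayoutLM-family models expect bounding boxes normalized to a 0–1000 grid regardless of page size; a minimal helper (hypothetical, matching the convention used in the dataset code below) for converting pixel coordinates:

```python
def normalize_box(box: tuple, width: int, height: int) -> list:
    """Scale a pixel bbox (x0, y0, x1, y1) to LayoutLM's 0..1000 grid."""
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

# An A4 scan at 300 DPI is 2480x3508 px
print(normalize_box((124, 60, 496, 90), width=2480, height=3508))
# -> [50, 17, 200, 25]
```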

from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification
from datasets import Dataset
import torch

# Complete set of labels for TORG-12 / TTN
WAYBILL_LABELS = [
    'O',
    'B-DOC_NUMBER', 'I-DOC_NUMBER',
    'B-DOC_DATE',   'I-DOC_DATE',
    'B-SENDER_NAME',    'I-SENDER_NAME',
    'B-SENDER_INN',     'I-SENDER_INN',
    'B-SENDER_ADDRESS', 'I-SENDER_ADDRESS',
    'B-RECEIVER_NAME',    'I-RECEIVER_NAME',
    'B-RECEIVER_INN',     'I-RECEIVER_INN',
    'B-RECEIVER_ADDRESS', 'I-RECEIVER_ADDRESS',
    'B-CARRIER_NAME',   'I-CARRIER_NAME',
    'B-VEHICLE_REG',    'I-VEHICLE_REG',     # vehicle registration number
    'B-ITEM_NAME',      'I-ITEM_NAME',
    'B-ITEM_QTY',       'I-ITEM_QTY',
    'B-ITEM_UNIT',      'I-ITEM_UNIT',
    'B-ITEM_PRICE',     'I-ITEM_PRICE',
    'B-ITEM_TOTAL',     'I-ITEM_TOTAL',
    'B-TOTAL_QTY',      'I-TOTAL_QTY',
    'B-TOTAL_AMOUNT',   'I-TOTAL_AMOUNT',
    'B-DRIVER_NAME',    'I-DRIVER_NAME',
]

def prepare_waybill_dataset(
    image_paths: list,
    annotations: list,    # list of dicts with keys: words, boxes, labels
    processor: LayoutLMv3Processor
) -> Dataset:
    """
    Preparation of dataset for fine-tuning.
    annotations[i]['boxes']: normalized bbox [0..1000] for LayoutLM.
    """
    label2id = {l: i for i, l in enumerate(WAYBILL_LABELS)}

    features_list = []
    for img_path, ann in zip(image_paths, annotations):
        from PIL import Image as PILImage
        image = PILImage.open(img_path).convert('RGB')

        encoding = processor(
            image,
            text=ann['words'],
            boxes=ann['boxes'],
            word_labels=[label2id[l] for l in ann['labels']],
            truncation=True,
            padding='max_length',
            max_length=512,
            return_tensors='pt'
        )
        features_list.append({
            k: v.squeeze().tolist() for k, v in encoding.items()
        })

    return Dataset.from_list(features_list)
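
Token classification yields one label per word; turning those back into document fields takes a small BIO decoder. A sketch of this step (the helper name is assumed, not part of the original pipeline):

```python
def bio_decode(words: list, labels: list) -> list:
    """Group per-word BIO labels into (field, text) entities."""
    entities, field, buf = [], None, []
    for word, label in zip(words, labels):
        if label.startswith('B-'):
            if field:
                entities.append((field, ' '.join(buf)))
            field, buf = label[2:], [word]
        elif label.startswith('I-') and field == label[2:]:
            buf.append(word)
        else:  # 'O', or an I- tag that does not continue the open entity
            if field:
                entities.append((field, ' '.join(buf)))
            field, buf = None, []
    if field:
        entities.append((field, ' '.join(buf)))
    return entities

words  = ['TTN', 'No.', '482', 'LLC', 'Vector', '01.03.2024']
labels = ['O', 'O', 'B-DOC_NUMBER', 'B-SENDER_NAME', 'I-SENDER_NAME', 'B-DOC_DATE']
print(bio_decode(words, labels))
# -> [('DOC_NUMBER', '482'), ('SENDER_NAME', 'LLC Vector'), ('DOC_DATE', '01.03.2024')]
```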

Handling Handwritten Fields

Waybills often contain handwritten signatures, dates, and quantities. OCR tuned for printed text (PaddleOCR, or TrOCR's printed checkpoints) makes errors on handwritten fields, so you need a handwriting detector plus a separate handwriting OCR model:

from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image
import cv2
import numpy as np
import torch

class HandwritingOCR:
    def __init__(self):
        self.processor = TrOCRProcessor.from_pretrained(
            'microsoft/trocr-base-handwritten'
        )
        self.model = VisionEncoderDecoderModel.from_pretrained(
            'microsoft/trocr-base-handwritten'
        ).eval().cuda()

    @torch.no_grad()
    def recognize(self, image: Image.Image) -> str:
        pixel_values = self.processor(
            image, return_tensors='pt'
        ).pixel_values.to('cuda')

        generated_ids = self.model.generate(
            pixel_values,
            max_new_tokens=64,
            num_beams=4,
            early_stopping=True
        )
        return self.processor.batch_decode(
            generated_ids, skip_special_tokens=True
        )[0]

class HybridWaybillOCR:
    """
    Determine text type (print / handwriting) → choose OCR.
    Handwriting indicators: large height variance, no serif patterns.
    """
    def __init__(self):
        self.handwriting_ocr = HandwritingOCR()
        # PaddleOCR for printed text
        from paddleocr import PaddleOCR
        self.printed_ocr = PaddleOCR(use_angle_cls=True, lang='en')

    def is_handwritten(self, text_region: Image.Image) -> bool:
        """Simple heuristic: variance of ink density across columns"""
        import numpy as np
        img_array = np.array(text_region.convert('L'))
        # Otsu binarization: ink -> 0, background -> 255
        _, binary = cv2.threshold(img_array, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # High variance of per-column ink density indicates handwriting
        col_density = (binary == 0).mean(axis=0)
        return float(col_density.std()) > 0.15   # empirical threshold

    def recognize_region(self, image: Image.Image) -> str:
        if self.is_handwritten(image):
            return self.handwriting_ocr.recognize(image)
        else:
            result = self.printed_ocr.ocr(np.array(image))
            return ' '.join([line[1][0] for line in result[0] or []])
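
The column-density heuristic can be sanity-checked without any model downloads. A synthetic sketch of the same statistic that is_handwritten computes (shapes and the 0.15 threshold are illustrative):

```python
import numpy as np

def column_density_std(binary: np.ndarray) -> float:
    """Std of per-column ink density; convention: 0 = ink, 255 = background."""
    return float((binary == 0).mean(axis=0).std())

# "Printed" proxy: a solid horizontal bar -- every column has identical
# ink density, so the std is exactly 0
printed = np.full((32, 64), 255, dtype=np.uint8)
printed[8:24, :] = 0

# "Handwritten" proxy: stroke height varies column to column
rng = np.random.default_rng(0)
handwritten = np.full((32, 64), 255, dtype=np.uint8)
for col, h in enumerate(rng.integers(2, 30, size=64)):
    handwritten[:h, col] = 0

print(column_density_std(printed))       # 0.0 -> classified as printed
print(column_density_std(handwritten))   # well above the 0.15 threshold
```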

Validating Party Details (INN)

import re

def validate_russian_inn(inn: str) -> bool:
    """Check the checksum digits of an INN (Russian taxpayer ID, 10 or 12 digits)"""
    if not re.fullmatch(r'\d{10}|\d{12}', inn):
        return False
    digits = [int(d) for d in inn]

    def checksum(ds, weights):
        return sum(d * w for d, w in zip(ds, weights)) % 11 % 10

    if len(inn) == 10:
        return digits[9] == checksum(digits[:9], [2, 4, 10, 3, 5, 9, 4, 6, 8])
    c1 = checksum(digits[:10], [7, 2, 4, 10, 3, 5, 9, 4, 6, 8])
    c2 = checksum(digits[:11], [3, 7, 2, 4, 10, 3, 5, 9, 4, 6, 8])
    return digits[10] == c1 and digits[11] == c2
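
Checksums catch OCR errors in identifiers; arithmetic cross-checks do the same for the goods table. A sketch of a per-row check (field handling and the 1-kopeck tolerance are assumptions):

```python
from decimal import Decimal, InvalidOperation

def check_line_item(qty: str, price: str, total: str) -> bool:
    """Verify qty * price == total within a 1-kopeck rounding tolerance."""
    try:
        q = Decimal(qty.replace(',', '.'))
        p = Decimal(price.replace(',', '.'))
        t = Decimal(total.replace(',', '.'))
    except InvalidOperation:
        return False  # a field did not parse as a number at all
    return abs(q * p - t) <= Decimal('0.01')

print(check_line_item('10', '125,50', '1255,00'))   # True: row is consistent
print(check_line_item('70', '125,50', '1255,00'))   # False: qty likely misread
```

A failed cross-check does not say which field is wrong, but it reliably flags the row for manual review.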

Timeline

Task                                                               Timeline
Field extraction for TORG-12 / TTN (standard formats)              2–3 weeks
Fine-tuning LayoutLMv3 on corporate waybills                       5–7 weeks
Complete system with handwriting, validation, and 1C integration   8–14 weeks