Synthetic Data Platform Development

We design and deploy artificial intelligence systems, from prototype to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not just in the lab, but in real business settings.

Development of a synthetic data platform

A synthetic data platform is a system for generating artificial, yet statistically realistic, data that can be used to train AI models, test systems, and share data without privacy risks. It is particularly relevant in healthcare, finance, and telecom, where real data is strictly regulated.

Platform architecture

┌─────────────────────────────────────────────────────────┐
│                  Data Ingestion Layer                   │
│  [Real Data] → [Privacy Scan] → [Statistical Profiling] │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                    Generation Engine                    │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐   │
│  │ Tabular (GAN)│  │  Text (LLM)  │  │ Image (Diff) │   │
│  └──────────────┘  └──────────────┘  └──────────────┘   │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                   Quality Validation                    │
│  [Statistical Fidelity] [Privacy Audit] [ML Utility]    │
└─────────────────────────────────────────────────────────┘
                            ↓
┌─────────────────────────────────────────────────────────┐
│                     Delivery Layer                      │
│  [API] → [Data Catalog] → [Access Control] → [Audit]    │
└─────────────────────────────────────────────────────────┘
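The four layers above form a linear pipeline: each stage receives a dataset, does its work, and passes it on. A minimal orchestration sketch; the `Dataset` container and stage names are hypothetical placeholders for the real SDV/LLM-backed components:

```python
from dataclasses import dataclass, field

# Hypothetical container; real stages would wrap the generation
# backends and the validators described later in this document.
@dataclass
class Dataset:
    rows: list
    audit_log: list = field(default_factory=list)

def stage(name: str):
    """Build a placeholder pipeline stage that records its own execution."""
    def _run(ds: Dataset) -> Dataset:
        ds.audit_log.append(name)
        return ds
    return _run

PIPELINE = [stage(s) for s in
            ('privacy_scan', 'profiling', 'generate', 'validate', 'deliver')]

def run(ds: Dataset) -> Dataset:
    for step in PIPELINE:
        ds = step(ds)
    return ds

result = run(Dataset(rows=[{'age': 42}]))
print(result.audit_log)
# → ['privacy_scan', 'profiling', 'generate', 'validate', 'deliver']
```

The audit log doubles as lineage metadata: every delivered dataset carries a record of which stages it passed through.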

Generating tabular data

CTGAN (Conditional Tabular GAN) is one of the most mature methods for tabular synthesis:

from sdv.single_table import CTGANSynthesizer
from sdv.metadata import SingleTableMetadata
import pandas as pd

# Metadata describing the real table
metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real_df)

# Additional column annotations
metadata.update_column('patient_id', sdtype='id')
metadata.update_column('age', sdtype='numerical', computer_representation='Int64')
metadata.update_column('diagnosis', sdtype='categorical')
metadata.update_column('admission_date', sdtype='datetime')

# Train the synthesizer
synthesizer = CTGANSynthesizer(
    metadata,
    epochs=500,
    batch_size=500,
    generator_dim=(256, 256),
    discriminator_dim=(256, 256),
    verbose=True
)
synthesizer.fit(real_df)

# Generate 100,000 synthetic records
synthetic_df = synthesizer.sample(num_rows=100_000)

Gaussian Copula is faster to train than CTGAN and often preserves pairwise correlations better, at the cost of modeling complex non-linear dependencies less faithfully:

from sdv.single_table import GaussianCopulaSynthesizer

synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(real_df)
synthetic_df = synthesizer.sample(num_rows=100_000)

Generating linked tables

from sdv.multi_table import HMASynthesizer
from sdv.metadata import MultiTableMetadata

metadata = MultiTableMetadata()
metadata.detect_from_dataframes({
    'patients': patients_df,
    'diagnoses': diagnoses_df,
    'prescriptions': prescriptions_df
})

# Relationships between the tables (add manually if not auto-detected)
metadata.add_relationship(
    parent_table_name='patients',
    parent_primary_key='patient_id',
    child_table_name='diagnoses',
    child_foreign_key='patient_id'
)

synthesizer = HMASynthesizer(metadata)
# fit() must receive every table declared in the metadata
synthesizer.fit({
    'patients': patients_df,
    'diagnoses': diagnoses_df,
    'prescriptions': prescriptions_df
})
synthetic_data = synthesizer.sample(scale=1.5)  # 1.5x the original row counts
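One property worth spot-checking in the generated output is referential integrity: every foreign key in a synthetic child table should resolve to a row in the synthetic parent table. A minimal pandas check on hypothetical output tables:

```python
import pandas as pd

# Hypothetical generated tables standing in for synthetic_data['patients']
# and synthetic_data['diagnoses']
synthetic_patients = pd.DataFrame({'patient_id': [1, 2, 3]})
synthetic_diagnoses = pd.DataFrame({
    'patient_id': [1, 1, 3],
    'code': ['E11', 'I10', 'J45']
})

# Child rows whose foreign key points at no synthetic parent
orphan_mask = ~synthetic_diagnoses['patient_id'].isin(
    synthetic_patients['patient_id']
)
print(f"orphan diagnoses: {int(orphan_mask.sum())}")
# → orphan diagnoses: 0
```

A hierarchical synthesizer should produce zero orphans; a non-zero count indicates the relationship metadata was incomplete.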

Privacy: Membership Inference Attack Protection

from sdmetrics.single_table import NewRowSynthesis

# Test: do synthetic rows reproduce specific real records?
# (privacy audit)
new_row_score = NewRowSynthesis.compute(
    real_data=real_df,
    synthetic_data=synthetic_df,
    metadata=metadata,
    numerical_match_tolerance=0.01
)
# Target: score > 0.9 (synthetic data does not reproduce real records)
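The intuition behind this check can be illustrated without sdmetrics: measure what fraction of synthetic rows are exact copies of real rows. The toy tables below are hypothetical:

```python
import pandas as pd

# Toy real and synthetic tables (values invented for illustration)
real = pd.DataFrame({'age': [34, 51, 29], 'diagnosis': ['A', 'B', 'A']})
synthetic = pd.DataFrame({'age': [33, 51, 60], 'diagnosis': ['A', 'B', 'C']})

# Fraction of synthetic rows that exactly duplicate some real row —
# a crude proxy for the "new row" idea behind NewRowSynthesis
copies = synthetic.merge(real, how='inner').drop_duplicates()
copy_rate = len(copies) / len(synthetic)
print(f"exact-copy rate: {copy_rate:.2f}")  # lower is better
# → exact-copy rate: 0.33
```

The real metric additionally applies a numerical tolerance, so near-copies of real records are caught as well, not only byte-identical rows.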

Assessing the quality of synthetic data

from sdmetrics.reports.single_table import QualityReport

report = QualityReport()
report.generate(real_df, synthetic_df, metadata.to_dict())

# Evaluation components:
# Column Shapes: how well each column's distribution is reproduced
# Column Pair Trends: how well pairwise correlations are reproduced
# A score of 0.9+ is considered high quality for ML utility
print(report.get_score())  # overall score, 0 to 1
report.get_details(property_name='Column Shapes')
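For numerical columns, the Column Shapes component is essentially the complement of the two-sample Kolmogorov-Smirnov statistic (SDMetrics calls this KSComplement). A standalone illustration on toy columns:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
real_col = rng.normal(50, 10, 5000)    # e.g. an "age"-like real column
good_synth = rng.normal(50, 10, 5000)  # well-matched synthetic column
bad_synth = rng.normal(70, 25, 5000)   # poorly matched synthetic column

# KS statistic: 0 = identical distributions, 1 = completely different,
# so 1 - KS gives a fidelity score in [0, 1] where higher is better
good_score = 1 - ks_2samp(real_col, good_synth).statistic
bad_score = 1 - ks_2samp(real_col, bad_synth).statistic
print(f"matched: {good_score:.3f}, mismatched: {bad_score:.3f}")
```

The well-matched column scores close to 1, while the shifted, wider distribution drops sharply, which is exactly the behavior the aggregate report averages over all columns.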

ML Utility test

# Train-on-Synthetic, Test-on-Real (TSTR);
# assumes DataFrames real_train, synthetic_train, real_val
# with a 'target' label column (illustrative setup)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def auc_on_real_val(train_df):
    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(train_df.drop(columns='target'), train_df['target'])
    scores = model.predict_proba(real_val.drop(columns='target'))[:, 1]
    return roc_auc_score(real_val['target'], scores)

real_auc = auc_on_real_val(real_train)
synthetic_auc = auc_on_real_val(synthetic_train)

# The AUC gap should stay under 2-3 percentage points
print(f"Real data AUC: {real_auc:.4f}")
print(f"Synthetic data AUC: {synthetic_auc:.4f}")
print(f"ML Utility gap: {real_auc - synthetic_auc:.4f}")

Timing and technology stack

A full platform takes 3-4 months and includes: a web UI for self-service generation, an API for programmatic access, integration with the existing Data Catalog, an automatic privacy audit and ML utility report for each generated dataset, and role-based access control.

Tech stack: FastAPI backend, React frontend, PostgreSQL for metadata, S3/MinIO for storing synthetic datasets, and Airflow for orchestrating generation jobs.