Speech-to-Speech with Speaker Voice Preservation Implementation

We design and deploy artificial intelligence systems, from prototype to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work in real business settings, not just in the lab.

Voice Preservation Speech-to-Speech (Voice Preservation S2S) translates speech into another language while preserving the timbre, accent, and voice characteristics of the original speaker. This is fundamentally more complex than standard S2S, which uses a fixed TTS voice.

### Voice Preservation Pipeline Components

```
Source Audio
     ↓
[1] Speaker Encoder → Speaker Embedding (d-vector)
     ↓
[2] STT → Transcript (source language)
     ↓
[3] Machine Translation → Transcript (target language)
     ↓
[4] TTS with Voice Conversion → Output Audio
     (uses the speaker embedding from step 1)
```

### Extract speaker embedding

```python
from speechbrain.pretrained import EncoderClassifier  # speechbrain.inference in SpeechBrain >= 1.0
import torchaudio
import torch

encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="tmp_encoder"
)

def extract_speaker_embedding(audio_path: str) -> torch.Tensor:
    signal, sr = torchaudio.load(audio_path)
    if sr != 16000:
        signal = torchaudio.functional.resample(signal, sr, 16000)
    embedding = encoder.encode_batch(signal)
    return embedding.squeeze()  # 192-dimensional d-vector
```

### Zero-shot TTS with conditioning on the speaker embedding

XTTS v2 takes reference audio and conditions synthesis on it:

```python
from TTS.api import TTS
import numpy as np

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cuda")

def voice_preserving_translate(
    source_audio: str,
    target_language: str,
    target_text: str
) -> np.ndarray:
    # XTTS uses source_audio to extract the speaker's voice characteristics
    wav = tts.tts(
        text=target_text,
        speaker_wav=source_audio,  # original audio as the voice reference
        language=target_language
    )
    return np.array(wav)
```

### SeamlessM4T: Meta's end-to-end approach

SeamlessM4T supports speech-to-speech translation (S2ST) with partial prosody preservation:

```python
from transformers import SeamlessM4Tv2ForSpeechToSpeech, AutoProcessor
import torchaudio
import torch

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained(
    "facebook/seamless-m4t-v2-large"
).to("cuda")

audio, sr = torchaudio.load("source.wav")
if sr != 16000:  # the model expects 16 kHz input
    audio = torchaudio.functional.resample(audio, sr, 16000)
inputs = processor(
    audios=audio, sampling_rate=16000, src_lang="rus", return_tensors="pt"
).to("cuda")

with torch.no_grad():
    output = model.generate(**inputs, tgt_lang="eng")

translated_audio = output[0].cpu().numpy().squeeze()
```

Supports 100+ languages, with 1–3 seconds of latency on long fragments.

### Voice Preservation Quality

| Approach          | SECS      | Perceptual Score |
|-------------------|-----------|------------------|
| SeamlessM4T       | 0.60–0.70 | 3.2–3.5          |
| XTTS v2 zero-shot | 0.78–0.88 | 3.8–4.2          |
| Fine-tuned XTTS   | 0.88–0.93 | 4.2–4.5          |

Timelines: a pipeline built on XTTS takes about 2 weeks; SeamlessM4T with fine-tuning takes 4–6 weeks.
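The SECS column measures Speaker Embedding Cosine Similarity: the cosine similarity between speaker embeddings (such as the ECAPA d-vectors extracted earlier) of the reference audio and the synthesized audio. A minimal sketch; the `secs` helper name is ours:

```python
import numpy as np

def secs(ref_emb: np.ndarray, syn_emb: np.ndarray) -> float:
    """Speaker Embedding Cosine Similarity between two speaker vectors.

    Values close to 1.0 mean the synthesized voice closely matches
    the reference speaker; values near 0 mean unrelated voices.
    """
    return float(
        np.dot(ref_emb, syn_emb)
        / (np.linalg.norm(ref_emb) * np.linalg.norm(syn_emb))
    )
```

In practice, both embeddings would come from the same speaker encoder (e.g. `extract_speaker_embedding` above applied to the source audio and to the translated output).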
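Putting it together, the four pipeline stages from the diagram can be glued behind a single function. This is a hypothetical sketch, with the STT, MT, and TTS engines injected as callables so any of the components above can be dropped in; all names here are illustrative:

```python
from typing import Callable
import numpy as np

def translate_preserving_voice(
    source_audio: np.ndarray,
    stt: Callable[[np.ndarray], str],                        # [2] speech -> source-language text
    translate: Callable[[str], str],                         # [3] source text -> target text
    tts_with_ref: Callable[[str, np.ndarray], np.ndarray],   # [4] text + reference -> speech
) -> np.ndarray:
    """Glue for steps [2]-[4] of the pipeline.

    The reference audio also carries the speaker identity (step [1]):
    zero-shot engines such as XTTS extract the embedding from it internally.
    """
    transcript = stt(source_audio)                   # [2] STT
    target_text = translate(transcript)              # [3] machine translation
    return tts_with_ref(target_text, source_audio)   # [4] voice-conditioned TTS
```

With this shape, a Whisper-based STT, any MT model, and the XTTS call above can each be swapped independently.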