Voice Preservation Speech-to-Speech (Voice Preservation S2S) translates speech into another language while preserving the timbre, accent, and voice characteristics of the original speaker. This is fundamentally more complex than standard S2S, which uses a fixed TTS voice.

### Voice Preservation Pipeline Components
```
Source Audio
      ↓
[1] Speaker Encoder → Speaker Embedding (d-vector)
      ↓
[2] STT → Transcript (source language)
      ↓
[3] Machine Translation → Transcript (target language)
      ↓
[4] TTS with Voice Conversion → Output Audio
    (uses the speaker embedding from step 1)
```
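Wired together, the four stages reduce to a thin orchestrator. A minimal sketch, assuming each stage is injected as a callable (all names here are illustrative, not a real API):

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class VoicePreservingS2S:
    """Composes the four pipeline stages; each stage is an injected callable."""

    embed_speaker: Callable[[str], Any]           # [1] audio path -> speaker embedding
    transcribe: Callable[[str], str]              # [2] STT: audio path -> source text
    translate: Callable[[str, str], str]          # [3] MT: (text, target_lang) -> text
    synthesize: Callable[[str, Any, str], bytes]  # [4] TTS conditioned on the embedding

    def run(self, audio_path: str, target_lang: str) -> bytes:
        embedding = self.embed_speaker(audio_path)               # step 1
        source_text = self.transcribe(audio_path)                # step 2
        target_text = self.translate(source_text, target_lang)   # step 3
        return self.synthesize(target_text, embedding, target_lang)  # step 4
```

Steps 1 and 2 both read the source audio and are independent, so a production version can run them concurrently.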
### Extract speaker embedding

```python
from speechbrain.pretrained import EncoderClassifier
import torchaudio
import torch
encoder = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="tmp_encoder",
)
def extract_speaker_embedding(audio_path: str) -> torch.Tensor:
    signal, sr = torchaudio.load(audio_path)
    if signal.size(0) > 1:  # downmix stereo to mono
        signal = signal.mean(dim=0, keepdim=True)
    if sr != 16000:
        signal = torchaudio.functional.resample(signal, sr, 16000)
    embedding = encoder.encode_batch(signal)
    return embedding.squeeze()  # (192,) vector
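
# The SECS metric used to evaluate voice preservation is just the cosine
# similarity between two such embeddings; a small helper (illustrative,
# not part of SpeechBrain):
def speaker_similarity(emb_a: torch.Tensor, emb_b: torch.Tensor) -> float:
    """Cosine similarity between two speaker embeddings (the SECS metric)."""
    return torch.nn.functional.cosine_similarity(emb_a, emb_b, dim=0).item()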
```

### Zero-shot TTS with conditioning on embedding

XTTS v2 takes a reference audio clip and conditions synthesis on it directly:

```python
import numpy as np
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to("cuda")
async def voice_preserving_translate(
    source_audio: str,
    target_language: str,
    target_text: str,
) -> np.ndarray:
    # XTTS extracts the voice characteristics from source_audio itself,
    # so no separate speaker-embedding step is needed here
    wav = tts.tts(
        text=target_text,
        speaker_wav=source_audio,  # the source audio as the voice reference
        language=target_language,
    )
    return np.array(wav)
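
# XTTS caps the text length it accepts per call, so long translations are
# usually split into sentence-sized chunks before synthesis and the resulting
# waveforms concatenated (helper is illustrative, not part of the TTS API):
import re

def split_for_tts(text: str, max_chars: int = 250) -> list[str]:
    """Greedily pack sentences into chunks no longer than max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks: list[str] = []
    current = ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks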
```

### SeamlessM4T: Meta's end-to-end approach

SeamlessM4T supports S2ST with partial prosody preservation:

```python
import torch
import torchaudio
from transformers import SeamlessM4Tv2ForSpeechToSpeech, AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2ForSpeechToSpeech.from_pretrained(
    "facebook/seamless-m4t-v2-large"
).to("cuda")
audio, sr = torchaudio.load("source.wav")
if sr != 16000:  # the model expects 16 kHz input
    audio = torchaudio.functional.resample(audio, sr, 16000)
inputs = processor(
    audios=audio, sampling_rate=16000, src_lang="rus", return_tensors="pt"
).to("cuda")
with torch.no_grad():
    output = model.generate(**inputs, tgt_lang="eng")

translated_audio = output[0].cpu().numpy().squeeze()
```

It supports 100+ languages, with roughly 1–3 seconds of latency on long fragments.

### Voice Preservation Quality

| Approach | SECS | Perceptual Score |
|----------|------|------------------|
| SeamlessM4T | 0.60–0.70 | 3.2–3.5 |
| XTTS v2 zero-shot | 0.78–0.88 | 3.8–4.2 |
| Fine-tuned XTTS | 0.88–0.93 | 4.2–4.5 |

SECS (speaker embedding cosine similarity) measures how close the output voice is to the source speaker; higher is better.

Timelines: a pipeline built on XTTS takes about 2 weeks; SeamlessM4T with fine-tuning takes 4–6 weeks.
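The table's tiers can double as acceptance thresholds when evaluating pipeline output automatically. A minimal sketch, using the XTTS v2 zero-shot tier as a hypothetical lower bound (the helper and its defaults are illustrative):

```python
def meets_quality_bar(
    secs: float,
    perceptual: float,
    min_secs: float = 0.78,        # lower edge of the XTTS v2 zero-shot SECS range
    min_perceptual: float = 3.8,   # lower edge of its perceptual-score range
) -> bool:
    """True when a sample clears the chosen quality tier on both metrics."""
    return secs >= min_secs and perceptual >= min_perceptual
```

Such a gate is useful in regression tests: if a model or checkpoint change drops either metric below the tier you shipped with, the build fails.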







