AI Real-Time Captioning for Hearing Impaired

We design and deploy artificial intelligence systems, from prototypes to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not just in the lab, but in real business.

## Developing Real-Time AI Captioning for the Hearing Impaired

Real-time captioning (live captions) is an assistive technology covered by GOST R 52872-2019 and the international WCAG 2.1 standard (success criterion 1.2.4, Captions (Live)). It is used in live broadcasts, conferences, television, and on educational platforms.

### Real-Time STT Stack

For captions that trail the speech by less than 2 seconds:

```python
import asyncio
import json
import websockets
from faster_whisper import WhisperModel
import numpy as np

class RealTimeCaptioner:
    def __init__(self):
        self.model = WhisperModel(
            "large-v3",
            device="cuda",
            compute_type="float16"
        )
        self.buffer = []
        self.chunk_duration = 3.0  # seconds of audio to buffer
        self.sample_rate = 16000

    async def stream_captions(self, websocket, audio_queue: asyncio.Queue):
        """Стриминг субтитров через WebSocket"""
        while True:
            chunk = await audio_queue.get()
            self.buffer.append(chunk)

            buffer_duration = sum(len(c) for c in self.buffer) / self.sample_rate

            if buffer_duration >= self.chunk_duration:
                audio_data = np.concatenate(self.buffer)
                self.buffer = []

                segments, _ = self.model.transcribe(
                    audio_data,
                    language="ru",
                    vad_filter=True,
                    vad_parameters={"min_silence_duration_ms": 500}
                )

                for segment in segments:
                    caption = {
                        "text": segment.text.strip(),
                        "start": segment.start,
                        "end": segment.end,
                        "confidence": segment.avg_logprob
                    }
                    await websocket.send(json.dumps(caption, ensure_ascii=False))
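
# --- Minimal server entry point (illustrative sketch) ---
# Assumes clients send raw 16-bit little-endian PCM frames as binary
# WebSocket messages; the host, port, and helper names are assumptions.
def pcm16_to_float32(raw: bytes) -> np.ndarray:
    """Convert 16-bit PCM bytes to float32 samples in [-1.0, 1.0]."""
    return np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0

async def handle_client(websocket):
    queue = asyncio.Queue()
    captioner = RealTimeCaptioner()
    consumer = asyncio.create_task(captioner.stream_captions(websocket, queue))
    try:
        async for message in websocket:
            await queue.put(pcm16_to_float32(message))
    finally:
        consumer.cancel()

async def main():
    async with websockets.serve(handle_client, "0.0.0.0", 8765):
        await asyncio.Future()  # run until cancelled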
```

### WebRTC Integration in the Browser

```javascript
// Client side: capture microphone audio and stream it to the server
class LiveCaptionClient {
    constructor(wsUrl) {
        this.ws = new WebSocket(wsUrl);
        this.captionDiv = document.getElementById('captions');
    }

    async startCapturing() {
        const stream = await navigator.mediaDevices.getUserMedia({
            audio: { sampleRate: 16000, channelCount: 1, echoCancellation: true }
        });

        const audioContext = new AudioContext({ sampleRate: 16000 });
        // Note: ScriptProcessorNode is deprecated; AudioWorkletNode is the modern replacement
        const processor = audioContext.createScriptProcessor(4096, 1, 1);

        processor.onaudioprocess = (event) => {
            const pcmData = event.inputBuffer.getChannelData(0);
            const int16Array = new Int16Array(pcmData.length);
            for (let i = 0; i < pcmData.length; i++) {
                int16Array[i] = Math.max(-32768, Math.min(32767, pcmData[i] * 32768));
            }
            if (this.ws.readyState === WebSocket.OPEN) {
                this.ws.send(int16Array.buffer);
            }
        };

        this.ws.onmessage = (event) => {
            const caption = JSON.parse(event.data);
            this.displayCaption(caption.text);
        };

        const source = audioContext.createMediaStreamSource(stream);
        source.connect(processor);
        processor.connect(audioContext.destination);
    }

    displayCaption(text) {
        // Rolling-window display (keep only the last 2-3 lines)
        const line = document.createElement('p');
        line.textContent = text;
        line.className = 'caption-line';
        this.captionDiv.appendChild(line);

        // Drop older lines
        while (this.captionDiv.children.length > 3) {
            this.captionDiv.removeChild(this.captionDiv.firstChild);
        }

        // Auto-scroll
        this.captionDiv.scrollTop = this.captionDiv.scrollHeight;
    }
}
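
// Readability sketch: broadcast captioning convention keeps lines short
// (roughly 32-37 characters). A small helper that wraps caption text into
// short lines before rendering; the 37-character default is an assumption.
function wrapCaption(text, maxLen = 37) {
    const words = text.split(/\s+/).filter(Boolean);
    const lines = [];
    let line = '';
    for (const word of words) {
        if (line && line.length + 1 + word.length > maxLen) {
            lines.push(line);
            line = word;
        } else {
            line = line ? line + ' ' + word : word;
        }
    }
    if (line) lines.push(line);
    return lines;
}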
```

### Display Requirements (WCAG 2.1)

```css
/* Captions for the hard of hearing — WCAG 2.1 success criterion 1.4.3 (Contrast) */
.caption-container {
    background-color: rgba(0, 0, 0, 0.85);
    color: #FFFFFF;
    font-size: 1.5rem;           /* 24px at the default 16px root size */
    line-height: 1.6;
    padding: 12px 20px;
    border-radius: 4px;
    max-width: 80%;
    font-family: Arial, sans-serif;  /* highly legible sans-serif */
}

/* High contrast: 7:1 meets the stricter AAA level (SC 1.4.6); AA requires 4.5:1 */
.caption-line {
    color: #FFFFFF;
    text-shadow: 1px 1px 2px #000;
}
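
/* Positioning sketch: anchor the caption block near the bottom of the
   player or viewport; the exact offsets below are illustrative */
.caption-container {
    position: absolute;
    bottom: 8%;
    left: 50%;
    transform: translateX(-50%);
    text-align: center;
}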
```

### Integration with Zoom/Teams via a Bot

```python
# Push third-party captions to Zoom via its Closed Caption API
import httpx

async def push_zoom_captions(meeting_id: str, caption_text: str, seq: int):
    """Отправляем субтитры в Zoom через Closed Caption API"""
    async with httpx.AsyncClient() as client:
        await client.post(
            f"https://api.zoom.us/v2/meetings/{meeting_id}/live_streaming/captions",
            json={"text": caption_text, "seq": seq, "lang": "ru-RU"},
            headers={"Authorization": f"Bearer {ZOOM_JWT_TOKEN}"}
        )
```

### System Latency

| Component | Typical latency |
|-----------|-----------------|
| Audio buffering | 2–3 s |
| faster-whisper inference | 0.3–0.8 s |
| WebSocket transfer | < 50 ms |
| **Total end-to-end** | **2.5–4 s** |

To bring latency below 1.5 s, use streaming transcription with partial results via AssemblyAI or Deepgram Nova-2, both of which support incremental transcription.

Timeframe: captioning web component, 1–2 weeks; integration with Zoom/Teams or a broadcasting platform, 2–3 weeks.
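Whichever provider is used, partial results need merging logic on the display side: each new partial hypothesis replaces the previous one, and only final results are committed. A provider-agnostic sketch; the `{"text": ..., "is_final": ...}` message shape is an assumption here, not any specific vendor's API:

```python
class PartialCaptionMerger:
    """Keeps committed (final) text separate from the volatile partial tail."""

    def __init__(self):
        self.committed = []   # finalized caption lines
        self.partial = ""     # current in-flight hypothesis

    def feed(self, message: dict) -> str:
        """Consume one STT message and return the text to display."""
        text = message.get("text", "").strip()
        if message.get("is_final"):
            if text:
                self.committed.append(text)
            self.partial = ""
        else:
            self.partial = text  # replace, don't append: partials are revisions
        return self.display()

    def display(self, max_lines: int = 3) -> str:
        lines = self.committed[-max_lines:]
        if self.partial:
            lines = (lines + [self.partial])[-max_lines:]
        return "\n".join(lines)
```

Because partials overwrite rather than append, the on-screen text never duplicates words while a sentence is still being revised.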