Implementing Live Stream Viewer Chat in Mobile Applications
Live stream chat is not just "send a message". At 10,000 simultaneous viewers, the standard approach of a Firebase Firestore onSnapshot listener per client creates 10,000 open listeners, and your Firebase bill shoots into space. Even at 1,000, the UX breaks down if messages arrive faster than a user can read them.
High-Load Architecture
For live chat, the key decision is fan-out on the server, not the client. The client subscribes to a single WebSocket/SSE channel and receives an aggregated stream; it doesn't open a listener per message.
A stack that works at 5,000+ simultaneous viewers:
- Transport: WebSocket (Socket.io or raw ws) or Server-Sent Events (SSE)
- Buffer: Redis Pub/Sub for message distribution between server instances
- Throttling: on the server, at most 50–100 messages/sec per channel; the excess is aggregated or dropped
- Client render: a virtualized list with limited depth (the last 100–200 messages)
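The server-side throttling step can be sketched as a sliding-window counter per channel. This is an illustrative sketch, not a production implementation; the `ChannelThrottle` name and the injected millisecond clock are assumptions made for testability:

```typescript
// Sliding-window throttle: allows at most maxPerSec messages per second
// for one channel; the caller decides whether to drop or aggregate the excess.
class ChannelThrottle {
  private timestamps: number[] = [];

  constructor(private maxPerSec: number) {}

  // `now` is a millisecond timestamp, injected so the logic is deterministic
  allow(now: number): boolean {
    // Evict entries older than the 1-second window
    this.timestamps = this.timestamps.filter(t => now - t < 1000);
    if (this.timestamps.length >= this.maxPerSec) return false;
    this.timestamps.push(now);
    return true;
  }
}

// Example: cap a channel at 3 messages/sec
const throttle = new ChannelThrottle(3);
const results = [0, 100, 200, 300, 1200].map(t => throttle.allow(t));
console.log(results); // [true, true, true, false, true]
```

The fourth message (at t=300 ms) is rejected because three messages already landed inside the last second; by t=1200 ms the window has slid past them and sends are allowed again.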
React Native: Virtualized Chat
FlatList with inverted is the standard chat solution. But on a fast stream (10+ messages/sec), a setState per message kills FPS. Batching is mandatory:
```tsx
const BATCH_INTERVAL_MS = 250;

const [messages, setMessages] = useState<Message[]>([]);
const pendingRef = useRef<Message[]>([]);

useEffect(() => {
  const socket = new WebSocket(CHAT_WS_URL);
  socket.onmessage = (event) => {
    const msg: Message = JSON.parse(event.data);
    // Accumulate in a ref: writing here does not trigger a rerender
    pendingRef.current.push(msg);
  };

  const flushInterval = setInterval(() => {
    if (pendingRef.current.length === 0) return;
    const batch = pendingRef.current.splice(0);
    setMessages(prev => {
      // Newest first, matching an inverted FlatList
      const updated = [...batch.reverse(), ...prev];
      return updated.slice(0, 200); // keep max 200 messages in memory
    });
  }, BATCH_INTERVAL_MS);

  return () => {
    clearInterval(flushInterval);
    socket.close();
  };
}, []);
```
pendingRef is not state, so writing to it doesn't trigger a rerender. Every 250 ms the accumulated batch is flushed to state in a single setState. On an iPhone SE (2020) this holds 60 FPS on a 30 messages/sec stream.
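The flush-and-trim step can be pulled out into a pure helper, which also makes it unit-testable outside React. `flushBatch` is a hypothetical name, not part of the snippet above:

```typescript
interface Message { id: string; text: string; }

// Prepend a batch (newest first, matching an inverted FlatList)
// and trim the result to maxKept messages.
function flushBatch(pending: Message[], current: Message[], maxKept = 200): Message[] {
  const updated = [...pending.slice().reverse(), ...current];
  return updated.slice(0, maxKept);
}

const current = [{ id: "1", text: "a" }];
const pending = [{ id: "2", text: "b" }, { id: "3", text: "c" }];
console.log(flushBatch(pending, current, 2).map(m => m.id)); // ["3", "2"]
```

Note that `slice()` before `reverse()` keeps the helper side-effect-free, unlike the in-place `batch.reverse()` in the effect above.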
Moderation and Anti-spam
Live chat without moderation is a spam magnet. The minimum set:
- Rate limiting on the client: the send button is disabled for 2–3 seconds after each send. This doesn't replace server-side rate limiting, but it cuts down accidental spam.
- Server-side filtering: a bad-words library or custom regex before the Redis write.
- Slow mode: a configurable interval between messages per user (30–60 seconds for unverified accounts).
- Mute and ban: a soft ban via a Redis SET with TTL; don't store it in the DB, the load is too high.
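The client-side rate limit and slow mode above boil down to the same check: has enough time passed since the user's last message? A minimal sketch, with `canSendAt` as an assumed helper name:

```typescript
// Returns true if the user may send at time `nowMs`, given the timestamp
// of their last message and the per-user interval (slow mode).
function canSendAt(lastSentAtMs: number | null, nowMs: number, intervalSec: number): boolean {
  if (lastSentAtMs === null) return true; // never sent before
  return nowMs - lastSentAtMs >= intervalSec * 1000;
}

// Unverified account in 30-second slow mode:
console.log(canSendAt(null, 0, 30));   // true: first message
console.log(canSendAt(0, 10_000, 30)); // false: only 10 s elapsed
console.log(canSendAt(0, 31_000, 30)); // true: interval passed
```

The same function drives both the disabled send button (intervalSec of 2–3) and slow mode (intervalSec of 30–60); only the interval differs.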
Donations and Supers: Highlighted Messages
A superchat (paid highlighted message) is a separate channel with different priority. Highlighted messages don't participate in throttling: they are displayed in a separate component above the main chat and stay visible for N seconds regardless of stream speed.
In React Native: an absolutely positioned View with an appearance/disappearance animation (Animated.timing or react-native-reanimated). The superchat queue is separate state, independent of the main list.
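That independent queue can be modeled as a list of entries with expiry timestamps, filtered at render time. A sketch under assumed names (`SuperchatQueue`, `visibleAt`), not a definitive implementation:

```typescript
interface Superchat { id: string; amount: number; expiresAtMs: number; }

class SuperchatQueue {
  private items: Superchat[] = [];

  // Pin a paid message for ttlSec seconds, regardless of stream speed
  add(id: string, amount: number, nowMs: number, ttlSec: number): void {
    this.items.push({ id, amount, expiresAtMs: nowMs + ttlSec * 1000 });
  }

  // Entries still on screen at time nowMs, highest amount first
  visibleAt(nowMs: number): Superchat[] {
    return this.items
      .filter(s => s.expiresAtMs > nowMs)
      .sort((a, b) => b.amount - a.amount);
  }
}

const q = new SuperchatQueue();
q.add("s1", 5, 0, 30);  // visible until t = 30 s
q.add("s2", 20, 0, 60); // visible until t = 60 s
console.log(q.visibleAt(45_000).map(s => s.id)); // ["s2"]: s1 already expired
```

The component above the main chat just renders `visibleAt(Date.now())`; the main message list never touches this state.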
Reconnect and Missed Messages
After a 10-second connection break, the user has missed N messages. Two approaches:
- Ignore the gap: on reconnect, continue from now. Simpler, but the user sees a "hole" in the chat.
- Backfill: on reconnect, request the messages missed during the absence via a REST endpoint such as /chat/history?after=<timestamp>&limit=50. Cap the limit; don't load 5,000 missed messages.
For live chat, approach 1 is acceptable. Users understand they missed a part; that's normal for live.
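If you do implement backfill, the history fetched over REST can overlap with messages already received live after the reconnect, so merge with deduplication by id. A sketch with an assumed `mergeBackfill` helper:

```typescript
interface ChatMessage { id: string; ts: number; text: string; }

// Merge backfilled history into the live buffer, dropping duplicates by id
// and keeping newest-first order for an inverted list.
function mergeBackfill(live: ChatMessage[], history: ChatMessage[]): ChatMessage[] {
  const seen = new Set(live.map(m => m.id));
  const merged = [...live, ...history.filter(m => !seen.has(m.id))];
  return merged.sort((a, b) => b.ts - a.ts);
}

const live = [{ id: "c", ts: 3, text: "after reconnect" }];
const history = [
  { id: "b", ts: 2, text: "missed" },
  { id: "c", ts: 3, text: "after reconnect" }, // overlap: already arrived live
];
console.log(mergeBackfill(live, history).map(m => m.id)); // ["c", "b"]
```

Sorting by server timestamp rather than arrival order avoids messages jumping around when the backfill response lands after live traffic has resumed.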
Estimate
A WebSocket chat with batching, rate limiting, and basic moderation in React Native: 3–5 weeks. With a donation system and message history: 5–8 weeks.