Setting Up a Geo-Distributed Cluster for 1C-Bitrix
A single data center in Moscow introduces 80–120 ms of latency for users from Novosibirsk and 150–200 ms from Almaty. Under high load from multiple regions, this compounds: a page with 30+ API requests takes 3–4 seconds to load instead of 1. A geo-distributed cluster solves this by routing users to the nearest node — but for Bitrix this is non-trivial, as it requires managing distributed state.
Geo-Cluster Architecture for Bitrix
A typical two-region setup (Moscow + one additional region):
```
                 [GeoDNS / Anycast BGP]
                    /            \
       [Region-MSK]               [Region-EKB]
       Web-1, Web-2               Web-3, Web-4
       Redis-1 (master)           Redis-2 (replica)
       [DB Master]   <------->    [DB Replica]
       [File Storage] --rsync-->  [File Storage Mirror]
```
Key decisions to make:
- Where is the DB master? — in one region only. Writes always go to the master; reads can be distributed.
- How to synchronize files? — real-time replication is expensive; rsync every 1–5 minutes is typical.
- How to manage sessions? — via Redis with cross-region replication.
- What to do if the inter-region link fails? — decide: operate only from the master region, or allow data divergence?
GeoDNS: User Routing
The simplest layer is DNS-based geolocation routing, offered by Cloudflare, AWS Route 53, and Yandex Cloud DNS.
```
; Zone example with geo-routing
; European users -> MSK nodes
@ 300 IN A 185.10.1.100 ; geo: EU, RU-west
@ 300 IN A 195.20.2.100 ; geo: RU-east, KZ
```
GeoDNS limitation: TTL affects failover switching speed. With a TTL of 300 seconds, the client caches for 5 minutes. For fast failover — Anycast BGP (one IP, different servers in different locations, routing at the network level).
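To make the TTL trade-off concrete, here is a rough worst-case estimate of how long clients may keep hitting a dead node after failover. The 30-second health-check interval is an assumed value for illustration, not something prescribed by any DNS provider:

```shell
#!/bin/sh
# Worst-case window before all clients reach the new node after a failover:
# the record TTL (clients may cache the stale A record that long) plus the
# monitoring interval needed to detect the outage (assumed to be 30 s here).
ttl=300            # TTL from the zone example above, seconds
check_interval=30  # assumed health-check period, seconds
worst_case=$((ttl + check_interval))
echo "worst-case failover window: ${worst_case}s"
```

Lowering the TTL shrinks this window at the cost of more DNS queries; Anycast removes the client-cache term entirely.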
Configuring the Cluster Module in Bitrix
Bitrix ships the cluster module (Bitrix Web Cluster), which manages distributed nodes. Key settings are in /bitrix/.settings.php:
```php
'connections' => [
    'value' => [
        'default' => [
            // concrete driver class; MysqlCommonConnection is abstract
            'className' => '\\Bitrix\\Main\\DB\\MysqliConnection',
            'host' => '10.0.1.10', // master (MSK)
            'port' => 3306,
            'database' => 'bitrix_db',
            'login' => 'bitrix',
            'password' => '***',
            'options' => 2,
        ],
        'slave' => [
            'className' => '\\Bitrix\\Main\\DB\\MysqliConnection',
            'host' => '10.0.2.10', // replica (EKB)
            'port' => 3306,
            'database' => 'bitrix_db',
            'login' => 'bitrix_ro', // read-only account for the replica
            'password' => '***',
            'options' => 2,
        ],
    ],
],
```
Read queries are directed to the replica via:
```php
\Bitrix\Main\Application::getConnection('slave')->query("SELECT ...");
```
Standard Bitrix APIs (D7 ORM, CIBlockElement::GetList) use the default connection. Automatic read/write splitting requires an intermediate layer — ProxySQL or a custom wrapper.
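The routing rule such an intermediate layer implements can be sketched in a few lines; `route_query` is a hypothetical helper for illustration, not a Bitrix or ProxySQL API:

```shell
#!/bin/sh
# Naive read/write splitter: SELECTs go to the replica, everything else
# (INSERT/UPDATE/DELETE, DDL, transactions) goes to the master.
# Real proxies such as ProxySQL match on query digests case-insensitively
# and handle more edge cases; this sketch only covers uppercase SQL.
route_query() {
  case "$1" in
    SELECT*"FOR UPDATE"*) echo "master" ;;   # locking read must hit the master
    SELECT*)              echo "replica" ;;  # plain read
    *)                    echo "master" ;;   # writes, DDL, everything else
  esac
}

route_query "SELECT NAME FROM b_iblock_element"   # -> replica
route_query "UPDATE b_sale_order SET STATUS='P'"  # -> master
```

Note the `SELECT ... FOR UPDATE` branch: it is a read syntactically but takes row locks, so it must be routed to the master.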
Cross-Region DB Replication
MySQL GTID replication over an encrypted channel (stunnel or WireGuard):
```ini
# On the master (MSK)
[mysqld]
server-id                = 1
log_bin                  = /var/log/mysql/mysql-bin.log
gtid_mode                = ON
enforce_gtid_consistency = ON
binlog_format            = ROW
```
```ini
# On the replica (EKB)
[mysqld]
server-id                = 2
gtid_mode                = ON
enforce_gtid_consistency = ON
read_only                = ON
relay_log                = /var/log/mysql/relay-bin.log
```
```sql
-- On the replica
CHANGE MASTER TO
    MASTER_HOST = '10.0.1.10',
    MASTER_USER = 'replication',
    MASTER_PASSWORD = '***',
    MASTER_AUTO_POSITION = 1,
    MASTER_SSL = 1;
START SLAVE;
SHOW SLAVE STATUS\G
-- Seconds_Behind_Master: 0 — replica has caught up to the master
```
Cross-region replication lag is normally 50–200 ms on a 100 Mbps link with 20–30 ms RTT. A critical edge case: if a user creates an order (a write to the master) and immediately reads its status from the replica, then with a lag above 200 ms they may not see their own order. Solution: after a write, pin that session's reads to the master for 5–10 seconds (read-your-writes consistency).
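The session-pinning rule described above can be sketched as follows; `pick_db` and the 10-second window are illustrative, not a built-in Bitrix mechanism:

```shell
#!/bin/sh
# After a write, a session's reads are pinned to the master for a short
# window so the user never reads a stale replica (read-your-writes).
PIN_SECONDS=10  # assumed pin window, within the 5-10 s range from the text

pick_db() {
  now=$1         # current Unix timestamp
  last_write=$2  # timestamp of the session's last write, e.g. from a cookie
  if [ $((now - last_write)) -lt "$PIN_SECONDS" ]; then
    echo "master"   # recent write: the replica may still lag behind
  else
    echo "replica"  # window expired: safe to read from the replica
  fi
}

pick_db 1000 995   # 5 s after a write  -> master
pick_db 1000 980   # 20 s after a write -> replica
```

In practice the last-write timestamp can be carried in a session cookie so any node in any region can apply the same rule.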
File Synchronization
upload/ files (media, documents) must be available on all nodes. Options:
Option 1: S3-compatible storage (recommended)
Store upload/ in S3 (Yandex Object Storage, AWS S3, MinIO). Bitrix can work with S3 via the bitrix.cloud module or a custom file handler. A CDN in front of S3 serves files from the nearest region.
Option 2: Bidirectional rsync
```shell
# Sync MSK -> EKB every minute (crontab entries must be single lines;
# cron does not support backslash continuations)
*/1 * * * * rsync -az --delete /var/www/bitrix/upload/ ekb-storage:/var/www/bitrix/upload/
# Reverse sync for files uploaded to the EKB node
*/1 * * * * rsync -az ekb-storage:/var/www/bitrix/upload/new/ /var/www/bitrix/upload/new/
```
The problem with bidirectional rsync is conflicts when files are uploaded simultaneously in both regions. For production it is better to prohibit file uploads on regional nodes — all uploads are proxied to the master region.
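If uploads on regional nodes cannot be prohibited, one mitigation is to make filenames collision-free by construction, so bidirectional rsync never sees the same path on both sides. A sketch; the naming scheme is an assumption, not a Bitrix convention:

```shell
#!/bin/sh
# Prefix every uploaded file with the region code and a Unix timestamp so
# two regions can never produce the same path and rsync never clobbers
# one region's file with another's.
REGION="ekb"  # set per node: msk / ekb

upload_name() {
  orig=$1
  printf '%s_%s_%s\n' "$REGION" "$(date +%s)" "$orig"
}

upload_name "invoice.pdf"  # e.g. ekb_1718000000_invoice.pdf
```

With unique names, `--delete` must still be used carefully; the simplest safe layout is a per-region subdirectory that only its own region writes to.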
Redis: Distributed Sessions and Cache
User sessions in a geo-cluster must be accessible regardless of which node handles the next request:
```php
// /bitrix/.settings.php
'session' => [
    'value' => [
        'mode' => 'separated',
        'handlers' => [
            'general' => [
                'type' => 'redis',
                'host' => '10.0.1.20', // Redis MSK (master)
                'port' => 6379,
            ],
        ],
    ],
],
'cache' => [
    'value' => [
        'type' => 'redis',
        'redis' => [
            'host' => '10.0.1.20',
            'port' => 6379,
        ],
        'sid' => 'bitrix_geo',
    ],
],
```
Use Redis Sentinel or Redis Cluster for HA. In a geo-distributed setup Redis replicates asynchronously. The cache can be kept local to each region (saving inter-region bandwidth); sessions live only in the master region, or in a Redis Cluster with cross-region replication.
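A minimal Sentinel configuration for the MSK/EKB pair might look like this. The master name `bitrix-sessions`, the timeouts, and the quorum of 2 are illustrative values; note that with only two regions, a third Sentinel in a tie-breaker location is needed so a quorum can form during a region outage:

```
# sentinel.conf -- run one Sentinel per region plus a tie-breaker
sentinel monitor bitrix-sessions 10.0.1.20 6379 2
sentinel down-after-milliseconds bitrix-sessions 5000
sentinel failover-timeout bitrix-sessions 60000
```

A cross-region failover also changes the Redis master's address, so the `host` in `.settings.php` should point at a Sentinel-aware proxy or VIP rather than a fixed IP.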
Traffic Splitting: What Can Be Regionalized and What Cannot
| Operation | Available on regional node | Note |
|---|---|---|
| Catalog read | Yes | From DB replica |
| Product/category page | Yes | From cache or replica |
| Search | Yes | Elasticsearch with replication |
| Add to cart | No | Master only |
| Checkout | No | Master only + master DB |
| File upload | No | S3 or master node only |
| Authentication | No | Sessions — master Redis |
For a Bitrix store this means: catalog pages are served from the nearest region; checkout is always proxied to the master region. Split-routing is implemented at the nginx level:
```nginx
location /bitrix/components/bitrix/sale. {
    proxy_pass http://msk_master;   # orders always go to MSK
}
location / {
    proxy_pass http://geo_cluster;  # everything else: nearest node
}
```
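The `msk_master` and `geo_cluster` upstreams referenced in the snippet above are not defined there; on a regional node they might be declared like this (the web-node IPs and the `backup` fallback are hypothetical, chosen for illustration):

```
upstream msk_master {
    server 10.0.1.10:80;          # hypothetical MSK web node
}
upstream geo_cluster {
    server 127.0.0.1:8080;        # local regional node first
    server 10.0.1.10:80 backup;   # fall back to MSK if the local node is down
}
```

The `backup` flag gives a degraded-but-working mode during a regional outage: latency rises, but requests are still served.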
Setup Timeline
| Phase | Content | Duration |
|---|---|---|
| Architecture design | Scheme, solution choices, RPO/RTO agreement | 2–3 days |
| DB replication setup | GTID, lag monitoring, failover test | 2–3 days |
| Redis + sessions setup | Sentinel/Cluster, .settings.php | 1–2 days |
| File synchronization | S3 or rsync + nginx configs | 1–2 days |
| GeoDNS + load balancer | Cloudflare/Route53, split-routing nginx | 1–2 days |
| Load testing and drill | Failover verification, latency measurement | 2–3 days |