Configuring Horizontal Scaling for 1C-Bitrix
Horizontal scaling means adding nodes instead of increasing the resources of a single server. For 1C-Bitrix, this means running multiple web servers behind a load balancer with a shared database and shared file storage. Licensing-wise, this is permitted only for the "Small Business" edition and above; the "Start" and "Standard" editions do not support clustering.
A typical trigger is peak load (sales events, ad campaigns): a single server starts swapping or maxes out CPU, and a naive scale-out — simply cloning the server — fails because state (sessions, files, cache) becomes inconsistent between nodes.
What Must Be Solved for Horizontal Scaling
Three key problems:
- Sessions — PHP sessions are stored on disk by default. If requests from the same user are balanced to different servers, the session is lost.
- Files — user-uploaded files and cache-generated files must be accessible to all nodes.
- Bitrix cache — the managed cache (/bitrix/cache/) is stored on disk. When the cache is cleared on one node, the other nodes continue serving stale data.
Infrastructure Diagram
Internet
→ Load Balancer (nginx / HAProxy / cloud LB)
├─ Web Node 1 (nginx + php-fpm)
├─ Web Node 2 (nginx + php-fpm)
└─ Web Node N (nginx + php-fpm)
↓ shared resources
├─ MySQL Master (write) + MySQL Slave (read)
├─ Redis Cluster (sessions + Bitrix cache)
└─ NFS / GlusterFS / S3 (shared file volume)
Sessions in Redis
Bitrix natively supports storing sessions in Memcached and Redis. Configuration in /bitrix/.settings.php:
return [
    'session' => [
        'value' => [
            'mode' => 'default',
            'handlers' => [
                'general' => [
                    'type' => 'redis',
                    'host' => '127.0.0.1', // or Redis Sentinel/Cluster address
                    'port' => 6379,
                    'serializer' => \Redis::SERIALIZER_PHP,
                ],
            ],
        ],
    ],
];
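Once the handler is switched, it is worth confirming that new sessions actually land in Redis. A quick check, assuming redis-cli access to the Redis host (the key pattern is an assumption — the prefix depends on the session handler, e.g. phpredis uses PHPREDIS_SESSION: by default):

```shell
# List session-looking keys; adjust the pattern to your handler's prefix
redis-cli -h 127.0.0.1 --scan --pattern '*SESSION*' | head -n 5

# Watch writes live while opening the site in a browser
redis-cli -h 127.0.0.1 monitor | grep -i session
```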
For Redis Sentinel (high availability):
'general' => [
    'type' => 'redis',
    'sentinels' => [
        ['host' => 'sentinel-1', 'port' => 26379],
        ['host' => 'sentinel-2', 'port' => 26379],
        ['host' => 'sentinel-3', 'port' => 26379],
    ],
    'master_name' => 'mymaster',
],
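Before pointing Bitrix at Sentinel, verify that the sentinels agree on the current master:

```shell
# Ask any sentinel which node is currently master for "mymaster"
redis-cli -h sentinel-1 -p 26379 sentinel get-master-addr-by-name mymaster

# Full sentinel view of the master: address, flags, quorum state
redis-cli -h sentinel-1 -p 26379 sentinel master mymaster
```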
Bitrix Cache in Redis
Move the managed cache from the filesystem to Redis:
// /bitrix/.settings.php — cache section
'cache' => [
    'value' => [
        'type' => 'redis',
        'redis' => [
            'host' => '127.0.0.1',
            'port' => 6379,
            'serializer' => \Redis::SERIALIZER_IGBINARY, // faster than the PHP serializer
        ],
    ],
],
Using \Redis::SERIALIZER_IGBINARY requires the igbinary PHP extension. Compared to PHP's native serializer, it typically produces ~40% smaller payloads and deserializes faster.
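Whether igbinary is present is easy to check per node. A sketch for Debian/Ubuntu-style hosts (the package name is an assumption — it varies by distro and PHP build):

```shell
# Is the extension loaded?
php -m | grep -i igbinary || echo "igbinary missing"

# Debian/Ubuntu package (assumption):
apt-get install -y php-igbinary

# or build from PECL:
# pecl install igbinary
```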
For HTML page cache (composite cache, bitrix:page.polycore), Redis is less efficient since those objects are large. It is better to keep them on NFS or use nginx proxy_cache at the load balancer level.
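A minimal proxy_cache sketch at the balancer, caching only anonymous traffic. The bypass cookie is an assumption — Bitrix sets PHPSESSID plus its own cookies; adjust the condition to your installation:

```nginx
proxy_cache_path /var/cache/nginx/bitrix levels=1:2 keys_zone=bitrix_html:64m
                 max_size=2g inactive=10m;

server {
    # ... inside the existing server block ...
    location / {
        proxy_cache bitrix_html;
        proxy_cache_key $scheme$host$request_uri;
        proxy_cache_valid 200 301 1m;
        # Never serve cached pages to users with a session cookie
        proxy_cache_bypass $cookie_PHPSESSID;
        proxy_no_cache $cookie_PHPSESSID;
        proxy_pass http://bitrix_backend;
    }
}
```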
Shared File Volume
Directories that must be shared:
- /upload/ — all user-uploaded files
- /bitrix/cache/ — if the cache is file-based (not Redis)
- /bitrix/managed_cache/ — same
- /bitrix/html_pages/ — HTML page cache
NFS — the simplest option. On a dedicated server or NAS:
# On the NFS server
echo "/srv/bitrix-shared 10.0.0.0/24(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -ra
# On each web node
mount -t nfs nfs-server:/srv/bitrix-shared /var/www/bitrix/upload
Entry in /etc/fstab:
nfs-server:/srv/bitrix-shared /var/www/bitrix/upload nfs rw,sync,hard,intr,timeo=30 0 0
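After mounting, verify that the volume is genuinely shared between nodes:

```shell
# On web node 1: drop a probe file into the shared volume
touch /var/www/bitrix/upload/.nfs-probe

# On web node 2: the probe must appear immediately
ls -l /var/www/bitrix/upload/.nfs-probe

# Confirm the mount on every node
mount | grep bitrix-shared
```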
GlusterFS — for production: replicates between nodes, no single point of failure:
gluster volume create bitrix-shared replica 2 \
node1:/data/gluster node2:/data/gluster
gluster volume start bitrix-shared
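The volume still has to be mounted on every web node via the GlusterFS FUSE client (package names vary; glusterfs-client on Debian/Ubuntu):

```shell
# One-off mount on a web node
mount -t glusterfs node1:/bitrix-shared /var/www/bitrix/upload

# /etc/fstab entry; backupvolfile-server lets the client fall back to
# node2 if node1 is unreachable at mount time
echo "node1:/bitrix-shared /var/www/bitrix/upload glusterfs defaults,_netdev,backupvolfile-server=node2 0 0" >> /etc/fstab
```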
S3-compatible storage — for cloud environments. Bitrix can offload user files to S3-compatible object storage via its cloud storage ("clouds") module, which is configured in the admin panel rather than in /bitrix/.settings.php.
MySQL: Read Replication
With multiple web nodes, database load increases proportionally. Configure a read replica:
// /bitrix/.settings.php — connections section
'connections' => [
    'value' => [
        'default' => [
            'className' => '\\Bitrix\\Main\\DB\\MysqliConnection',
            'host' => 'mysql-master',
            'database' => 'bitrix',
            'login' => 'bitrix',
            'password' => 'secret',
        ],
        'slave' => [
            'className' => '\\Bitrix\\Main\\DB\\MysqliConnection',
            'host' => 'mysql-slave',
            'database' => 'bitrix',
            'login' => 'bitrix_ro',
            'password' => 'secret_ro',
        ],
    ],
],
Route SELECT queries to the slave via a custom connection resolver, or use ProxySQL for transparent read/write splitting at the proxy level. Bitrix's own Web Cluster module can also manage slave connections natively once replication is configured.
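Whatever the routing mechanism, replica health should be monitored. A basic check (note that SHOW SLAVE STATUS was renamed SHOW REPLICA STATUS in MySQL 8.0.22+):

```shell
mysql -h mysql-slave -u bitrix_ro -p -e 'SHOW SLAVE STATUS\G' \
  | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
# Both threads must report "Yes"; Seconds_Behind_Master should stay near 0
```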
Load Balancer: nginx Upstream
upstream bitrix_backend {
    least_conn;  # balance by fewest active connections
    server web-node-1:80 weight=1 max_fails=3 fail_timeout=30s;
    server web-node-2:80 weight=1 max_fails=3 fail_timeout=30s;
    server web-node-3:80 weight=1 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

server {
    listen 443 ssl;
    server_name example.com;

    location / {
        proxy_pass http://bitrix_backend;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # required for upstream keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_connect_timeout 5s;
        proxy_read_timeout 60s;
    }
}
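Before sending traffic, validate the config and exercise failover by hand:

```shell
# Validate and reload the balancer config
nginx -t && nginx -s reload

# Take one backend down and confirm the LB keeps answering
ssh web-node-2 'systemctl stop nginx'
curl -s -o /dev/null -w '%{http_code}\n' https://example.com/
ssh web-node-2 'systemctl start nginx'
```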
Sticky Sessions: Are They Needed?
With properly configured Redis sessions, sticky sessions are not needed. Any node can serve any request. The exception is file uploads via Bitrix's uploader.php: if a file is uploaded in chunks, all chunks must reach the same node. The solution is sticky sessions for PUT/POST requests to /upload/, or using a dedicated upload endpoint.
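One way to pin chunked uploads is a second upstream with ip_hash, used only for the uploader endpoint. The location pattern is an assumption — match it to the actual upload URL of your installation:

```nginx
upstream bitrix_upload {
    ip_hash;  # each client IP always hits the same node
    server web-node-1:80;
    server web-node-2:80;
    server web-node-3:80;
}

# Inside the existing server block:
location ~ uploader\.php$ {
    proxy_pass http://bitrix_upload;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```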
Scope of Work
- Architecture audit, zero-downtime migration plan
- Redis configuration: sessions + Bitrix cache
- NFS/GlusterFS setup for shared file storage
- nginx upstream configuration and health checks
- MySQL Master-Slave + ProxySQL or native Bitrix slave connection
- Codebase synchronization between nodes (rsync/git/Ansible)
- Testing with one node taken offline
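For the codebase-synchronization item, a typical rsync push from a deploy node looks like this (paths match those used throughout this article; shared directories are excluded because they live on the NFS/GlusterFS volume):

```shell
rsync -az --delete \
  --exclude 'upload/' \
  --exclude 'bitrix/cache/' \
  --exclude 'bitrix/managed_cache/' \
  --exclude 'bitrix/html_pages/' \
  /var/www/bitrix/ web-node-2:/var/www/bitrix/
```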
Timeline: basic 2-node cluster with Redis and NFS — 2–3 weeks. Production-ready setup with GlusterFS, HA Redis, and monitoring — 4–6 weeks.