Cluster Configuration for 1C-Bitrix
A single server cannot scale indefinitely. Under peak loads (promotions, holidays) the site goes down or responds in 10+ seconds — vertical scaling hits cost and physical hardware limits. 1C-Bitrix supports horizontal scaling through its built-in web cluster mechanism: multiple application servers, a shared database with replication, and distributed cache.
1C-Bitrix Cluster Architecture
Standard scheme for high load:
                  [Load Balancer]
                 /       |       \
          [web-1]     [web-2]     [web-3]
              |          |          |
        [Shared Storage - NFS/GlusterFS]
                         |
          [DB Master] ---> [DB Replica-1]
                      ---> [DB Replica-2]
                         |
          [Memcached / Redis Cluster]
          [Elasticsearch Cluster]
All web nodes work with a single file storage, shared database, and shared cache. File uploads (images, price lists) go to a shared storage accessible by all nodes.
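The balancer tier in the scheme above can be a plain nginx upstream. A minimal sketch — the node addresses and the `bitrix_web` name are illustrative, not taken from a real project:

```nginx
# Hypothetical addresses for the three web nodes
upstream bitrix_web {
    ip_hash;  # sticky by client IP; useful until sessions are moved to shared storage
    server 10.0.0.11 max_fails=3 fail_timeout=10s;
    server 10.0.0.12 max_fails=3 fail_timeout=10s;
    server 10.0.0.13 max_fails=3 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://bitrix_web;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Once sessions live in Memcached/Redis, `ip_hash` can be dropped in favor of plain round-robin.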
Requirements for a Clustered Project
Before migrating to a cluster, verify:
- No session data stored in $_SESSION without a shared session storage
- No direct writes to the local filesystem (temp files in /tmp rather than on shared storage, cache in Memcached)
- No hardcoded paths dependent on a specific server
- 1C-Bitrix cache files (/bitrix/cache/) are either NFS-mounted or moved to Memcached
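The first requirement — shared session storage — is usually met through the session section of .settings.php. A sketch assuming a Memcached host at 10.0.0.30 (the exact schema should be verified against your Bitrix version):

```php
// /bitrix/.settings.php fragment — host address is a placeholder,
// and the 'session' schema may differ between Bitrix versions
'session' => [
    'value' => [
        'mode' => 'default',
        'handlers' => [
            'general' => [
                'type' => 'memcache',
                'host' => '10.0.0.30',
                'port' => 11211,
            ],
        ],
    ],
],
```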
Configuring the Web Cluster Module
In the admin panel: Management → Performance → Cluster.
Activation via PHP:
\Bitrix\Main\Loader::includeModule('cluster');
// Register cluster nodes
$cluster = new \CCluster();
$cluster->Add([
'NAME' => 'web-02',
'HOST' => '10.0.0.12',
'PORT' => 80,
'STATUS' => 'ACTIVE',
]);
Shared Storage: NFS vs GlusterFS
NFS — simpler to configure, suitable for 2–3 nodes in the same datacenter:
# On the NFS server
apt install nfs-kernel-server
echo "/var/www/bitrix/upload 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -a
# On the web nodes
apt install nfs-common
mount -t nfs 10.0.0.20:/var/www/bitrix/upload /var/www/bitrix/upload
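The manual mount above does not survive a reboot; the same mount belongs in /etc/fstab on each web node:

```
# /etc/fstab on each web node; _netdev delays mounting until the network is up
10.0.0.20:/var/www/bitrix/upload  /var/www/bitrix/upload  nfs  defaults,_netdev  0  0
```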
Mount only directories with user-generated content: upload/, cache/ (if not Redis), resize_cache/.
GlusterFS — a distributed filesystem with built-in replication. It is more complex to configure than NFS, but it removes the single point of failure that a standalone NFS server represents.
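For reference, a replicated GlusterFS volume for the same upload directory is created roughly like this — host names and brick paths here are assumptions:

```shell
# On one gluster node, after `apt install glusterfs-server` on all of them:
gluster peer probe web-02
gluster peer probe web-03
gluster volume create bitrix_upload replica 3 \
    web-01:/data/gluster/upload web-02:/data/gluster/upload web-03:/data/gluster/upload
gluster volume start bitrix_upload

# On each web node:
mount -t glusterfs web-01:/bitrix_upload /var/www/bitrix/upload
```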
Distributed Cache
Without a shared cache, each web node has its own isolated file-based cache. After a product update, invalidation occurs only on one node — the others serve stale data.
// /bitrix/.settings.php — identical on all nodes
'cache' => [
'value' => [
'type' => 'memcache',
'memcache' => [
['host' => '10.0.0.30', 'port' => 11211],
['host' => '10.0.0.31', 'port' => 11211],
],
'sid' => 'bitrix_production',
],
],
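With this in place, cache written on one node becomes visible on the others through the standard D7 cache API. A sketch — the cache key and init directory are illustrative:

```php
// Runs identically on any node: the first node to miss regenerates the entry
// in shared Memcached, the rest read that same entry.
$cache = \Bitrix\Main\Data\Cache::createInstance();
if ($cache->initCache(3600, 'promo_banner', '/cluster_demo')) {
    $data = $cache->getVars();           // hit: served from shared Memcached
} elseif ($cache->startDataCache()) {
    $data = ['generated_at' => time()];  // miss: regenerate once, cluster-wide
    $cache->endDataCache($data);
}
```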
Synchronizing Configuration Files
.settings.php, dbconn.php, and php_interface/ must be identical on all nodes. Use rsync via cron or Ansible:
# Master node syncs configs to the others
rsync -az /var/www/bitrix/bitrix/.settings.php web-02:/var/www/bitrix/bitrix/
rsync -az /var/www/bitrix/bitrix/.settings.php web-03:/var/www/bitrix/bitrix/
In production environments, configuration is stored in Git and deployed via CI/CD simultaneously to all nodes.
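The per-node rsync calls above can be generalized into a loop that covers everything the section requires to stay identical (.settings.php and php_interface/, which contains dbconn.php). Node names are hypothetical; the commands are echoed as a dry run — remove the echo to actually sync:

```shell
#!/bin/sh
# Hypothetical node inventory; adjust to your cluster.
NODES="web-02 web-03"
# Paths (relative to the docroot) that must be identical on every node.
ITEMS="bitrix/.settings.php bitrix/php_interface/"

for node in $NODES; do
    for item in $ITEMS; do
        # echo makes this a dry run; drop it to execute the sync
        echo rsync -az --delete "/var/www/bitrix/$item" "$node:/var/www/bitrix/$item"
    done
done
```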
Timeline
Designing and deploying a 3-node web cluster with NFS storage, database replication, and Memcached takes 5–10 business days, depending on project complexity and the current state of the infrastructure.