Scaling Configuration for 1C-Bitrix
"We need scaling" — a request that most often means one of three things: the site is slow under peak load, significant traffic growth is planned, or high availability is required. These are different problems with different solutions. Horizontal scaling of 1C-Bitrix is not about adding another server — it is about re-architecting the system with a separation of component responsibilities.
Decomposition: What Scales Independently
| Component | Scaling approach | Complexity |
|---|---|---|
| PHP application | Horizontal (multiple nodes) | Medium |
| MySQL | Vertical + read replicas | Medium |
| Elasticsearch | Horizontal (shards/nodes) | High |
| Memcached/Redis | Horizontal (pool) | Low |
| File storage | NFS / S3-compatible | Medium |
| Static assets (CDN) | CDN offload | Low |
Start with the component that is the actual bottleneck — not the one that "seems right". It often turns out that MySQL is fine and the slowdown is in the PHP code.
Vertical vs Horizontal
Vertical (more CPU/RAM) — fast, requires no code changes, but it has a hard ceiling, and near the top end price grows faster than performance. Up to roughly 32 GB RAM on a DB server, vertical scaling is often more cost-effective than horizontal.
Horizontal — more complex (requires stateless architecture, shared storage, cache coordination), but has no upper limit and provides high availability.
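In front of stateless web nodes, horizontal scaling needs a balancer. A minimal nginx sketch — node IPs are assumptions, and the nodes must already share cache, sessions, and /upload:

```nginx
# Hedged sketch: nginx as an L7 balancer in front of two stateless web nodes.
upstream bitrix_web {
    least_conn;                                       # route to the least-loaded node
    server 10.0.0.11:80 max_fails=2 fail_timeout=10s; # node is pulled out after 2 failures
    server 10.0.0.12:80 max_fails=2 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://bitrix_web;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With `max_fails`/`fail_timeout`, a dead node is taken out of rotation automatically — this is what turns two nodes into basic high availability, not just extra capacity.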
Preparing Code for Horizontal Scaling
Key issues to resolve before adding nodes:
1C-Bitrix file cache. By default, cache is stored in /bitrix/cache/ on the local filesystem. With two servers — two independent caches, invalidation only on one. Move to Memcached or Redis.
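Switching the Bitrix cache backend is a configuration change, not a code change. A hedged sketch of `bitrix/.settings_extra.php`, assuming Memcached listens on 127.0.0.1:11211 (host, port, and prefix are assumptions):

```php
<?php
// bitrix/.settings_extra.php — hedged sketch; host/port/sid are assumptions.
// Every web node must point at the same Memcached pool, otherwise
// invalidation again happens on only one node.
return [
    'cache' => [
        'value' => [
            'type' => 'memcache',
            'memcache' => [
                'host' => '127.0.0.1',
                'port' => '11211',
            ],
            'sid' => 'bx_',   // cache key prefix, unique per site
        ],
    ],
];
```

After the switch, the contents of /bitrix/cache/ can be cleared — the shared pool becomes the single source of truth for all nodes.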
Local temporary files. Find them via grep:
grep -r "file_put_contents\|fopen\|tempnam" \
/var/www/bitrix/local/components/ \
/var/www/bitrix/local/modules/ | grep -v ".git"
Any access to local files from custom modules is a potential problem in a cluster.
Sessions. Move to Memcached/Redis (see the cluster configuration article).
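With the phpredis extension, moving sessions is two php.ini lines on every web node (the Redis address is an assumption):

```ini
; php.ini on each web node — hedged sketch, assuming Redis at 10.0.0.5:6379
session.save_handler = redis
session.save_path = "tcp://10.0.0.5:6379"
```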
Scaling via CDN
The fastest way to offload the application is to move static assets and images to a CDN. For 1C-Bitrix, configure via the cdn module or through nginx:
# Static assets with long TTL — cached by CDN
# (CSS/JS live under /bitrix/ and /local/, not only /upload/)
location ~* ^/(upload|bitrix|local)/.*\.(jpg|jpeg|webp|png|gif|css|js)$ {
    add_header Cache-Control "public, max-age=2592000";
    add_header Vary Accept-Encoding;
    # CDN picks up based on Cache-Control
}
In the CDN provider settings (Cloudflare, Bunny.net, VK Cloud CDN), specify your server as the origin. The CDN then caches static assets on edge nodes close to your users.
Result: requests for images and CSS/JS never reach your server at all — the CDN serves them from the nearest node to the user.
Scaling 1C Import
Importing large catalogs (100,000+ SKUs) is a resource-intensive task that must not run on production nodes. Dedicate a separate worker node:
[1C] ---> [Import Worker Node]
                   |
              [DB Master]
                   |
              [Web Nodes]  (read-only during import)
On the worker: PHP memory_limit = 1G, max_execution_time = 600, a dedicated PHP-FPM pool with 2–3 workers. During import, switch the web nodes to read from the replica.
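The dedicated pool described above can be sketched as a PHP-FPM config (file path, pool name, and PHP version are illustrative):

```ini
; /etc/php/8.2/fpm/pool.d/import.conf — hedged sketch for the worker node
[import]
user = www-data
group = www-data
listen = /run/php/import.sock
pm = static
pm.max_children = 3                        ; 2-3 workers is enough for import
php_admin_value[memory_limit] = 1G
php_admin_value[max_execution_time] = 600
```

`pm = static` is deliberate: the import load is predictable, and a fixed worker count keeps memory consumption on the node bounded.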
Auto-Scaling in the Cloud
For projects on Yandex Cloud, VK Cloud, or AWS, auto-scaling of web nodes is possible:
Instance Group / Auto Scaling Group:
- min_instances: 2
- max_instances: 10
- scale_up: CPU > 70% for 3 minutes
- scale_down: CPU < 30% for 10 minutes
- cooldown: 300s
Requirements: a server image with 1C-Bitrix pre-installed, configuration pulled from storage at instance startup, an Application Load Balancer that automatically registers new nodes.
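Pulling configuration at startup can be sketched as cloud-init user data; the storage URL and service name are assumptions:

```yaml
#cloud-config
# Hedged sketch: a freshly scaled node fetches current connection settings
# from object storage before it starts serving traffic.
runcmd:
  - curl -fsS https://storage.example/bitrix/.settings_extra.php -o /var/www/bitrix/bitrix/.settings_extra.php
  - systemctl restart php8.2-fpm
```

Keeping credentials and hosts out of the image and in storage means the image does not need rebuilding when a Redis or DB endpoint changes.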
Scaling Budget
Realistic figures for planning:
- CDN offload for static assets: 1–2 days of work, removes 40–60% of server load
- Moving cache to Memcached + 2 web nodes: 3–5 days, horizontal PHP scaling
- Full cluster (3 web + DB master/replica + shared storage): 8–15 days
- Cloud auto-scaling: 10–20 days (including DevOps infrastructure)