Setting up load balancing for 1C-Bitrix


Load Balancing Configuration for 1C-Bitrix

Two 1C-Bitrix servers without a load balancer are not a cluster — they are two independent websites. You need a single entry point that distributes requests, checks backend health, and removes unavailable nodes from rotation without manual intervention. For 1C-Bitrix, correct load balancer configuration includes special handling of file uploads, the admin section, and push-server WebSockets.

HAProxy vs nginx Upstream

HAProxy — a specialized load balancer operating at L4 and L7. Flexible routing, detailed statistics, health checks with custom HTTP probes. Preferred for production.
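
The statistics mentioned above are served as a built-in web page; a minimal sketch (the port and URI here are arbitrary choices, not defaults):

listen stats
    bind *:8404
    stats enable
    stats uri /stats
    stats refresh 10s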

nginx upstream — simpler to configure, integrated with the rest of the nginx configuration. Sufficient for 2–3 nodes without complex routing logic. Keep in mind that open-source nginx only does passive health checks (max_fails / fail_timeout); active HTTP probes require a third-party module or the commercial version.

HAProxy Configuration for 1C-Bitrix

# /etc/haproxy/haproxy.cfg

global
    maxconn 50000
    log /dev/log local0
    tune.ssl.default-dh-param 2048

defaults
    mode http
    timeout connect 5s
    timeout client 60s
    timeout server 60s
    option http-server-close
    option forwardfor
    log global

# Frontend: accept HTTPS
frontend bitrix_https
    bind *:443 ssl crt /etc/ssl/site.pem
    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Real-IP %[src]

    # Admin section — route to a dedicated backend
    acl is_admin path_beg /bitrix/admin
    use_backend bitrix_admin if is_admin

    # Push server — separate backend with long-lived connections
    # (subscribe endpoints /bitrix/sub, /bitrix/subws; publish endpoint /bitrix/pub)
    acl is_push path_beg /bitrix/sub /bitrix/pub
    use_backend bitrix_push if is_push

    default_backend bitrix_web

# Main backend — web nodes
backend bitrix_web
    balance leastconn
    option httpchk GET /bitrix/admin/cluster_check.php
    http-check expect status 200

    server web-01 10.0.0.11:80 check inter 5s rise 2 fall 3 weight 100
    server web-02 10.0.0.12:80 check inter 5s rise 2 fall 3 weight 100
    server web-03 10.0.0.13:80 check inter 5s rise 2 fall 3 weight 100

# Admin panel — master node only
backend bitrix_admin
    server web-01 10.0.0.11:80 check

# Push server
backend bitrix_push
    timeout server 3600s
    server push-01 10.0.0.14:8893 check
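
The config above assumes PHP sessions are shared between the web nodes (stored in memcached or the database, as the Bitrix web cluster module supports). If they are not, each client must be pinned to one node; a source-IP stickiness sketch as an alternative shape of the main backend:

backend bitrix_web
    balance leastconn
    stick-table type ip size 200k expire 30m
    stick on src
    server web-01 10.0.0.11:80 check
    server web-02 10.0.0.12:80 check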

balance leastconn — requests are directed to the server with the fewest active connections. For 1C-Bitrix this is preferable to plain roundrobin: requests are heterogeneous in execution time, and a node tied up by a heavy import should automatically receive fewer new requests rather than an equal share.

rise 2 fall 3 — a node is considered alive after 2 successful checks, dead after 3 failures. A balance between fast failure detection and false positives.
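
To watch those rise/fall transitions at runtime, enable an admin socket in the global section and query it (the socket path is an assumption):

global
    stats socket /var/run/haproxy/admin.sock mode 600 level admin

# then, from the shell:
# echo "show stat" | socat stdio /var/run/haproxy/admin.sock | cut -d',' -f1,2,18
# fields 1, 2, 18 = backend, server, status (UP/DOWN)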

nginx Upstream as an Alternative

upstream bitrix_backends {
    least_conn;
    server 10.0.0.11:80 weight=1 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:80 weight=1 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/site.pem;        # same certificate as in the HAProxy example
    ssl_certificate_key /etc/ssl/site.key;    # nginx keeps the key as a separate file

    location / {
        proxy_pass http://bitrix_backends;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # do not forward "Connection: close" to the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # For file uploads
        client_max_body_size 256m;
        proxy_read_timeout 120s;
    }
}

keepalive 32 — persistent connections between nginx and the backends. Without keepalive, every proxied request opens a new TCP connection to the backend web server — handshake overhead on every request. Note that upstream keepalive only takes effect with proxy_http_version 1.1 and an empty Connection header (proxy_set_header Connection "";).
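
To take a node out of rotation for maintenance, open-source nginx needs a config edit plus a reload; a sketch (addresses match the example above):

upstream bitrix_backends {
    least_conn;
    server 10.0.0.11:80;
    server 10.0.0.12:80 down;   # drained: no new requests are sent here
    keepalive 32;
}

# apply without downtime:
# nginx -s reload

HAProxy can do the same at runtime without a reload by sending "set server bitrix_web/web-02 state maint" to its admin socket.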

Proxying File Uploads

Uploading large files (price lists 100+ MB, video) through the load balancer requires configuration:

# On the load balancer
client_max_body_size 512m;       # allow large request bodies
proxy_request_buffering off;     # stream the body to the backend instead of buffering it first
proxy_max_temp_file_size 0;      # do not spill proxied responses to temp files on disk
proxy_read_timeout 600s;         # give the backend time to process the upload

Without proxy_request_buffering off, nginx reads the entire request body before forwarding it to the backend: small bodies are held in memory, anything larger spills to temporary files on disk. A 512 MB upload is first written to the balancer's disk and then re-read, doubling the transfer; ten parallel uploads mean 5 GB of temporary files and I/O spent on buffering alone.
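
HAProxy, by contrast, streams request bodies to the backend by default and buffers them only if option http-buffer-request is set, so on the HAProxy side large uploads need only longer timeouts (the values are examples):

frontend bitrix_https
    timeout client 600s    # client may push the body slowly

backend bitrix_web
    timeout server 600s    # backend may process the upload slowly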

1C-Bitrix: Forwarding the Real Client IP

1C-Bitrix uses the user's IP for sessions and rate limiting. Without extra configuration, it sees the load balancer's IP on every request. Override it early, in /bitrix/php_interface/init.php:

// /bitrix/php_interface/init.php
// Trust the header only when it comes from the balancer (10.0.0.10 is an example IP)
if ($_SERVER['REMOTE_ADDR'] === '10.0.0.10'
    && !empty($_SERVER['HTTP_X_REAL_IP'])
    && filter_var($_SERVER['HTTP_X_REAL_IP'], FILTER_VALIDATE_IP)) {
    $_SERVER['REMOTE_ADDR'] = $_SERVER['HTTP_X_REAL_IP'];
}

In the configuration above, HAProxy passes the client address both in X-Forwarded-For (option forwardfor) and in X-Real-IP (http-request set-header); the nginx variant sets X-Real-IP and X-Forwarded-For as well. Whichever header you read in init.php, keep it in sync with the load balancer configuration.
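
An alternative that also fixes the backend access logs: restore the address at the web-server level with nginx's realip module on each backend node (the balancer IP 10.0.0.10 is an example):

# backend nginx, http or server context
set_real_ip_from 10.0.0.10;   # trust the header only from the load balancer
real_ip_header X-Real-IP;

After this, $remote_addr on the backend — and therefore REMOTE_ADDR in PHP — already holds the client IP, and the init.php override becomes unnecessary.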