Setting up WebSockets for 1C-Bitrix

WebSocket is a protocol for a persistent two-way connection between the browser and the server. Without it, Bitrix chat works over long polling: the browser periodically asks the server (roughly every 20 seconds) whether there are new messages. With WebSocket the server pushes a message the instant it appears, which is a fundamental difference for a corporate portal, online chat, and real-time notifications.

How Bitrix uses WebSocket

The Pull module (Push and Pull) automatically switches from long polling to WebSocket when a Node.js push server is available. The client side is the JS library BX.PullClient: it tries WebSocket first and, on failure, falls back to SSE and then to long polling.
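Before debugging transports on the client, it is worth confirming that the push server is listening at all. A quick check, assuming the push server listens on port 9011 as in the proxy examples below (substitute your actual port):

```shell
# Show TCP listeners on the push-server port.
# No LISTEN line in the output means the push server is not
# running or is bound to a different port.
ss -ltn '( sport = :9011 )'
```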

The transport is determined in bitrix/js/pull/pull.js via the BX.Pull.config variable. For debugging you can force a specific transport:

BX.Pull.connect({
    serverEnabled: true,
    serverUrl: 'https://example.ru/bitrix/subws/',
    guestMode: false,
    userId: USER_ID,       // current user ID
    userHash: USER_HASH,   // the user's Pull channel hash
    transport: 'websocket' // force WebSocket
});

Nginx: correct WebSocket proxy configuration

WebSocket requires special handling in Nginx. The key is the Upgrade and Connection headers:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl http2;
    server_name example.ru;

    # ... SSL settings ...

    # WebSocket endpoint for push-server
    location /bitrix/subws/ {
        proxy_pass http://127.0.0.1:9011;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Critically important: without this Nginx drops the connection
        # after 60 seconds of inactivity
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;

        # Don't buffer — data must go in real time
        proxy_buffering off;
    }
}

The map $http_upgrade $connection_upgrade block lets a single location correctly handle both WebSocket requests (Upgrade: websocket) and regular HTTP requests.

Connection limits

Each WebSocket connection is an open file descriptor at the OS level. With 2000 users online that is 2000 descriptors from WebSocket alone, and Nginx additionally holds an upstream-side socket for every proxied connection.
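To see how many connections WebSocket actually consumes right now, you can count established TCP connections on the push-server port (9011 here, matching the proxy_pass examples; adjust to your setup):

```shell
# Count established TCP connections held by the local push server.
# -H suppresses the header line, -t limits output to TCP, -n skips DNS.
ss -Htn state established '( sport = :9011 )' | wc -l
```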

System limits:

# Current limit
ulimit -n

# Increase for current session
ulimit -n 65535

# Permanently via /etc/security/limits.conf
cat >> /etc/security/limits.conf << 'EOF'
www-data soft nofile 65535
www-data hard nofile 65535
nginx   soft nofile 65535
nginx   hard nofile 65535
EOF

# For systemd services
# In /etc/systemd/system/nginx.service.d/override.conf:
[Service]
LimitNOFILE=65535
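A sketch of applying the systemd override and verifying that it took effect; the PID-file path /run/nginx.pid is the common default and may differ on your distribution:

```shell
# Reload unit definitions and restart the service so LimitNOFILE applies
sudo systemctl daemon-reload
sudo systemctl restart nginx

# Confirm the running master process actually got the new limit:
# the soft limit is the fourth column of the "Max open files" row
awk '/Max open files/ {print $4}' "/proc/$(cat /run/nginx.pid)/limits"
```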

Nginx worker_connections:

events {
    worker_connections 10240;
    use epoll;         # the most efficient event method on Linux
    multi_accept on;   # accept multiple connections per syscall
}
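The capacity math is worth doing explicitly: each proxied WebSocket consumes two worker connections, one to the client and one to the upstream push server, so with 4 workers and worker_connections 10240 the theoretical ceiling is:

```shell
# 4 workers x 10240 connections, halved because Nginx holds both
# the client-side and the upstream-side socket for every WebSocket
echo $(( 4 * 10240 / 2 ))   # 20480
```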

Node.js push-server: cluster mode

A single Node.js process uses one CPU core. Under high load (1000+ connections) you need cluster mode: several worker processes behind a load balancer.

In the push server's config.json:

{
  "cluster": {
    "workers": 4,
    "sticky": true
  }
}

sticky: true enables "sticky" connections: a given client is always routed to the same worker. Without it, a WebSocket connection can break when requests land on different workers.

For balancing multiple Node.js instances in Nginx:

upstream push_backend {
    ip_hash; # sticky sessions by IP
    server 127.0.0.1:9011;
    server 127.0.0.1:9012;
    server 127.0.0.1:9013;
    server 127.0.0.1:9014;
}

location /bitrix/subws/ {
    proxy_pass http://push_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 3600s;
    proxy_buffering off;
}

Debugging WebSocket connection

In Chrome DevTools, open Network and filter by WS: it shows every WebSocket connection along with the individual frames.

Via curl (handshake only; the --http1.1 flag matters, because over TLS curl negotiates HTTP/2 by default and the Upgrade mechanism does not exist there):

curl -v --http1.1 \
  -H "Upgrade: websocket" \
  -H "Connection: Upgrade" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==" \
  -H "Sec-WebSocket-Version: 13" \
  https://example.ru/bitrix/subws/
# Expected response: HTTP/1.1 101 Switching Protocols

Common issues

Connection drops every 60 seconds. proxy_read_timeout in Nginx is not set and defaults to 60s, so an idle WebSocket is closed before any frame arrives.

WebSocket doesn't work behind Cloudflare. Cloudflare does proxy WebSocket, but lower-tier plans limit the number of concurrent connections. If connections are unstable, fall back to long polling or move the WebSocket hostname out of the proxy (DNS-only mode).

Error 400 Bad Request during the handshake. Nginx is not passing the Upgrade header to the backend: check for proxy_http_version 1.1 and proxy_set_header Upgrade. Also note that plain HTTP/2 proxying does not support the WebSocket upgrade, so keep the WebSocket endpoint on HTTP/1.1.
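If the HTTP/2 listener keeps interfering with handshakes, one option is a dedicated hostname for the push server with no http2 on the listen directive. A sketch, where ws.example.ru is a hypothetical subdomain chosen for illustration:

```nginx
server {
    listen 443 ssl;   # no http2: the WebSocket upgrade stays on HTTP/1.1
    server_name ws.example.ru;

    # ... SSL settings ...

    location /bitrix/subws/ {
        proxy_pass http://127.0.0.1:9011;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 3600s;
        proxy_buffering off;
    }
}
```

This relies on the same map $http_upgrade $connection_upgrade block shown earlier, and the serverUrl in BX.Pull.connect would need to point at the new hostname.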