Configuring an Elasticsearch Cluster for 1C-Bitrix
A single-node Elasticsearch instance is a single point of failure: whenever the service restarts, site search goes down, queries return errors, and conversion drops. On high-load projects with 500+ concurrent users, a single node also cannot handle the load: 1C indexing runs in parallel with search queries and competes with them for resources. A three-node cluster solves both problems.
Minimum Fault-Tolerant Configuration
Three nodes are the minimum for quorum: when one node goes down, the remaining two form a majority and continue operating. Two nodes cause a split-brain on network partition — each considers itself the master.
Recommended role assignment:
| Node | Roles | Memory | Purpose |
|---|---|---|---|
| es-01 | master, data | 16 GB | Master + data |
| es-02 | master, data | 16 GB | Standby master + data |
| es-03 | master, data, ingest | 16 GB | Standby master + data + preprocessing |

All three nodes are master-eligible; with only two master-eligible nodes, losing either one would leave the cluster unable to elect a master, which defeats the point of the quorum described above.
For large installations (>50 million documents), dedicated master-only nodes are separated from data nodes — they do not participate in search and indexing, only managing the cluster.
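For illustration, a dedicated master-only node could be configured like this (the node name and address are placeholders, not part of the cluster above):

```yaml
# es-master-01 -- dedicated master-only node: manages cluster state,
# holds no data and serves no search or indexing requests
cluster.name: bitrix-search
node.name: es-master-01
node.roles: [ master ]
network.host: 10.0.0.21
```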
elasticsearch.yml Configuration
Configure elasticsearch.yml on each node:
```yaml
# es-01
cluster.name: bitrix-search
node.name: es-01
node.roles: [ master, data ]

network.host: 10.0.0.11
http.port: 9200
transport.port: 9300

discovery.seed_hosts:
  - 10.0.0.11:9300
  - 10.0.0.12:9300
  - 10.0.0.13:9300
cluster.initial_master_nodes:
  - es-01
  - es-02

# Security
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12

# Performance
indices.memory.index_buffer_size: 20%
thread_pool.write.queue_size: 500
```
`cluster.initial_master_nodes` is needed only for the initial bootstrap. Once the cluster has formed, remove this line from every node; otherwise a restart can split the cluster.
Configuring Sharding for Bitrix Indexes
By default (since Elasticsearch 7), an index gets a single primary shard. For a catalog with 1+ million documents, more are needed:
```json
PUT /bitrix_catalog
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1,
    "refresh_interval": "5s"
  }
}
```
number_of_replicas: 1 means each shard is copied to a second node. When one node goes down, replicas are automatically promoted to primary, and search continues without interruption.
refresh_interval: 5s instead of the default 1s reduces indexing load during bulk updates from 1C. New documents will appear in search with up to a 5-second delay — acceptable for most catalogs.
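Going further, during a full reindex from 1C the refresh can be disabled entirely and re-enabled afterwards. A common pattern, sketched in the same console-request style as above (index name as in the earlier example):

```json
PUT /bitrix_catalog/_settings
{ "refresh_interval": "-1" }

// ... run the bulk import from 1C ...

POST /bitrix_catalog/_refresh

PUT /bitrix_catalog/_settings
{ "refresh_interval": "5s" }
```

With refresh disabled, documents are not visible in search until the explicit refresh, so this only suits maintenance windows or initial loads.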
Load Balancing Requests from Bitrix
Bitrix connects to Elasticsearch through a single host. To distribute requests across all nodes, place a load balancer in front of the cluster:
Option 1 — nginx upstream:
```nginx
upstream elasticsearch {
    least_conn;
    server 10.0.0.11:9200;
    server 10.0.0.12:9200;
    server 10.0.0.13:9200;
}

server {
    listen 9201;
    location / {
        proxy_pass http://elasticsearch;
    }
}
```
Bitrix connects to localhost:9201. Nginx distributes requests using the least-connections method.
Option 2 — coordinating node (for loads of 1000+ rps): a dedicated node with node.roles: [] accepts all HTTP requests, distributes sub-requests to data nodes, and aggregates results. It stores no data and does not participate in master elections.
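A coordinating-only node could be sketched like this (the node name and address are placeholders):

```yaml
# es-coord-01 -- coordinating-only node: routes requests and
# aggregates results; stores no data, never elected master
cluster.name: bitrix-search
node.name: es-coord-01
node.roles: [ ]
network.host: 10.0.0.20
discovery.seed_hosts:
  - 10.0.0.11:9300
  - 10.0.0.12:9300
  - 10.0.0.13:9300
```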
Monitoring Cluster Health
```shell
# Cluster status (green/yellow/red)
curl -s 'http://10.0.0.11:9200/_cluster/health?pretty'

# Shard distribution across nodes
curl -s 'http://10.0.0.11:9200/_cat/shards?v'

# Node load (URL is quoted so the shell does not treat & as a job separator)
curl -s 'http://10.0.0.11:9200/_cat/nodes?v&h=name,heap.percent,cpu,load_1m'
```
A yellow status means some replicas are unassigned (normal on a single node). A red status means primary shards are lost and part of the data is unavailable; it requires immediate intervention.
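For alerting, the status field can be extracted from the _cluster/health response without extra dependencies. A minimal sketch; the function name is illustrative, and in production the input would come from curl rather than a canned response:

```shell
# health_status: reads _cluster/health JSON on stdin and prints the
# value of the "status" field (green/yellow/red)
health_status() {
  sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p'
}

# Canned example; live usage would be:
#   curl -s 'http://10.0.0.11:9200/_cluster/health' | health_status
echo '{"cluster_name":"bitrix-search","status":"green","number_of_nodes":3}' | health_status
# prints: green
```

A cron job can compare the printed value against "green" and page on anything else.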
Timeline
Deploying a three-node cluster with security configuration, load balancer, and monitoring — 2–4 days depending on the availability of existing infrastructure.