Setting up 1C-Bitrix database sharding

Partitioning splits a table within a single server; sharding distributes data across multiple database servers. When one MySQL server can no longer handle the load (CPU at 100%, disk I/O not keeping up), horizontal scaling via sharding is an option. For Bitrix this is non-trivial, because the core expects a single database.

Built-in mechanism: cluster module

Bitrix ships a cluster module (available in the "Business" and "Enterprise" editions). The module supports:

  • Master-slave replication: writes go to the master, reads to one or more slaves. Not sharding in the pure sense, but it offloads read traffic.
  • Moving modules to a separate DB: a specific module (e.g., search or statistic) can use its own database on another server.

Setting up master-slave replication via the cluster module:

  1. Configure MySQL/MariaDB replication via standard tools (GTID or position-based)
  2. In Bitrix admin: Settings → Web cluster → Databases → Add
  3. Specify the slave server's parameters: host, port, username, password
  4. Bitrix then automatically routes SELECT queries to the slave and INSERT/UPDATE/DELETE to the master
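
The replication setup in step 1 can be sketched in SQL. Hostnames, user names, and passwords below are illustrative; the syntax targets MySQL 8.0.23+ (older versions use CHANGE MASTER TO / START SLAVE instead):

```sql
-- my.cnf prerequisites on both servers (illustrative):
--   server_id = <unique per server>
--   gtid_mode = ON
--   enforce_gtid_consistency = ON

-- On the master: create a dedicated replication user
CREATE USER 'repl'@'%' IDENTIFIED BY 'strong_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave: point it at the master using GTID auto-positioning
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'master.db.local',
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = 'strong_password',
  SOURCE_AUTO_POSITION = 1;
START REPLICA;
```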

The module tracks replication lag (Seconds_Behind_Master) and, when the threshold is exceeded, switches reads back to the master.
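
The routing decision described above can be illustrated with a minimal Python sketch. This is not Bitrix's actual code: the threshold value and the data structure are assumptions, and a real implementation would query SHOW REPLICA STATUS instead of reading a dict.

```python
# Illustrative sketch of lag-aware read routing (not Bitrix's actual logic).
LAG_THRESHOLD_SEC = 5  # hypothetical threshold; configurable in the real module

def pick_read_server(replicas: list[dict], threshold: int = LAG_THRESHOLD_SEC) -> str:
    """Return a replica whose Seconds_Behind_Master is within the threshold,
    or 'master' if none qualifies (lagging replica or broken replication)."""
    for r in replicas:
        lag = r.get("seconds_behind_master")  # None means replication is not running
        if lag is not None and lag <= threshold:
            return r["name"]
    return "master"

replicas = [
    {"name": "slave1", "seconds_behind_master": 12},  # lagging, skipped
    {"name": "slave2", "seconds_behind_master": 1},   # healthy
]
print(pick_read_server(replicas))  # -> slave2
print(pick_read_server([{"name": "slave1", "seconds_behind_master": None}]))  # -> master
```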

Module-level sharding

The cluster module allows moving a specific module's tables to a separate DB. In practice:

  • statistic module: b_stat_* tables can generate around 80% of the INSERT load on a typical site. Moving them to a separate server offloads the main DB.
  • search module: b_search_* tables are heavy due to full-text search. An alternative is moving search to Elasticsearch.
  • forum / blog modules: if they are active, they can be isolated as well.

Setup: Web cluster → Databases → [server] → Modules, then select the module to move. Bitrix redirects queries against that module's tables to the specified server.

Horizontal data sharding

Full sharding, i.e., splitting one table by key (e.g., items with IDs 1–100,000 on server A and 100,001–200,000 on server B), is not supported by Bitrix out of the box. Both the D7 ORM and the old API (CIBlockElement::GetList) work with a single DB connection.

An implementation is possible, but it requires:

  • Proxy layer (ProxySQL, Vitess) that routes queries by sharding rules transparently to the application
  • Custom DB class: inherit from Bitrix\Main\DB\MysqliConnection and add the routing logic
  • Limitations: JOINs across shards are impossible, and aggregate queries (COUNT, SUM) must be merged from multiple shards at the application level
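
The routing and aggregation points above can be shown in a minimal Python sketch. The shard names and ID ranges follow the example split mentioned earlier; a real implementation would hold DB connections per shard instead of in-memory values.

```python
# Range-based shard routing, following the example split of IDs 1-100,000 on
# server A and 100,001-200,000 on server B (names here are illustrative).
SHARD_RANGES = [("shard_a", 1, 100_000), ("shard_b", 100_001, 200_000)]

def shard_for_id(item_id: int) -> str:
    """Map an item ID to the shard that owns its range."""
    for name, lo, hi in SHARD_RANGES:
        if lo <= item_id <= hi:
            return name
    raise ValueError(f"no shard covers id {item_id}")

def total_count(per_shard_counts: dict[str, int]) -> int:
    """Aggregates like COUNT cannot run in one SQL query across shards:
    each shard is counted separately and the results are merged in the app."""
    return sum(per_shard_counts.values())

print(shard_for_id(42))       # -> shard_a
print(shard_for_id(150_000))  # -> shard_b
print(total_count({"shard_a": 90_000, "shard_b": 35_000}))  # -> 125000
```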

In practice, full horizontal sharding is rarely used with Bitrix. The more common combination is master-slave replication, offloading heavy modules, and caching (Redis/Memcached).

Caching as sharding alternative

Before sharding, ensure caching is configured:

  • Memcached/Redis as the Bitrix cache storage ('cache' => ['type' => 'redis'] in .settings.php)
  • Tagged cache, so that invalidation is targeted rather than site-wide
  • Composite cache, so that HTML pages for anonymous users don't hit the DB at all
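
As a sketch, the Redis setting from the list above expands in bitrix/.settings.php roughly as follows. The exact key structure varies between Bitrix versions, and the host/port values are placeholders; verify against your version's documentation:

```php
<?php
// Fragment of bitrix/.settings.php (approximate structure; version-dependent)
return [
    'cache' => [
        'value' => [
            'type' => 'redis',
            'redis' => [
                'host' => '127.0.0.1', // placeholder
                'port' => 6379,        // placeholder
            ],
        ],
        'readonly' => true,
    ],
    // ... other settings (connections, crypto, etc.) stay as they were
];
```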

On roughly 90% of projects, properly configured caching combined with master-slave replication eliminates the need for sharding.

What we configure

  • Installing and configuring the cluster module
  • Configuring MySQL/MariaDB master-slave replication
  • Moving heavy modules (statistic, search) to a separate DB server
  • Setting up replication lag monitoring
  • Configuring Redis/Memcached for caching
  • Load testing: verifying query distribution between the servers