Optimizing queries to 1C-Bitrix highload blocks

HighLoad blocks (the highloadblock module) were introduced in Bitrix as a response to the performance problems of information blocks when storing large volumes of non-hierarchical data. Each HL block is a separate MySQL table whose name is set in the block settings (the examples below use names like b_uts_my_highload). Unlike information blocks, HL blocks carry no complex JOIN dependencies and should in theory perform well. In practice, slow queries against HL blocks are just as common, caused by missing indexes, incorrect API usage, and poor data schema design.

HL Block Architecture and Growth Points

An HL block table contains an ID column plus UF_* columns (user-defined fields). When an HL block is created through the Bitrix interface, no indexes are created automatically other than the primary key on ID. This means any filter on a field other than ID results in a full table scan (type = ALL in EXPLAIN).

Example of a problematic query:

use Bitrix\Highloadblock\HighloadBlockTable;
use Bitrix\Main\Type\DateTime;

// Note: HighloadBlockTable::getList() lists the HL blocks themselves.
// To query rows, compile the block's entity and use its data class.
$hlblock = HighloadBlockTable::getById($hlblockId)->fetch();
$entityClass = HighloadBlockTable::compileEntity($hlblock)->getDataClass();

$result = $entityClass::getList([
    'filter' => [
        '=UF_STATUS'      => 'active',
        '>=UF_CREATED_AT' => (new DateTime())->add('-30 days'),
    ],
    'select' => ['ID', 'UF_NAME', 'UF_VALUE'],
    'order'  => ['UF_CREATED_AT' => 'DESC'],
    'limit'  => 50,
]);

Without an index on UF_STATUS and UF_CREATED_AT, this query scans the entire table. With 500,000 records — 300–800 ms per query.
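The full scan is easy to confirm before touching anything. A rough SQL equivalent of the ORM query above (table and column names taken from the example; actual EXPLAIN output depends on the MySQL version and data distribution):

```sql
EXPLAIN SELECT ID, UF_NAME, UF_VALUE
FROM b_uts_my_highload
WHERE UF_STATUS = 'active'
  AND UF_CREATED_AT >= NOW() - INTERVAL 30 DAY
ORDER BY UF_CREATED_AT DESC
LIMIT 50;
-- Before the index: type = ALL and rows close to the table size,
-- i.e. a full table scan plus a filesort for the ORDER BY
```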

Indexes for HL Blocks

Bitrix provides no UI for creating HL block indexes. Indexes must be created directly via SQL or through migrations. For the example above:

ALTER TABLE b_uts_my_highload
  ADD INDEX idx_status_created (UF_STATUS, UF_CREATED_AT);

Rules for choosing indexes:

  • Fields in filter are candidates for indexing, but order matters: a field with lower cardinality (e.g., UF_STATUS with 3 values) is placed first only if it is used independently in queries; otherwise the composite index starts with the high-cardinality field
  • Fields in order must be in the index or a covering index
  • Replace SELECT * with an explicit select listing only required fields — this reduces data volume and sometimes enables a covering index
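Applied to the earlier example, the third rule can yield a covering index. One caveat: Bitrix creates string UF fields as TEXT, which MySQL can only prefix-index, and a prefix index cannot cover a query, so the sketch below works as written only if the columns are (or are converted to) VARCHAR of a fixed length:

```sql
ALTER TABLE b_uts_my_highload
  ADD INDEX idx_status_created_cover
    (UF_STATUS, UF_CREATED_AT, UF_NAME, UF_VALUE);
-- EXPLAIN should then show "Using index": the query is answered
-- entirely from the index without reading table rows
```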

Caching HL Query Results

HL blocks do not have automatic tag-based caching like information blocks. When fetching HL data through getList(), caching must be added explicitly, for example via ManagedCache:

$cache = \Bitrix\Main\Application::getInstance()->getManagedCache();
$cacheKey = 'hl_my_block_active_' . md5(serialize($filter));

// read() returns true on a cache hit; the third argument is the
// cache directory that invalidation will clear later
if (!$cache->read(600, $cacheKey, 'hl_my_block')) {
    $result = MyHLTable::getList(['filter' => $filter, 'select' => $select]);
    $data = $result->fetchAll();
    $cache->set($cacheKey, $data);
} else {
    $data = $cache->get($cacheKey);
}

Invalidation of the hl_my_block cache directory (via $cache->cleanDir('hl_my_block')) is triggered in the OnAfterAdd / OnAfterUpdate handlers of the HL block entity.
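One way to wire up the invalidation, assuming MyHLTable is a hand-written DataManager descendant (for compiled HL entities the same logic is attached through EventManager handlers instead):

```php
class MyHLTable extends \Bitrix\Main\Entity\DataManager
{
    // ... getTableName(), getMap() ...

    public static function onAfterAdd(\Bitrix\Main\Entity\Event $event)
    {
        static::clearListCache();
    }

    public static function onAfterUpdate(\Bitrix\Main\Entity\Event $event)
    {
        static::clearListCache();
    }

    protected static function clearListCache()
    {
        // Drops every cached key in the 'hl_my_block' directory at once
        \Bitrix\Main\Application::getInstance()
            ->getManagedCache()->cleanDir('hl_my_block');
    }
}
```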

Pagination on Large Tables

Standard offset-based pagination degrades on large data sets: LIMIT 50 OFFSET 50000 forces MySQL to read and discard 50,000 rows before returning the page. For HL blocks with tens of thousands of records and infinite scroll or a large number of pages, use cursor-based pagination:

// Use the last ID instead of offset
$result = MyHLTable::getList([
    'filter' => ['=UF_STATUS' => 'active', '>ID' => $lastId],
    'order'  => ['ID' => 'ASC'],
    'limit'  => 50,
]);

This only works with a monotonic sort order on an indexed field (here the primary key ID).
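A sketch of the full paging loop, reusing the hypothetical MyHLTable entity from the caching example:

```php
// Walk the table page by page, carrying the last seen ID as the cursor
$lastId = 0;
do {
    $rows = MyHLTable::getList([
        'filter' => ['=UF_STATUS' => 'active', '>ID' => $lastId],
        'order'  => ['ID' => 'ASC'],
        'limit'  => 50,
    ])->fetchAll();

    foreach ($rows as $row) {
        $lastId = (int)$row['ID'];
        // ... process $row ...
    }
} while (count($rows) === 50); // a short page means we reached the end
```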

Case Study: Order Event Log in an HL Block

An online store used an HL block as an order event log: 3.2 million records, 8 fields. Querying the history of a specific order (a filter on UF_ORDER_ID) without an index took 1.2 s. Adding an index on UF_ORDER_ID:

ALTER TABLE b_uts_order_log ADD INDEX idx_order_id (UF_ORDER_ID);

Query time: 1.2 s → 2 ms. Additionally, summary statistics queries over a period (aggregation by UF_EVENT_TYPE for the month) were moved to a separate cached report rebuilt nightly, which removed that load from MySQL during business hours.
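The nightly report boils down to a single aggregate query along these lines (UF_DATE is a hypothetical event-timestamp field; the case study does not name the date column):

```sql
SELECT UF_EVENT_TYPE, COUNT(*) AS events
FROM b_uts_order_log
WHERE UF_DATE >= CURDATE() - INTERVAL 1 MONTH
GROUP BY UF_EVENT_TYPE;
-- Run once at night by the report job; daytime traffic reads
-- the cached result instead of aggregating 3.2 million rows
```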

Denormalization and HL Blocks as Lookup Tables

HL blocks are often used as lookup tables (payment types, statuses, regions). In this case, the main queries against the information block perform a JOIN with the HL block to retrieve the text value. The correct solution is denormalization: store the text value directly in the information block field and update it when the lookup table changes via an agent. This eliminates the JOIN in 95% of read queries.
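A sketch of such a sync agent (PaymentTypeTable, the property codes, and PAYMENT_IBLOCK_ID are illustrative names, not part of the case described above):

```php
function SyncPaymentTypeNames(): string
{
    // Load the HL lookup into a map: external code => display name
    $map = [];
    $rs = PaymentTypeTable::getList(['select' => ['UF_XML_ID', 'UF_NAME']]);
    while ($row = $rs->fetch()) {
        $map[$row['UF_XML_ID']] = $row['UF_NAME'];
    }

    // Copy the text value into a plain iblock property,
    // so reads no longer need a JOIN with the lookup table
    $el = \CIBlockElement::GetList([], ['IBLOCK_ID' => PAYMENT_IBLOCK_ID],
        false, false, ['ID', 'PROPERTY_PAYMENT_TYPE']);
    while ($item = $el->Fetch()) {
        $code = $item['PROPERTY_PAYMENT_TYPE_VALUE'];
        if (isset($map[$code])) {
            \CIBlockElement::SetPropertyValuesEx(
                $item['ID'], PAYMENT_IBLOCK_ID,
                ['PAYMENT_TYPE_TEXT' => $map[$code]]
            );
        }
    }

    return 'SyncPaymentTypeNames();'; // re-schedule the agent
}
```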

Optimization Stages

  • Query audit (slow query log, Bitrix panel): 0.5–1 day
  • Adding indexes: 0.5 day
  • Caching refactoring: 1–3 days
  • Schema refactoring (denormalization, cursor pagination): 2–5 days

Always start with EXPLAIN — without it there is no way to determine whether an index is needed and which one.