Developing Filtering with Automatic Product Count in 1C-Bitrix
The standard catalog.smart.filter component in 1C-Bitrix calculates product counts with a separate SQL query against b_iblock_element, fully recalculating every time a filter parameter changes. With a catalogue of 50,000+ SKUs this adds 800–1,500 ms to every filter interaction, so the user sees a frozen interface. The problem is compounded when the trade catalogue (catalog module) is in use: the query also joins b_catalog_price and multiple properties stored in b_iblock_element_property.
How the Smart Filter Works and Where It Breaks
The bitrix:catalog.smart.filter component with the SHOW_PRODUCTS_COUNT parameter enabled executes an aggregating query of the form:
SELECT COUNT(DISTINCT BE.ID)
FROM b_iblock_element BE
INNER JOIN b_iblock_element_property BEP ON BE.ID = BEP.IBLOCK_ELEMENT_ID
WHERE BE.IBLOCK_ID = ? AND BE.ACTIVE = 'Y' AND BEP.IBLOCK_PROPERTY_ID = ? AND BEP.VALUE = ?
With ten simultaneously selected filter properties this becomes a chain of JOINs or subqueries that MySQL executes without composite indexes to lean on. EXPLAIN shows access type ALL or index instead of ref — a full table (or full index) scan.
The second problem is cache invalidation. The standard cache tag bitrix:catalog is flushed on any change to any element of the infoblock, including stock quantity updates. Stores with frequent warehouse updates constantly get a cold-start filter.
Solution Architecture with Automatic Count
We move the counting from SQL aggregation into a denormalised counters table and build the AJAX mechanism around it.
Denormalisation structure:
A separate table catalog_filter_counts is created (or a HighLoad block if admin UI is required):
CREATE TABLE catalog_filter_counts (
iblock_id INT NOT NULL,
prop_id INT NOT NULL,
prop_value VARCHAR(255) NOT NULL,
section_id INT NOT NULL DEFAULT 0,
cnt INT NOT NULL DEFAULT 0,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
UNIQUE KEY idx_filter (iblock_id, section_id, prop_id, prop_value)
);
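If the table is accessed through the D7 ORM (as the AJAX component below does via CatalogFilterCountsTable), a minimal entity might look like the following sketch; the Local\Catalog namespace and the decision to make the four lookup columns a composite primary key are assumptions, not part of any standard Bitrix API:

```php
<?php
// Minimal D7 ORM entity for the counters table (sketch).
// Class name, namespace, and primary-key choice are illustrative assumptions.
namespace Local\Catalog;

use Bitrix\Main\ORM\Data\DataManager;
use Bitrix\Main\ORM\Fields\IntegerField;
use Bitrix\Main\ORM\Fields\StringField;
use Bitrix\Main\ORM\Fields\DatetimeField;

class CatalogFilterCountsTable extends DataManager
{
    public static function getTableName(): string
    {
        return 'catalog_filter_counts';
    }

    public static function getMap(): array
    {
        return [
            new IntegerField('IBLOCK_ID', ['primary' => true]),
            new IntegerField('SECTION_ID', ['primary' => true]),
            new IntegerField('PROP_ID', ['primary' => true]),
            new StringField('PROP_VALUE', ['primary' => true]),
            new IntegerField('CNT'),
            new DatetimeField('UPDATED_AT'),
        ];
    }
}
```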
Counters are recalculated via a Bitrix agent (CAgent) on a schedule — every 5–15 minutes, or via the OnAfterIBlockElementUpdate event handler for critical changes.
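The scheduled recalculation can be sketched as a single aggregating upsert, assuming the idx_filter index is UNIQUE (otherwise the agent should truncate and refill the table inside a transaction); the IBLOCK_ID = 5 literal and the single-value-property simplification are illustrative:

```sql
-- Sketch of the agent's recalculation for single-value list properties.
-- b_iblock_section_element is the standard element-to-section binding table.
INSERT INTO catalog_filter_counts (iblock_id, prop_id, prop_value, section_id, cnt)
SELECT BE.IBLOCK_ID,
       BEP.IBLOCK_PROPERTY_ID,
       BEP.VALUE,
       IFNULL(SE.IBLOCK_SECTION_ID, 0),
       COUNT(DISTINCT BE.ID)
FROM b_iblock_element BE
INNER JOIN b_iblock_element_property BEP ON BEP.IBLOCK_ELEMENT_ID = BE.ID
LEFT JOIN b_iblock_section_element SE ON SE.IBLOCK_ELEMENT_ID = BE.ID
WHERE BE.IBLOCK_ID = 5 AND BE.ACTIVE = 'Y'
GROUP BY BE.IBLOCK_ID, BEP.IBLOCK_PROPERTY_ID, BEP.VALUE, IFNULL(SE.IBLOCK_SECTION_ID, 0)
ON DUPLICATE KEY UPDATE cnt = VALUES(cnt);
```

Running this in one statement keeps the counters consistent at any point in time: readers either see the old count or the new one, never a half-filled table.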
AJAX filter component:
Instead of the standard smart.filter, a custom component based on bitrix:main.ui.filter is used. When a checkbox changes, the frontend sends a request to a router component:
// component.php
$filterState = $this->request->getPost('filter_state');
$counts = CatalogFilterCountsTable::getList([
'filter' => [
'=IBLOCK_ID' => $ibId,
'=SECTION_ID' => $sectionId,
'@PROP_VALUE' => $filterState['values'],
],
'select' => ['PROP_ID', 'PROP_VALUE', 'CNT'],
])->fetchAll();
The response is returned as a JSON object; the frontend updates counters in the DOM without a page reload.
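The JSON step can be sketched as follows; the "PROP_ID:PROP_VALUE" key format and the use of RestartBuffer() to discard already-rendered template output are implementation choices assumed here, not part of the standard component API:

```php
// Sketch: serialise the counters and respond with bare JSON.
use Bitrix\Main\Web\Json;

$response = [];
foreach ($counts as $row) {
    // Key format "propId:value" is an assumption, convenient for DOM lookups.
    $response[$row['PROP_ID'] . ':' . $row['PROP_VALUE']] = (int)$row['CNT'];
}

$GLOBALS['APPLICATION']->RestartBuffer();  // drop everything rendered so far
header('Content-Type: application/json');
echo Json::encode($response);
\CMain::FinalActions();                    // let Bitrix finish the hit cleanly
die();
```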
Caching Counts
The denormalised table is already a cache in itself. But to reduce database load under heavy traffic, a second layer is added via Bitrix\Main\Data\Cache with a cache key bound to the specific infoblock and section:
use Bitrix\Main\Data\Cache;

$cache = Cache::createInstance();
$cacheId = 'filter_counts_' . $ibId . '_' . $sectionId;
if ($cache->initCache(3600, $cacheId, '/catalog/filter/')) {
    $counts = $cache->getVars();
} elseif ($cache->startDataCache()) {
    $counts = /* DB query */;
    $cache->endDataCache($counts);
}
Invalidation occurs only on actual assortment changes, not on every price or stock update.
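Event-driven invalidation can be wired up roughly like this; iblock events are old-kernel events, hence addEventHandlerCompatible, and the changed-fields check is a deliberately simplified assumption (a production handler would compare against the element's previous state):

```php
// Sketch: clear the counters cache only when the assortment itself changes.
use Bitrix\Main\EventManager;
use Bitrix\Main\Data\Cache;

EventManager::getInstance()->addEventHandlerCompatible(
    'iblock',
    'OnAfterIBlockElementUpdate',
    function (array $fields): void {
        // Stock/price sync updates normally touch neither ACTIVE nor sections,
        // so they fall through without flushing the cache.
        if (isset($fields['ACTIVE']) || isset($fields['IBLOCK_SECTION_ID'])) {
            Cache::createInstance()->cleanDir('/catalog/filter/');
        }
    }
);
```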
Case Study: Building Materials Online Store
Client — a store with a catalogue of 80,000 SKUs, 12 filter properties (brand, size, colour, material, etc.), and 1C integration via the d7 exchange module. The standard smart.filter with SHOW_PRODUCTS_COUNT = Y delivered an average response time of 2.3 s on category pages. After each 1C exchange (every 30 minutes) the cache was flushed, and for the first 5 minutes the site ran under load with no cache.
What was done:
- Disabled the standard SHOW_PRODUCTS_COUNT
- Implemented a denormalised counters table with recalculation via an agent every 10 minutes
- Developed an AJAX component based on bitrix:catalog.section plus a custom bitrix:main.ui.filter
- Added a server-side counter cache with TTL 600 s, invalidated only on assortment changes (not prices and stock)
Result: filter response time dropped to 80–120 ms, MySQL load during peak hours fell by half. The 1C exchange stopped affecting filter performance.
Integration with Trade Catalogue and Multiple Prices
A separate case: catalogues with multiple price types (b_catalog_price) and filtering by price range. The standard filter adds a JOIN to b_catalog_price with extra conditions on CATALOG_GROUP_ID. Here we implement a separate price range counter with quantisation (10 buckets by range) — this allows building a price slider without running MIN/MAX aggregation on every request.
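The bucketed counting can be sketched in one query; the 0–100,000 range, CATALOG_GROUP_ID = 1 (base price type), and the 10-bucket width are all illustrative values that would come from a periodic MIN/MAX pass, not hard-coded:

```sql
-- Sketch: pre-count price buckets for the slider (10 equal-width buckets).
SET @min = 0, @max = 100000, @w = (@max - @min) / 10;

SELECT LEAST(FLOOR((CP.PRICE - @min) / @w), 9) AS bucket,
       COUNT(DISTINCT CP.PRODUCT_ID) AS cnt
FROM b_catalog_price CP
WHERE CP.CATALOG_GROUP_ID = 1
GROUP BY bucket;
```

The frontend then renders the slider histogram from at most ten rows instead of aggregating the whole price table on every request.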
Timeline and Phases
Developing a filter with automatic count includes auditing the current infoblock structure and properties, designing the denormalisation schema, developing the recalculation agent and AJAX component, configuring caching, and load testing.
| Catalogue Scale | Filter Complexity | Development Time |
|---|---|---|
| up to 20,000 SKUs | up to 8 properties | 3–5 days |
| 20,000–100,000 SKUs | up to 15 properties | 5–10 days |
| 100,000+ SKUs / HighLoad | any | 10–20 days |
Load testing is carried out with ApacheBench (ab) or wrk against a test copy of the production database — without it the result is unpredictable.