# SQL Query Optimization in 1C-Bitrix
A loaded Bitrix site generates 200 to 1000 SQL queries per page. Most are repetitive, redundant, or pull every column with `SELECT *` instead of the specific fields the page needs. Before buying a more expensive server, it makes sense to figure out which queries are slowing things down and why.
## Finding Problem Queries
First step: enable the Bitrix SQL tracker. In the performance panel (`/bitrix/admin/perfmon_panel.php`) switch to profiling mode, or do it programmatically:
```php
// Start tracking SQL on the current D7 connection
$connection = \Bitrix\Main\Application::getConnection();
$tracker = $connection->startTracker();

// ... your code ...

$connection->stopTracker();
foreach ($tracker->getQueries() as $query) {
    // getTime() returns seconds as a float
    echo $query->getSql() . ' — ' . round($query->getTime() * 1000, 2) . ' ms' . PHP_EOL;
}
```
The tracker will show all queries for a request, their timing and call stacks. Look for:
- queries over 50 ms — they noticeably slow the page
- duplicates — the same query repeated 10–50 times per page
- queries without `WHERE` on large tables (`b_iblock_element`, `b_sale_order_props`)
The second source is the slow query log in MySQL/MariaDB. Enable it via `long_query_time = 0.5` in `my.cnf`. It shows the real picture under production load, not just what you see during manual testing.
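A minimal slow-log configuration might look like this (file path and threshold are examples — adjust to your environment):

```ini
# my.cnf — [mysqld] section
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 0.5
log_queries_not_using_indexes = 1
```

After a few days of collection, `mysqldumpslow -s t -t 10 /var/log/mysql/slow.log` will summarize the top 10 queries by total time.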
## Main Problem Classes
**Queries without indexes.** `b_iblock_element` contains anywhere from 10,000 to 1,000,000 rows. The query `SELECT * FROM b_iblock_element WHERE IBLOCK_ID = 5 AND ACTIVE = 'Y' ORDER BY SORT` without an index on `(IBLOCK_ID, ACTIVE, SORT)` does a full table scan. Check with `EXPLAIN SELECT ...` — if `type` is `ALL`, no index is being used.
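The check in practice might look like this (row counts and index name are illustrative):

```sql
EXPLAIN SELECT * FROM b_iblock_element
WHERE IBLOCK_ID = 5 AND ACTIVE = 'Y'
ORDER BY SORT;
-- Before the index: type = ALL, rows ≈ 500000 — full table scan
-- After ADD INDEX ix_iblock_active_sort (IBLOCK_ID, ACTIVE, SORT):
-- type = ref, key = ix_iblock_active_sort, rows ≈ 1200
```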
Bitrix creates indexes on install, but not when you add custom fields stored in `b_uts_iblock_N_single` (UTS tables for custom field types). These indexes have to be added manually.
**Excessive field selection.** `CIBlockElement::GetList()` by default joins multiple tables and returns dozens of fields, including `DETAIL_TEXT` that can weigh megabytes. If the page only needs `ID`, `NAME` and `PREVIEW_PICTURE`, specify the select list explicitly:
```php
$res = CIBlockElement::GetList(
    ['SORT' => 'ASC'],                    // order
    ['IBLOCK_ID' => 5, 'ACTIVE' => 'Y'],  // filter
    false,                                // no grouping
    ['nPageSize' => 20],                  // pagination
    ['ID', 'NAME', 'PREVIEW_PICTURE']     // only the fields we need
);
```
Without an explicit select list, the query includes `LEFT JOIN b_iblock_element_iprop` and other tables the page doesn't need.
**N+1 problem.** The classic case: load 20 elements, then in a loop load properties for each one via a separate query. The result is 21 queries instead of 1–2. In the old API this is solved by including properties in the select fields; in D7, via `fetchCollection()` with `fill()` or by pre-loading relations through `select()`.
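A sketch of the old-API fix — properties are pulled in the same query via the select list (the property codes `COLOR` and `SIZE` here are hypothetical):

```php
// One query instead of 21: properties are fetched via PROPERTY_* select fields
$res = CIBlockElement::GetList(
    ['SORT' => 'ASC'],
    ['IBLOCK_ID' => 5, 'ACTIVE' => 'Y'],
    false,
    ['nPageSize' => 20],
    ['ID', 'NAME', 'PROPERTY_COLOR', 'PROPERTY_SIZE'] // hypothetical codes
);
while ($row = $res->Fetch()) {
    // $row['PROPERTY_COLOR_VALUE'] is already populated — no extra query per element
}
```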
**Repeated queries for the same data.** Site settings, user groups, infoblock sections — re-queried on every page load. The standard Bitrix cache should cover this, but if caching is disabled or the tagged cache is reset too often, the queries hit the database every time.
## Working with Indexes
Adding missing indexes is the fastest fix with the biggest effect:
| Table | Recommended Index | When Needed |
|---|---|---|
| `b_iblock_element` | `(IBLOCK_ID, ACTIVE, SORT)` | Almost always |
| `b_iblock_element_property` | `(IBLOCK_PROPERTY_ID, VALUE)` | On property filtering |
| `b_sale_order` | `(USER_ID, STATUS_ID, DATE_INSERT)` | Customer account |
| `b_sale_basket` | `(ORDER_ID, FUSER_ID)` | Cart, checkout |
| `b_search_content_stem` | `(PARAM2)` | Search on large catalog |
An index can be added via plain SQL or through the Bitrix connection object:

```php
$connection = \Bitrix\Main\Application::getConnection();
$connection->queryExecute(
    "ALTER TABLE b_iblock_element_property
     ADD INDEX ix_prop_val (IBLOCK_PROPERTY_ID, VALUE(64))"
);
```
Be careful with `VALUE(64)`: string fields are indexed by prefix only. For numeric values stored as `VARCHAR`, consider a virtual column with a type cast.
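A sketch of the virtual-column approach for MySQL 5.7+ / MariaDB 10.2+ (the column and index names are examples; this fits properties whose values are consistently numeric, since non-numeric strings cast to 0):

```sql
-- Generated column casting the VARCHAR value to a number, plus an index on it
ALTER TABLE b_iblock_element_property
  ADD COLUMN VALUE_INT BIGINT
    GENERATED ALWAYS AS (CAST(VALUE AS SIGNED)) VIRTUAL,
  ADD INDEX ix_prop_val_int (IBLOCK_PROPERTY_ID, VALUE_INT);
```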
## D7 ORM Optimization
D7 ORM allows precise query control. Compare:
```php
// Bad: loads all fields for every matching element, no limit
$result = \Bitrix\Iblock\ElementTable::getList([
    'filter' => ['IBLOCK_ID' => 5, 'ACTIVE' => 'Y'],
]);

// Better: only the needed fields, explicit order, limit, and query cache
$result = \Bitrix\Iblock\ElementTable::getList([
    'select' => ['ID', 'NAME', 'PREVIEW_PICTURE_ID'],
    'filter' => ['IBLOCK_ID' => 5, 'ACTIVE' => 'Y'],
    'order'  => ['SORT' => 'ASC'],
    'limit'  => 20,
    'offset' => 0,
    'cache'  => ['ttl' => 3600],
]);
```
The `cache` parameter in the ORM query enables built-in caching at the query level; invalidation happens via tagged cache when elements change.
## Cache as Query Replacement
If data changes once per hour, there is no point hitting the database on every request. Use `\Bitrix\Main\Data\Cache`:
```php
$cache = \Bitrix\Main\Data\Cache::createInstance();
$cacheId = 'catalog_top_' . $iblockId;
$cachePath = '/catalog/top/';

if ($cache->initCache(3600, $cacheId, $cachePath)) {
    // Valid cache hit: read the stored data
    $data = $cache->getVars();
} elseif ($cache->startDataCache()) {
    // Cache miss: run the query and tag the entry for invalidation
    $taggedCache = \Bitrix\Main\Application::getInstance()->getTaggedCache();
    $taggedCache->startTagCache($cachePath);

    $data = []; // your query result here

    $taggedCache->registerTag('iblock_id_' . $iblockId);
    $taggedCache->endTagCache();

    $cache->endDataCache($data);
}
```
When any element of that infoblock changes, the `iblock_id_N` tag is invalidated automatically, and the next request rebuilds the cache from the database.
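Tags can also be flushed manually, for example from an import handler or a deploy script (a sketch; the tag name must match what was registered when the cache was written):

```php
// Drop every cache entry tagged with this infoblock's tag
\Bitrix\Main\Application::getInstance()
    ->getTaggedCache()
    ->clearByTag('iblock_id_' . $iblockId);
```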
## Work Timeline
| Task | Scope | Expected Effect |
|---|---|---|
| Profiling, identify top-10 queries | 1 day | Problem understanding |
| Add missing indexes | 1–2 days | 2–10x speedup on specific queries |
| Optimize `$select` in components | 2–3 days | 20–40% load reduction |
| Eliminate N+1, add cache | 3–5 days | 3–5x query reduction |
| Full package: indexes + selection + cache + ORM | 1–2 weeks | Page goes from 500+ queries to 50–100 |
SQL optimization isn't a one-time action. After the changes, it needs regular monitoring: review the slow query log weekly and run the tracker on new component releases. Problems accumulate as the catalog grows and data volume increases.