1C-Bitrix Website Performance Profiling
A slow website with no obvious cause is a familiar complaint: "the site got slower, the hosting didn't change, and no code was touched." The first instinct is to add server resources or enable caching. That rarely helps without understanding exactly where time is being lost. Profiling is the process of measuring, not guessing.
Profiling Levels
Performance degradation in a Bitrix project happens at several levels simultaneously, each requiring its own tooling:
| Level | Tools | What it measures |
|---|---|---|
| Browser / network | Lighthouse, WebPageTest, Chrome DevTools | FCP, LCP, TBT, request waterfall |
| PHP application | Bitrix Performance, XHProf, Blackfire | Function execution time, call graph |
| Database | MySQL slow query log, EXPLAIN, Percona PMM | Slow queries, execution plans |
| Server | htop, iostat, perf, Netdata | CPU, I/O, memory, context switches |
Always start at the browser level — it reveals the symptom. Then drill down to PHP, then to the database.
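Before reaching for heavier tools, a single cURL request already separates network time from server time. A minimal sketch using PHP's standard cURL timing fields (the helper name and the sample URL are illustrative, not part of any Bitrix API):

```php
<?php
// Break one request into phases: DNS lookup, time to first byte
// (server generation + network), and total transfer time.
function requestTimings(string $url): array
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_CONNECTTIMEOUT => 5,
        CURLOPT_TIMEOUT        => 30,
    ]);
    curl_exec($ch);
    $info = curl_getinfo($ch);
    curl_close($ch);
    return [
        'dns_s'   => $info['namelookup_time'],
        'ttfb_s'  => $info['starttransfer_time'], // high TTFB → look at PHP/DB
        'total_s' => $info['total_time'],
    ];
}
```

If `ttfb_s` dominates `total_s`, the time is being lost on the server, which is exactly where the next profiling levels apply.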
Bitrix Built-in Profiling
The quickest start is enabling the debugger directly in the admin panel:
```php
// Enable in /bitrix/admin/settings.php → Performance
// or programmatically:
define('BX_DEBUG', true);
define('SHOW_SQLS', true); // displays all SQL queries
```
A panel appears at the bottom of the page showing:
- page generation time
- number of SQL queries and total execution time
- number of included files
- cache statistics (hit/miss)
What to look for: if SQL query count exceeds 100 or total SQL time exceeds 1 s — the problem is in the database. If there are few queries but generation time is high — the problem is in PHP code (heavy computations, slow external API calls).
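The triage rule above can be expressed as a small helper. This is an illustrative sketch of the decision logic only — the function name and cutoffs come from the thresholds in this section, not from any Bitrix API:

```php
<?php
// Rough triage based on the debugger panel numbers: query count,
// total SQL time, and overall page generation time (seconds).
function triageBottleneck(int $sqlQueryCount, float $sqlTimeSec, float $pageTimeSec): string
{
    if ($sqlQueryCount > 100 || $sqlTimeSec > 1.0) {
        return 'database';  // too many queries or too much total SQL time
    }
    if ($pageTimeSec - $sqlTimeSec > 1.0) {
        return 'php';       // time is spent outside the database
    }
    return 'ok';
}

echo triageBottleneck(340, 4.8, 9.0), "\n"; // database
echo triageBottleneck(12, 0.2, 3.5), "\n";  // php
```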
XHProf: PHP Call Graph
For detailed PHP profiling, install XHProf or its fork Tideways:
```bash
# Ubuntu/Debian
apt install php8.1-xhprof
```

```ini
; php.ini or 90-xhprof.ini
extension=xhprof.so
xhprof.output_dir=/var/tmp/xhprof
```
Trigger profiling from local/php_interface/init.php:
```php
if (isset($_GET['__profile']) && $_SERVER['REMOTE_ADDR'] === '127.0.0.1') {
    xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
    register_shutdown_function(function () {
        $data = xhprof_disable();
        $dir = '/var/tmp/xhprof';
        $run = uniqid();
        // The bundled viewer expects the file name "<run>.<source>.xhprof"
        file_put_contents("$dir/$run.bitrix.xhprof", serialize($data));
        // Link to the report
        error_log("XHProf: /xhprof/index.php?run=$run&source=bitrix");
    });
}
```
Append ?__profile=1 to the URL (from localhost only). The result is a call graph with per-function timing. Look for functions with high inclusive time (total time including all callees) — these are the optimization candidates.
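The raw array returned by `xhprof_disable()` can also be inspected without the web viewer. A minimal sketch that ranks functions by inclusive wall time — the helper name is illustrative, but the `"parent==>child"` key format and the `wt` (wall time, microseconds) field follow the xhprof data layout:

```php
<?php
// Sum wall time over every edge where a function appears as the callee;
// that is its inclusive time (itself plus everything it calls).
function topByInclusiveWt(array $xhprofData, int $limit = 5): array
{
    $inclusive = [];
    foreach ($xhprofData as $edge => $metrics) {
        // "main()" is the root and has no "parent==>" prefix
        $child = str_contains($edge, '==>') ? explode('==>', $edge)[1] : $edge;
        $inclusive[$child] = ($inclusive[$child] ?? 0) + $metrics['wt'];
    }
    arsort($inclusive);
    return array_slice($inclusive, 0, $limit, true);
}

// Toy profile: main() calls render(), which calls loadPrices() 340 times
$sample = [
    'main()'             => ['ct' => 1,   'wt' => 9000],
    'main()==>render'    => ['ct' => 1,   'wt' => 8500],
    'render==>loadPrices' => ['ct' => 340, 'wt' => 4800],
];
print_r(topByInclusiveWt($sample));
```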
Blackfire: Production-Grade Profiling
Blackfire.io is a commercial profiler that works without code changes and is safe to use in production:
```bash
# Install agent
curl -sSL https://packages.blackfire.io/gpg.key | apt-key add -
apt install blackfire-agent blackfire-php

# Profile a URL
blackfire curl https://example.com/catalog/
```
Blackfire builds an aggregated call graph over 10 requests (eliminating noise), highlights hot paths, and provides recommendations. It is especially useful for Bitrix D7 projects — the call chains through \Bitrix\Main\ORM are clearly visible.
Profiling Bitrix Components
Sometimes the bottleneck is a specific component. Measure its execution time:
```php
$start = microtime(true);
$APPLICATION->IncludeComponent('bitrix:catalog.section', '.default', [
    'IBLOCK_ID' => 5,
    'CACHE_TYPE' => 'A',
    'CACHE_TIME' => 3600,
    // ...
]);
$elapsed = round((microtime(true) - $start) * 1000, 2);

// Log if slower than the threshold
if ($elapsed > 200) {
    \Bitrix\Main\Diag\Debug::writeToFile(
        "catalog.section: {$elapsed}ms",
        'SLOW_COMPONENT',
        '/local/logs/performance.log'
    );
}
```
\Bitrix\Main\Diag\Debug::writeToFile() is the native Bitrix logging method. It does not use error_log and does not pollute system logs.
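The timing pattern above generalizes to any callable. A sketch with the logger injected, so the same helper works with `Debug::writeToFile()` inside Bitrix or plain `error_log()` elsewhere (the helper name is an assumption, not a Bitrix API):

```php
<?php
// Time a callable; invoke the slow-call logger only past the threshold.
function timeCall(callable $fn, float $thresholdMs, callable $logSlow)
{
    $start = microtime(true);
    $result = $fn();
    $elapsed = round((microtime(true) - $start) * 1000, 2);
    if ($elapsed > $thresholdMs) {
        $logSlow($elapsed);
    }
    return $result;
}

// Usage: a ~20 ms "component" against a 5 ms threshold
$messages = [];
$value = timeCall(function () {
    usleep(20000);           // simulate work
    return 'done';
}, 5.0, function ($ms) use (&$messages) {
    $messages[] = "slow call: {$ms}ms";
});
echo $value, ' ', count($messages), "\n"; // done 1
```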
Cache Efficiency Analysis
Bitrix uses several cache layers: file-based (/bitrix/cache/), memcached/Redis, and HTML page cache. To assess efficiency:
```php
// Cache statistics via API
$cacheManager = \Bitrix\Main\Application::getInstance()->getManagedCache();

// For memcached: inspect stats via telnet
// telnet localhost 11211
// stats
// Hits, misses, evictions — key metrics
```
If get_misses is close to get_hits — the cache is barely effective. Common causes: TTL too short, frequent invalidation on data updates, cache keyed by $_SERVER['REQUEST_URI'] without considering cookies.
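A quick way to put a number on this is the hit ratio. A minimal sketch over the `get_hits` / `get_misses` counters from the `stats` dump (reading them via the `Memcached` extension's `getStats()` is one option; the helper name is illustrative):

```php
<?php
// Hit ratio = hits / (hits + misses); below ~0.8 the cache is doing
// little work and TTL / invalidation strategy is worth reviewing.
function cacheHitRatio(int $hits, int $misses): float
{
    $total = $hits + $misses;
    return $total > 0 ? round($hits / $total, 3) : 0.0;
}

// Example counters; with real memcached something like:
// $stats = (new Memcached())->getStats()['localhost:11211'] ?? [];
echo cacheHitRatio(95000, 5000), "\n"; // 0.95
```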
Case Study: FMCG Distributor
A B2B portal on Bitrix Business edition, a catalog of 85,000 items, and personalized prices per contractor group. Complaint: catalog page takes 8–12 s to load.
Diagnosis via Bitrix debugger: 340 SQL queries, total time 4.8 s. XHProf pointed to CPrice::GetBasePrice() called 340 times — the price was fetched per item individually instead of a batch query.
Fix: rewrite price fetching as a single batch query via the D7 ORM method \Bitrix\Catalog\PriceTable::getList() with an array ID filter:
```php
$priceResult = \Bitrix\Catalog\PriceTable::getList([
    'filter' => ['PRODUCT_ID' => $productIds, 'CATALOG_GROUP_ID' => $userGroupId],
    'select' => ['PRODUCT_ID', 'PRICE', 'CURRENCY'],
]);

$prices = [];
while ($price = $priceResult->fetch()) {
    $prices[$price['PRODUCT_ID']] = $price;
}
```
Result: 340 queries → 1 query for prices, generation time: 9 s → 0.8 s. HTML component cache was also enabled with tagged invalidation via CACHE_GROUPS.
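One caveat with a catalog of 85,000 items: a single giant `IN (...)` filter can run into packet-size and memory limits, so the ID list is often processed in chunks. A sketch of the pattern with the fetcher injected — `fetchPricesChunked` is a hypothetical helper standing in for the `PriceTable::getList()` call above:

```php
<?php
// One batched query per chunk instead of one query per item or one
// unbounded query for everything.
function fetchPricesChunked(array $productIds, callable $fetchPrices, int $chunkSize = 500): array
{
    $prices = [];
    foreach (array_chunk($productIds, $chunkSize) as $chunk) {
        $prices += $fetchPrices($chunk);   // union keyed by PRODUCT_ID
    }
    return $prices;
}

// Usage with a fake fetcher that "prices" every ID at 10 currency units:
$result = fetchPricesChunked(range(1, 1200), function (array $ids) {
    return array_combine($ids, array_map(fn ($id) => ['PRICE' => 10], $ids));
}, 500);
echo count($result), "\n"; // 1200
```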
Continuous Performance Monitoring
One-off profiling provides no protection against regressions. Set up a baseline:
- New Relic APM or Datadog — PHP agent, real-time metrics
- Bitrix Monitor (the monitor module) — built-in monitoring with history
- Alerts to Telegram/Email when thresholds are exceeded (response time > 2 s, 500 errors > N per minute)
Timelines
| Scope | Deliverables | Duration |
|---|---|---|
| Audit | Diagnostics, prioritized report | 2–3 days |
| Optimization of identified issues | Depends on bottleneck complexity | 5–20 days |
| Monitoring setup | Tools + alerts + baseline | 2–4 days |