Monitoring RAM consumption in 1C-Bitrix


PHP-FPM uses a process model: each request is handled by a separate worker process, and each worker consumes RAM independently. With pm.max_children = 50 and 64 MB per worker, that is already 3.2 GB for PHP alone. In practice, Bitrix requests involving heavy components (1C imports, price list generation, complex reports) consume 128–256 MB each. Once the server starts swapping, response times degrade dramatically.
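The sizing arithmetic above can be sketched as a small helper; the function name and the reserved-memory figure are illustrative, not part of any Bitrix API:

```php
<?php
// Sketch: estimate how many PHP-FPM workers fit in RAM.
// maxWorkers() is an illustrative helper, not a Bitrix function.
function maxWorkers(int $totalRamMb, int $reservedMb, int $avgWorkerMb): int
{
    // Leave headroom for MySQL, nginx and the OS, then divide the rest
    // by the average worker footprint.
    return intdiv(max(0, $totalRamMb - $reservedMb), $avgWorkerMb);
}

// 8 GB server, 2 GB reserved for MySQL/nginx/OS, 128 MB per worker:
echo maxWorkers(8192, 2048, 128), PHP_EOL; // 48
```

This is the same formula as the pm.max_children comment further down: (RAM − reserved) / avg_worker_memory.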

What Consumes Memory in Bitrix

The main drivers of worker RSS growth:

  • 1C imports via CommerceML — CIBlockCMLImport loads the entire XML file into memory. With files larger than 50 MB, a worker can easily reach 200–300 MB.
  • Iblock operations — CIBlockElement::GetList() without nTopCount returns the full result set at once.
  • In-process cache — when managed_cache (memcached) is enabled, a local copy of cached data is duplicated inside each PHP process.
  • Memory leaks in third-party modules — static properties and global arrays accumulating over the worker's lifetime when pm.max_requests = 0.
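
The GetList() problem has a standard cure: fetch in fixed-size pages so memory stays flat. Below is a framework-agnostic sketch of the pattern; in Bitrix itself the same idea means passing ['nPageSize' => 500] (or 'nTopCount') in the navigation parameters of CIBlockElement::GetList() and iterating page by page. The helper names here are illustrative:

```php
<?php
// Sketch: stream a large result set in fixed-size chunks instead of
// materializing it all at once. chunkedFetch() is an illustrative helper.
function chunkedFetch(callable $fetchPage, int $pageSize): \Generator
{
    for ($page = 1; ; $page++) {
        $rows = $fetchPage($page, $pageSize); // e.g. a wrapper around GetList()
        foreach ($rows as $row) {
            yield $row;
        }
        if (count($rows) < $pageSize) {
            break; // last, short page — nothing more to fetch
        }
    }
}

// Demo with an in-memory "table" of 1050 rows:
$table = range(1, 1050);
$fetch = fn(int $page, int $size) => array_slice($table, ($page - 1) * $size, $size);

$count = 0;
foreach (chunkedFetch($fetch, 500) as $row) {
    $count++;
}
echo $count, PHP_EOL; // 1050
```

Only one page of rows is alive at any time, so peak memory is bounded by the page size rather than the table size.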

Monitoring Tools

System level — every few seconds:

# Total memory used by PHP-FPM processes
ps aux --sort=-%mem | grep php-fpm | awk '{sum += $6} END {print sum/1024 " MB"}'

# Per-worker breakdown
ps aux | grep php-fpm | grep -v grep | awk '{print $6/1024 " MB\t" $11}'
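
The same RSS figure that ps reports can be read from inside the worker itself via /proc, which (unlike memory_get_usage()) includes extension and opcache overhead. A minimal sketch, Linux-only, with an illustrative function name:

```php
<?php
// Sketch: read the current process's real RSS from /proc/self/status.
// Returns kilobytes, or null where procfs is unavailable (non-Linux).
function selfRssKb(): ?int
{
    $status = @file_get_contents('/proc/self/status');
    if ($status !== false && preg_match('/^VmRSS:\s+(\d+)\s+kB/mi', $status, $m)) {
        return (int)$m[1];
    }
    return null;
}

$rss = selfRssKb();
if ($rss !== null) {
    printf("RSS: %.1f MB\n", $rss / 1024);
}
```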

PHP memory_limit — in-code diagnostics:

// Shows the peak consumption for the lifetime of the request
$peak = memory_get_peak_usage(true);
if ($peak > 64 * 1024 * 1024) {  // > 64 MB
    \Bitrix\Main\Diag\Debug::writeToFile(
        sprintf('Peak memory: %.1f MB, URI: %s', $peak / 1048576, $_SERVER['REQUEST_URI']),
        'MEM_HIGH',
        '/local/logs/memory.log'
    );
}

Add this code to an OnEndBufferContent handler — that way it covers all requests with negligible instrumentation overhead.
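
A framework-agnostic sketch of the same idea follows, using register_shutdown_function so it is runnable outside Bitrix; in a real project you would register the callback for OnEndBufferContent in /local/php_interface/init.php instead. The threshold and log path are illustrative:

```php
<?php
// Sketch: append a log line when the request's peak memory crosses a
// threshold. logHighMemory() is an illustrative helper, not a Bitrix API.
function logHighMemory(string $logFile, int $thresholdBytes = 64 * 1024 * 1024): void
{
    $peak = memory_get_peak_usage(true);
    if ($peak > $thresholdBytes) {
        $line = sprintf(
            "%s\tPeak: %.1f MB\tURI: %s\n",
            date('Y-m-d H:i:s'),
            $peak / 1048576,
            $_SERVER['REQUEST_URI'] ?? '(cli)'
        );
        file_put_contents($logFile, $line, FILE_APPEND | LOCK_EX);
    }
}

// Run at the very end of the request, after all buffers are flushed:
register_shutdown_function('logHighMemory', '/local/logs/memory.log');
```

LOCK_EX matters here: several workers may cross the threshold at once, and without the lock their log lines can interleave.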

Prometheus + node_exporter + Grafana:

# prometheus.yml scrape config
- job_name: node
  static_configs:
    - targets: ['localhost:9100']

The metric node_memory_MemAvailable_bytes represents available memory. Alert when it drops below 512 MB:

# prometheus alerting rule (routed through alertmanager)
- alert: LowMemory
  expr: node_memory_MemAvailable_bytes < 536870912
  for: 5m
  annotations:
    summary: "Low RAM on {{ $labels.instance }}"

PHP-FPM: Parameters to Control

Key settings in /etc/php/8.1/fpm/pool.d/bitrix.conf:

pm = dynamic
pm.max_children = 30         ; no more than (RAM - 1GB) / avg_worker_memory
pm.max_requests = 500        ; restart worker after 500 requests (guards against leaks)
pm.process_idle_timeout = 30s

; Status page for monitoring
pm.status_path = /php-fpm-status

pm.max_requests = 500 — the worker restarts after 500 requests, releasing any accumulated leaked memory. With clean code this is unnecessary, but third-party Bitrix modules sometimes accumulate static state.
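
The pm.status_path page also serves JSON when you append ?json to the URL, which makes pool saturation easy to check from a script. A minimal sketch with an illustrative helper name and example payload:

```php
<?php
// Sketch: parse the JSON form of the php-fpm status page and compute how
// busy the pool is. poolSaturation() is an illustrative helper.
function poolSaturation(string $statusJson): float
{
    $s = json_decode($statusJson, true);
    $total = $s['total processes'] ?? 0;   // keys contain spaces in FPM's JSON
    $active = $s['active processes'] ?? 0;
    return $total > 0 ? $active / $total : 0.0;
}

// Example payload, trimmed to the relevant keys:
$json = '{"pool":"bitrix","active processes":27,"idle processes":3,"total processes":30}';
printf("%.0f%% of workers busy\n", 100 * poolSaturation($json)); // 90% of workers busy
```

Sustained saturation near 100% means requests are queueing and pm.max_children (or the per-worker footprint) needs revisiting.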

Case Study: Site with Daily Import

An online store ran a 1C import at 02:00 via cron_events. After the import, several PHP-FPM workers remained resident with an RSS of 250–300 MB; the 8 GB server started swapping, and the site stayed slow until 07:00 the next morning.

Diagnosis: pm.max_requests was 0, so bloated workers never restarted. Setting pm.max_requests = 200 and lowering memory_limit from 512M to 256M resolved the issue: bloated workers exited after 200 requests and their memory was returned to the system.
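
A complementary defense inside the import step itself is to pause the batch before memory_limit is hit, so the job can resume from the last processed offset — the same step-by-step approach Bitrix's 1C import uses. A sketch, with illustrative helper names:

```php
<?php
// Sketch: convert a memory_limit string ("256M", "1G", "-1") to bytes and
// decide whether a long batch job should pause and resume later.
// Both helpers are illustrative, not Bitrix APIs. Requires PHP 8+.
function parseMemoryLimit(string $limit): int
{
    if ($limit === '-1') {
        return PHP_INT_MAX; // unlimited
    }
    $value = (int)$limit;
    return match (strtoupper(substr($limit, -1))) {
        'G' => $value * 1024 ** 3,
        'M' => $value * 1024 ** 2,
        'K' => $value * 1024,
        default => $value, // plain bytes
    };
}

// Pause once usage is within 20% of the limit (safety margin 0.8):
function shouldPause(float $safetyMargin = 0.8): bool
{
    $limit = parseMemoryLimit((string)ini_get('memory_limit'));
    return memory_get_usage(true) > $safetyMargin * $limit;
}
```

Inside the batch loop you would call shouldPause() every N records and, when it returns true, persist the current offset and end the step instead of pushing on toward an out-of-memory fatal.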