Setting up logging for the auto-populate process for 1C-Bitrix
The parser works in the background, via cron or agents. When something goes wrong, logs are the only way to figure out what happened. Without structured logging, debugging a parser becomes guesswork: did the source return an empty page, did an XPath expression break, or did PHP hit its memory limit on the 50,000th product? Let's organize logging so that any incident is resolved in minutes.
Logging levels
Use standard PSR-3 levels, even if you don't use Monolog:
- DEBUG — each HTTP request to source, response time, body size. Enable only during debugging.
- INFO — parser start/stop, number of processed elements, count of created/updated records in info block.
- WARNING — skipped element (failed validation), slow source response (>5 sec), request retry.
- ERROR — parsing exception, error writing to b_iblock_element, invalid API response.
In production, keep the INFO level. Switch to DEBUG via a b_option setting or the flag file /local/parser_debug.flag, with no restart or deploy.
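A minimal sketch of such a switch, assuming the module id `parser` and option name `log_level` (both are project conventions, not Bitrix built-ins); the `COption` call is guarded so the function also runs outside Bitrix:

```php
<?php
// Sketch: resolve the effective log level without a deploy.
// A flag file dropped on the server instantly enables DEBUG;
// otherwise the b_option value (editable in the admin panel) wins.
function resolveLogLevel(string $flagFile = '/local/parser_debug.flag'): string
{
    if (file_exists(($_SERVER['DOCUMENT_ROOT'] ?? '') . $flagFile)) {
        return 'DEBUG';
    }
    if (class_exists('COption')) {
        return COption::GetOptionString('parser', 'log_level', 'INFO');
    }
    return 'INFO'; // safe default outside Bitrix
}
```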
Where to write logs
Option 1: File system. Simplest. Write to /local/logs/parser/YYYY-MM-DD.log. Line format:
[2024-03-15 14:23:01] INFO | source=competitor_a | action=update | iblock_id=12 | element_id=45678 | duration=0.34s
One event per line. The pipe delimiter | is convenient for grep and awk. Mandatory fields: timestamp, level, source (parsing source identifier), action (fetch/parse/validate/import).
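A sketch of a formatter producing this exact line layout (the helper name `formatLogLine` and the extra-fields convention are assumptions, not Bitrix API):

```php
<?php
// Sketch: build one pipe-delimited log line in the format shown above.
function formatLogLine(string $level, string $source, string $action, array $extra = []): string
{
    $parts = [
        '[' . date('Y-m-d H:i:s') . ']',
        $level,
        'source=' . $source,
        'action=' . $action,
    ];
    foreach ($extra as $key => $value) {
        $parts[] = $key . '=' . $value; // e.g. iblock_id=12, duration=0.34s
    }
    // "[timestamp]" followed by "LEVEL | key=value | ..." joined with pipes
    return array_shift($parts) . ' ' . implode(' | ', $parts);
}
```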
Rotation: via logrotate or a custom agent that deletes files older than 30 days. Without rotation, DEBUG-level logs grow by gigabytes in a week.
Option 2: b_event_log table. Standard Bitrix journal. Call CEventLog::Add() with parameters:
```php
CEventLog::Add([
    'SEVERITY'      => 'INFO',
    'AUDIT_TYPE_ID' => 'PARSER_IMPORT',
    'MODULE_ID'     => 'iblock',
    'ITEM_ID'       => $elementId,
    'DESCRIPTION'   => json_encode($context, JSON_UNESCAPED_UNICODE),
]);
```
Pros: viewable in the admin panel, filtering, access for managers without SSH. Cons: b_event_log is not designed for thousands of records per minute and slows down under intensive parsing. Use it for WARNING/ERROR, not DEBUG.
Option 3: Custom table. Create a parser_log table with fields id, created_at, level, source, action, element_id, message, context (JSON). Index on (created_at, level, source). This is optimal for projects where the parser is a critical subsystem and analytical queries over the log are needed.
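A possible MySQL definition of such a table; column sizes and the index name are assumptions, run it once during installation:

```sql
CREATE TABLE parser_log (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    created_at DATETIME        NOT NULL,
    level      VARCHAR(10)     NOT NULL,  -- DEBUG/INFO/WARNING/ERROR
    source     VARCHAR(100)    NOT NULL,  -- parsing source identifier
    action     VARCHAR(50)     NOT NULL,  -- fetch/parse/validate/import
    element_id INT UNSIGNED    NULL,
    message    TEXT            NOT NULL,
    context    JSON            NULL,
    PRIMARY KEY (id),
    KEY ix_created_level_source (created_at, level, source)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```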
Context — main part of log
The line "parsing error" is useless. A useful line: "XPath //div[@class="price"]/span returned 0 nodes (expected 1), URL: https://source.com/product/123, HTTP 200, body size: 45KB".
Minimal context for each level:
| Level | Required context |
|---|---|
| DEBUG | URL, HTTP code, response time, body size, User-Agent |
| INFO | Source, action, info block element ID, result (created/updated/skipped) |
| WARNING | Source, URL, skip reason, field value, expected format |
| ERROR | All above plus stack trace, memory_get_peak_usage(), $arFields content |
Wrapper implementation
Create a ParserLogger class in /local/php_interface/classes/ (or in your module's namespace). Example call:
```php
ParserLogger::info('import', [
    'source'         => 'competitor_a',
    'element_id'     => 45678,
    'action'         => 'update',
    'fields_changed' => ['PRICE', 'QUANTITY'],
]);
```
Internally it writes to the file and duplicates WARNING and above to b_event_log. The level is switched via COption::GetOptionString('parser', 'log_level', 'INFO').
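A sketch of such a wrapper. The directory default and the level-priority map are assumptions; the CEventLog/COption calls are guarded with class_exists so the class also runs outside Bitrix (for example, in tests):

```php
<?php
// Sketch: file logger with a level threshold, duplicating WARNING+ to b_event_log.
class ParserLogger
{
    public static string $dir = '/local/logs/parser'; // override per environment
    private const PRIORITY = ['DEBUG' => 0, 'INFO' => 1, 'WARNING' => 2, 'ERROR' => 3];

    public static function debug(string $action, array $ctx): void   { self::log('DEBUG', $action, $ctx); }
    public static function info(string $action, array $ctx): void    { self::log('INFO', $action, $ctx); }
    public static function warning(string $action, array $ctx): void { self::log('WARNING', $action, $ctx); }
    public static function error(string $action, array $ctx): void   { self::log('ERROR', $action, $ctx); }

    private static function log(string $level, string $action, array $ctx): void
    {
        $min = class_exists('COption')
            ? COption::GetOptionString('parser', 'log_level', 'INFO')
            : 'INFO';
        if (self::PRIORITY[$level] < (self::PRIORITY[$min] ?? self::PRIORITY['INFO'])) {
            return; // below the configured threshold
        }

        // Pipe-delimited line: [timestamp] LEVEL | action=... | key=value | ...
        $parts = ['[' . date('Y-m-d H:i:s') . ']', $level, 'action=' . $action];
        foreach ($ctx as $key => $value) {
            $parts[] = $key . '=' . (is_scalar($value)
                ? $value
                : json_encode($value, JSON_UNESCAPED_UNICODE));
        }
        $line = array_shift($parts) . ' ' . implode(' | ', $parts) . PHP_EOL;

        @mkdir(self::$dir, 0775, true);
        file_put_contents(self::$dir . '/' . date('Y-m-d') . '.log', $line, FILE_APPEND);

        // Duplicate WARNING and above to the standard Bitrix journal.
        if (self::PRIORITY[$level] >= self::PRIORITY['WARNING'] && class_exists('CEventLog')) {
            CEventLog::Add([
                'SEVERITY'      => $level === 'WARNING' ? 'WARNING' : 'ERROR',
                'AUDIT_TYPE_ID' => 'PARSER_IMPORT',
                'MODULE_ID'     => 'iblock',
                'ITEM_ID'       => $ctx['element_id'] ?? '',
                'DESCRIPTION'   => json_encode($ctx, JSON_UNESCAPED_UNICODE),
            ]);
        }
    }
}
```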
Monitoring based on logs
Logs by themselves don't help if nobody reads them. Add an agent that runs every 15 minutes and counts ERROR records over the period. If the threshold is exceeded, send a notification (mail event or Telegram). This turns logging from a passive tool into an active monitoring system.
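A sketch of the agent body over file logs. The threshold, the directory default, and the mail event type PARSER_ERRORS_SPIKE are all assumptions; register the function as a Bitrix agent with a 15-minute interval:

```php
<?php
// Sketch: count recent ERROR lines in today's log and alert on a spike.
function CheckParserErrorsAgent(string $dir = '/local/logs/parser', int $threshold = 10): string
{
    $since = time() - 15 * 60; // the 15-minute window
    $file  = $dir . '/' . date('Y-m-d') . '.log';

    $errors = 0;
    if (is_readable($file)) {
        foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
            // Lines look like: [2024-03-15 14:23:01] ERROR | source=... | ...
            if (preg_match('/^\[([\d\- :]{19})\] ERROR \|/', $line, $m)
                && strtotime($m[1]) >= $since) {
                $errors++;
            }
        }
    }

    if ($errors > $threshold && class_exists('CEvent')) {
        // PARSER_ERRORS_SPIKE is a hypothetical mail event type; create it in the admin.
        CEvent::Send('PARSER_ERRORS_SPIKE', 's1', ['ERROR_COUNT' => $errors]);
    }

    return 'CheckParserErrorsAgent();'; // Bitrix agents must return their own call string
}
```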
Setup summary for one day
- ParserLogger class with DEBUG/INFO/WARNING/ERROR levels.
- File logs with rotation in /local/logs/parser/.
- Duplicate WARNING+ to b_event_log.
- Level switching via admin without deploy.
- Error monitoring agent in logs.