DevOps for 1C-Bitrix
`rsync -avz` to production on a Friday evening, restart php-fpm, and the site responds with 502 — because `.settings.php` still has local database connection settings. A classic "old school" Bitrix deploy. We build a full DevOps cycle for 1C-Bitrix projects: from Docker environments to Telegram alerts, so that deployment is a routine, not a lottery.
Problems That DevOps Solves on Bitrix Projects
Bitrix historically lived in an "edit files via FTP on the live server" paradigm. In 2026, dozens of teams still work this way, and here's what it leads to:
- Two developers edit `init.php` simultaneously — one overwrites the other's changes. The `init.php` file in Bitrix is the entry point for event handlers, custom functions, and module connections; losing edits here means losing business logic
- Updating a module through the admin panel breaks a custom component template, because nobody tracked which files were changed in `/local/templates/`
- Site outages are discovered from a client, not from monitoring
- There's no staging environment — "let's quickly test on production," and `bitrix:catalog.section` disappears
CI/CD: From Commit to Production, Hands-Free
Git — we migrate the project from FTP to Git (GitLab, GitHub, Bitbucket). Branch structure: `main` (production), `staging`, `develop`, feature branches. `.gitignore` for Bitrix is a non-trivial matter:
```
/bitrix/cache/
/bitrix/managed_cache/
/bitrix/stack_cache/
/upload/
/bitrix/php_interface/dbconn.php
/bitrix/.settings.php
/bitrix/license_key.php
```
Miss `managed_cache/` — the repository bloats by gigabytes. Forget to exclude `license_key.php` — the license key leaks.
CI pipeline checks code automatically:
- PHPStan level 5+ for static analysis — catches calls to non-existent `CIBlockElement` methods before they reach the server
- PHP_CodeSniffer with the Bitrix standard
- PHPUnit for business logic, Jest for frontend
- `composer audit` — dependency vulnerability checks
- Frontend build (webpack/Vite)
CD pipeline deploys automatically:
- Merge into `staging` — deploy to staging
- Merge into `main` — deploy to production (with manual confirmation or automatically)
- Zero-downtime via symlink strategy: new version goes into a separate directory, switching the `current` symlink takes milliseconds; `upload/` lives outside release directories
- Automatic rollback on errors — if the healthcheck after deploy doesn't return 200, the symlink rolls back
Tools: GitLab CI/CD, GitHub Actions, Deployer (PHP). Deployer is especially convenient for Bitrix — it has recipes for symlink deployment and shared directories.
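Put together, the two pipelines above might look roughly like this in GitLab CI. This is a minimal sketch, assuming Composer-managed tooling and Deployer recipes named `staging` and `production`; every job name, image tag, and path here is illustrative:

```yaml
stages: [lint, test, deploy]

phpstan:
  stage: lint
  image: php:8.2-cli
  script:
    - composer install --no-interaction
    - vendor/bin/phpstan analyse --level=5 local/

phpunit:
  stage: test
  image: php:8.2-cli
  script:
    - composer install --no-interaction
    - vendor/bin/phpunit

deploy_staging:
  stage: deploy
  script:
    - vendor/bin/dep deploy staging
  only: [staging]

deploy_production:
  stage: deploy
  script:
    - vendor/bin/dep deploy production
  when: manual          # manual confirmation before touching production
  only: [main]
```

With Deployer handling the symlink switch, a failed healthcheck in the deploy recipe leaves the previous release active.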
Docker for Bitrix
Docker solves "it works on my machine" once and for all.
Local development — `docker-compose.yml`:
- nginx + php-fpm 8.1/8.2 + MySQL 8.0 (or MariaDB 10.6) + Redis + Memcached
- Configuration as close to production as possible — same versions, same PHP modules
- New developer: `git clone` + `docker-compose up -d` — writing code within 5 minutes
- Parallel work on projects with different PHP versions — via separate compose files
Bitrix specifics in Docker — here be dragons:
- `/upload/` is mounted as a named volume, not a bind mount — otherwise on Windows/Mac there will be permission and performance issues
- Bitrix cron tasks (`/bitrix/modules/main/tools/cron_events.php`) — via a separate container with the same image, or supervisord inside the container
- The "Proactive Protection" module (`security`) blocks requests if it detects a reverse proxy. A proper `REMOTE_ADDR` is needed via `set_real_ip_from` in nginx and the `realip` module
- `bitrix/php_interface/dbconn.php` and `.settings.php` — via environment variables or a separate `.env` file, not through a volume with production configs
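A minimal `docker-compose.yml` along those lines might look like this. It is a sketch, not a canonical setup: image tags, service names, and the `./docker/` build contexts are placeholders:

```yaml
services:
  nginx:
    image: nginx:1.24
    ports: ["80:80"]
    volumes:
      - ./:/var/www/html
      - ./docker/nginx/conf.d:/etc/nginx/conf.d
    depends_on: [php]

  php:
    build: ./docker/php            # php-fpm 8.2 + the extensions Bitrix requires
    volumes:
      - ./:/var/www/html
      - upload_data:/var/www/html/upload   # named volume, not a bind mount
    env_file: .env                 # DB credentials consumed by .settings.php

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: bitrix
      MYSQL_ROOT_PASSWORD: local-only-password
    volumes:
      - db_data:/var/lib/mysql

  redis:
    image: redis:7

  cron:
    build: ./docker/php            # same image as the php service
    command: ["sh", "-c", "while true; do php /var/www/html/bitrix/modules/main/tools/cron_events.php; sleep 60; done"]
    volumes:
      - ./:/var/www/html

volumes:
  upload_data:
  db_data:
```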
Production:
- Multi-stage build: build stage for assets, production stage with a minimal image
- Docker Registry for versioned images
- Orchestration via Docker Swarm or Kubernetes for large projects
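The multi-stage idea can be sketched in a Dockerfile like this (base images, paths, and the build command are assumptions to adjust to the project's actual asset pipeline):

```dockerfile
# Stage 1: build frontend assets; node_modules never reaches production
FROM node:20 AS assets
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build                  # webpack/Vite output into local/templates/

# Stage 2: minimal production image with PHP only
FROM php:8.2-fpm AS production
RUN docker-php-ext-install mysqli opcache
WORKDIR /var/www/html
COPY --from=assets /app /var/www/html
```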
Configuring nginx and php-fpm
The difference between "the site is slow" and "200ms TTFB" is in the configuration.
nginx:
- Location blocks for Bitrix: `urlrewrite.php` handles clean URLs, `/bitrix/admin/` is restricted by IP via `allow`/`deny`
- `expires 30d` for static files — CSS, JS, and images served from browser cache
- Brotli (15–20% better than gzip for text): `brotli on; brotli_comp_level 6;`
- Rate limiting on `/bitrix/tools/` — protection against brute force and basic DDoS
- Preload hints for critical resources (note: HTTP/2 server push is deprecated in modern browsers and nginx; `Link: rel=preload` headers are the safer option)
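The nginx points above, combined in one fragment. A sketch, not a drop-in config: IP ranges, zone sizes, and socket paths are placeholders, and `limit_req_zone` must live in the `http` context:

```nginx
# Restore the real client IP behind a reverse proxy,
# so Proactive Protection sees the visitor, not the proxy
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;

# Rate-limiting zone for /bitrix/tools/ (http context)
limit_req_zone $binary_remote_addr zone=bxtools:10m rate=5r/s;

server {
    listen 443 ssl http2;
    root /var/www/html/current;    # symlink to the active release

    location /bitrix/admin/ {
        allow 203.0.113.0/24;      # office IP range (example)
        deny all;
        try_files $uri $uri/ /bitrix/admin/index.php$is_args$args;
    }

    location /bitrix/tools/ {
        limit_req zone=bxtools burst=10 nodelay;
        try_files $uri $uri/ /bitrix/urlrewrite.php$is_args$args;
    }

    location ~* \.(css|js|jpg|jpeg|png|gif|webp|svg|woff2?)$ {
        expires 30d;               # static files from browser cache
        access_log off;
    }

    location / {
        try_files $uri $uri/ /bitrix/urlrewrite.php$is_args$args;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```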
php-fpm:
- `pm = dynamic`, calculating `pm.max_children` by the formula `(RAM − RAM of other services) / average memory per process`. For Bitrix, the average is typically 40–80 MB
- OPcache: `opcache.memory_consumption=256` (the default 128 MB isn't enough — Bitrix loads thousands of files), `opcache.max_accelerated_files=20000`, `opcache.validate_timestamps=0` in production (reset via `cachetool opcache:reset` on deploy)
- `php.ini`: `memory_limit=256M` (for heavy import operations — up to 512M), `max_execution_time=60`, `upload_max_filesize=100M`
- `slowlog` with `request_slowlog_timeout=5s` — finding bottlenecks before users do
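The `pm.max_children` formula is worth a quick sanity check before editing the pool config. A tiny helper (the server sizes below are illustrative, not a recommendation):

```python
def max_children(total_ram_mb: int, other_services_mb: int,
                 avg_process_mb: int = 60) -> int:
    """pm.max_children = (RAM - RAM of other services) / avg memory per process."""
    available_mb = total_ram_mb - other_services_mb
    # Never drop below one worker, even on tiny machines
    return max(1, available_mb // avg_process_mb)

# 8 GB server, ~3 GB reserved for MySQL/Redis/OS, ~60 MB per Bitrix worker:
print(max_children(8192, 3072))  # -> 85
```

If the result is higher than what the server handles under load testing, the per-process estimate was too optimistic: measure real worker memory with `ps` before trusting the default.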
Monitoring and Logging
Infrastructure:
- Prometheus + Grafana: CPU, RAM, disk, network, service health
- Alerts: CPU > 80% for 5 minutes, free RAM < 500 MB, disk > 85%, php-fpm queue > 0 (a queue means there aren't enough workers)
- Node Exporter, MySQL Exporter, PHP-FPM Exporter — metrics collected from every component
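The alert thresholds above translate into Prometheus rules roughly like this (a sketch; metric names assume node_exporter and a php-fpm exporter with its default `phpfpm_*` naming):

```yaml
groups:
  - name: bitrix-infra
    rules:
      - alert: HighCPU
        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
        for: 5m
      - alert: LowFreeRAM
        expr: node_memory_MemAvailable_bytes < 500 * 1024 * 1024
      - alert: DiskAlmostFull
        expr: (1 - node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 > 85
      - alert: PhpFpmQueue
        expr: phpfpm_listen_queue > 0    # any queue means too few workers
```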
Application:
- Uptime check every 60 seconds — if the site goes down, a Telegram alert within a minute
- Response time for key URLs: `/`, `/catalog/`, `/personal/order/make/`
- Sentry for PHP errors — not `tail /var/log/php-errors.log`, but structured errors with context
- Monitoring Bitrix agents (`b_agent` in the database) — a stuck agent can silently break 1C data exchange for hours. We check `NEXT_EXEC < NOW() - INTERVAL 1 HOUR`
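The `b_agent` check is easy to script. A sketch of the comparison logic: `find_stuck_agents` and the sample rows are hypothetical; in production the rows would come from something like `SELECT ID, NAME, NEXT_EXEC FROM b_agent WHERE ACTIVE = 'Y'`, and the result would feed a Telegram alert:

```python
from datetime import datetime, timedelta

def find_stuck_agents(rows, now=None, threshold=timedelta(hours=1)):
    """Return (id, name) for agents whose NEXT_EXEC is more than
    `threshold` in the past — i.e. agents that should have run but didn't."""
    now = now or datetime.now()
    return [(agent_id, name) for agent_id, name, next_exec in rows
            if next_exec < now - threshold]

now = datetime(2026, 1, 10, 12, 0)
rows = [
    (1, "catalog exchange", datetime(2026, 1, 10, 8, 0)),   # 4 hours overdue
    (2, "email queue", datetime(2026, 1, 10, 11, 55)),      # on schedule
]
print(find_stuck_agents(rows, now=now))  # -> [(1, 'catalog exchange')]
```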
Logging:
- ELK Stack or Loki + Grafana — nginx access/error, php-fpm slow log, MySQL slow query log, Bitrix errors — all in one place
- Rotation via logrotate — without it, `access.log` eats up 50 GB within six months
Staging Environment
- Identical to production: same versions of nginx, PHP, MySQL, same modules, same `php.ini` settings
- Automatic update on merge into the `staging` branch
- Periodic database clone from production, with personal data anonymization — `UPDATE b_user SET EMAIL = CONCAT('user', ID, '@test.local'), PERSONAL_PHONE = ''` — for data privacy compliance
- HTTP Basic Auth or IP filtering. `robots.txt` with `Disallow: /`
- Sandbox mode for payment gateways to test payments
Ansible: Infrastructure as Code
Need a new server? `ansible-playbook site.yml -l production` — configured identically to the current one in 15 minutes:
- Playbooks: nginx, php-fpm, MySQL, Redis, certbot, firewall
- Reusable roles: `common` (users, SSH, NTP), `web` (nginx + php-fpm), `db` (MySQL + backup), `monitoring` (Prometheus + exporters)
- Idempotency — re-running doesn't break anything
- Inventory: `[production]`, `[staging]`, `[development]` — server group management
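An inventory for the groups above might look like this (host names and group variables are illustrative):

```ini
; inventory.ini sketch
[production]
web1.example.com
web2.example.com

[staging]
stage.example.com

[production:vars]
php_version=8.2

[staging:vars]
php_version=8.2
```

With this layout, `ansible-playbook site.yml -l staging` targets only the staging group, while the same roles apply everywhere.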
Backup
| Component | Frequency | Retention | Method |
|---|---|---|---|
| MySQL database | Every 6 hours | 30 days | mysqldump --single-transaction + gzip |
| Files (upload/) | Daily | 14 days | Incremental rsync |
| Full backup | Weekly | 60 days | tar + gpg encryption |
| Server configs | On change | In Git | Ansible playbooks |
- Geographic distribution — S3-compatible storage + separate server in another data center
- Monthly test restore. A backup you've never restored from isn't a backup — it's an illusion of safety
- Cron with notifications: if a backup fails — immediate alert
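Tied together in cron, the schedule from the table might look like this. The script paths and the `notify.sh` helper are hypothetical; the real scripts would wrap `mysqldump`, `rsync`, and a Telegram API call:

```
# m  h  dom mon dow  command
0 */6 *   *   *   /opt/backup/mysql_dump.sh   || /opt/backup/notify.sh "MySQL backup failed"
30 3  *   *   *   /opt/backup/upload_rsync.sh || /opt/backup/notify.sh "upload/ backup failed"
0  4  *   *   0   /opt/backup/full_backup.sh  || /opt/backup/notify.sh "weekly full backup failed"
```

The `|| notify.sh` pattern is what turns a silently failing backup into an immediate alert.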
Typical Implementation Timeline
| Task | Timeline |
|---|---|
| Docker environment for local development | 2–3 days |
| CI/CD pipeline (GitLab CI / GitHub Actions) | 1–2 weeks |
| Staging environment | 3–5 days |
| Monitoring + alerting (Prometheus + Grafana) | 1–2 weeks |
| Centralized logging (ELK/Loki) | 1–2 weeks |
| Ansible server automation | 2–3 weeks |
| Comprehensive DevOps implementation | 4–8 weeks |
DevOps isn't a project with an end date — it's a transition from "uploaded via FTP and praying" to predictable processes. Every deploy is routine, every incident is an alert with context, every new developer gets `docker-compose up` instead of three days setting up their environment.







