Testing Projects on 1C-Bitrix
CIBlockElement::GetList and Pagination — A Bug That Lives for Years
A classic case: a website with a catalog using pagination through the bitrix:catalog.section component. The client complains — products are duplicated on the third page. You clear the component cache — seems fine. The next day, it's back. Turns out a custom sort order conflicts with the PAGEN_1 parameter, and with a certain filter combination, CIBlockElement::GetList returns the same IDs. These things are only caught through testing — not code review, not "eyeballing it."
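A regression check for this class of bug doesn't need the full Bitrix stack: collect the element IDs each page returns and assert that no ID appears twice. A minimal sketch in plain JavaScript — the page arrays stand in for whatever `CIBlockElement::GetList` returns per `PAGEN_1` value, and the function name is illustrative:

```javascript
// findDuplicateIds: given an array of pages, where each page is an array of
// element IDs returned for one PAGEN_1 value, return the IDs that appear
// more than once across the whole paginated result.
function findDuplicateIds(pages) {
  const seen = new Set();
  const dupes = new Set();
  for (const page of pages) {
    for (const id of page) {
      if (seen.has(id)) dupes.add(id);
      seen.add(id);
    }
  }
  return [...dupes];
}

// Healthy pagination: every ID is unique across pages.
const healthy = findDuplicateIds([[1, 2], [3, 4]]); // returns []
// The bug described above: ID 3 leaks onto a later page again.
const buggy = findDuplicateIds([[1, 2], [3, 4], [3, 5]]); // returns [3]
```

Wired into an E2E run, the same assertion walks the real pagination with each problematic filter/sort combination and fails the moment duplicates reappear.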
We build QA processes for 1C-Bitrix projects: manual functional testing, automated E2E, load testing, and acceptance testing.
Why Testing Is Critical on Bitrix
1C-Bitrix projects aren't landing pages. Under the hood — dozens of modules, integrations, and non-obvious dependencies:
- Business logic chains — you fix a discount calculation in `sale.discount`, and a promo code via `sale.basket.discount` stops applying. The discount module in Bitrix is one of the most fragile: priority rules, overlaps, loyalty programs. One change — a cascade of failures.
- 1C integration — data exchange via `catalog.import.1c` or REST. A mapping error in infoblock properties — and the website shows a product with no price or zero stock. Order desynchronization — lost sales.
- Core updates — `bitrix:main` is updated, but a custom component was using the deprecated method `CModule::IncludeModule` with non-standard parameters. Without regression testing, it's Russian roulette.
- Cross-browser issues — `bitrix:sale.order.ajax` renders forms differently in Safari and Chrome. The "Place Order" button on iPhone can end up off-screen.
Functional Testing
We verify every business scenario. Not just "works or doesn't," but all edge cases.
Catalog (components `catalog.section`, `catalog.element`):
- Smart filter `catalog.smart.filter`: all property combinations, reset, result counting. Especially filters by trade offers (SKU) — they break most often
- Sorting + pagination — that same duplication bug
- Comparison via `catalog.compare.list` — adding, removing, displaying differences
- Quick view — modal window, adding to cart from the modal
Cart and checkout (`sale.basket.basket`, `sale.order.ajax`):
- Adding from catalog, from product page, quick order
- Discounts: by quantity, by total, by coupon, by loyalty program. Discount overlap — a separate test case, at least 8 combinations
- Delivery calculation: handlers in `sale.delivery.services`, cost, timelines, pickup points on the map
- Payment: `sale.paysystem` — payment processing, decline handling, refunds
- Order creation: email via `main.mail.event`, CRM entry, transfer to 1C via `sale.export.1c`
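The "at least 8 combinations" for discount overlap is simple enumeration: three independent discount sources (quantity, order total, coupon) give 2³ = 8 on/off combinations, each needing its own expected price. A sketch of generating that test matrix — the source names are illustrative, not a Bitrix API:

```javascript
// Enumerate every on/off combination of independent discount sources.
// With 3 sources this yields 2^3 = 8 test cases; adding a loyalty
// program as a fourth source doubles the suite to 16.
function discountCombinations(sources) {
  const combos = [];
  for (let mask = 0; mask < 2 ** sources.length; mask++) {
    combos.push(sources.filter((_, i) => mask & (1 << i)));
  }
  return combos;
}

const combos = discountCombinations(['quantity', 'total', 'coupon']);
console.log(combos.length); // 8: from [] (no discounts) to all three at once
```

Each combination then gets a fixture cart and an expected total, so a change in discount priority rules fails a specific, named case instead of "checkout is wrong somewhere."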
Personal account (`sale.personal.section`):
- Registration, login, password recovery — including the edge case with Cyrillic email addresses
- Order history, reorder
- Subscriptions, loyalty program
Forms and search:
- `form.result.new` / `iblock.element.add.form` — submission, validation, file fields
- `search.page` — relevance, morphology, typo handling via `search.title`
Regression Testing
After every deployment — one question: did we break anything that was working?
- Smoke tests — homepage loads, catalog returns products, an order goes through to completion. 5 minutes, run after every deploy. If smoke fails — rollback, no questions asked.
- Regression suite — 40–80 test cases covering core scenarios. Before every release.
- Visual testing — screenshot comparison via Percy or Playwright. A button shifted 20px, a font changed after an update — the test shows the diff.
- Module checklists — structured lists for `sale`, `catalog`, `iblock`, `search`. Each module gets its own checklist.
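The smoke gate above can be expressed as a tiny runner: each check is a URL plus a predicate, and any failure means rollback. A sketch with the HTTP call injected so the runner can be exercised against a stub before pointing it at staging — URLs and check names are examples, not a specific tool's API:

```javascript
// Run smoke checks sequentially; httpGet is injected so the runner can be
// tested with a stub transport and later backed by a real HTTP client.
async function runSmoke(checks, httpGet) {
  const failures = [];
  for (const check of checks) {
    const res = await httpGet(check.url);
    if (!check.ok(res)) failures.push(check.name);
  }
  return { passed: failures.length === 0, failures };
}

const checks = [
  { name: 'homepage', url: '/', ok: (r) => r.status === 200 },
  { name: 'catalog', url: '/catalog/', ok: (r) => r.status === 200 && r.body.includes('product') },
];

// Stub transport: the catalog returns a 500, so the gate fails -> rollback.
const stub = async (url) =>
  url === '/catalog/' ? { status: 500, body: '' } : { status: 200, body: 'product' };

runSmoke(checks, stub).then((result) => {
  // result.passed === false, result.failures === ['catalog']
});
```

The point of the shape: "if smoke fails — rollback" becomes a single boolean the deploy pipeline can act on, rather than a human reading logs.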
Load Testing
The question isn't "will the site handle it," but "at how many concurrent users does `catalog.section` start returning 500s."
Load profile for a Bitrix-based store:
| Scenario | Share | Target Response | What Breaks First |
|---|---|---|---|
| Homepage | 20% | < 1 sec | Composite cache, if not configured |
| Catalog with filters | 30% | < 2 sec | MySQL — heavy JOINs on b_iblock_element_property |
| Product page | 25% | < 1.5 sec | Trade offer queries |
| Add to cart | 10% | < 1 sec | Locks on b_sale_basket table |
| Checkout | 5% | < 3 sec | Delivery handlers (external APIs) |
| Search | 10% | < 2 sec | b_search_content without indexes |
Tools:
- k6 — JavaScript scenarios, easy to model cart and checkout business logic
- Apache JMeter — a classic, suitable for complex scenarios with cookie-based authentication
- Yandex.Tank — real-time visualization, integration with Overload
Output: maximum RPS, response times by percentiles p50/p95/p99, bottlenecks (CPU, RAM, MySQL slow queries on b_iblock_element, file cache). Specific recommendations: which index to add, which query to rewrite using D7 ORM, where to enable composite cache.
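The p50/p95/p99 figures in such a report are just sorted response times read at fixed positions. A minimal sketch of the nearest-rank method — the sample latencies are made up, and real load tools may interpolate slightly differently:

```javascript
// Nearest-rank percentile: sort the samples, take the value at position
// ceil(p/100 * n) - 1. This is why p99 is so sensitive to outliers:
// a single slow MySQL query lands exactly in that tail.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

const latenciesMs = Array.from({ length: 100 }, (_, i) => i + 1); // 1..100 ms
console.log(percentile(latenciesMs, 50)); // 50
console.log(percentile(latenciesMs, 95)); // 95
console.log(percentile(latenciesMs, 99)); // 99
```

A healthy p50 with a bad p99 usually points at cache misses or lock contention, not at uniformly slow code — which is why the report breaks results down by percentile rather than averaging.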
Cross-Browser Testing
We test where actual buyers are. Analytics from the specific project's metrics matter more than industry-wide data.
Minimum set:
- Chrome (last 2 versions) — the bulk of traffic
- Safari on iOS — critical for mobile checkout; `sale.order.ajax` often behaves unpredictably
- Yandex Browser — significant share in Russia, Chromium-based rendering, but extension quirks exist
- Samsung Internet — mobile Android, often overlooked
Devices:
- Desktop: 1920x1080, 1366x768
- iPhone: 375x812, 390x844 — checkout must be verified
- Android: 360x800, 412x915
Tools: BrowserStack for real devices, Playwright for automation across Chromium/Firefox/WebKit.
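The browser and viewport lists combine into a cross-product test matrix; generating it from data keeps the suite in sync when a device is added. A sketch using the engines and a subset of the viewports named above (the helper is illustrative, not part of Playwright's API):

```javascript
// Build the cross product of rendering engines and viewports that an
// E2E run should cover; each entry maps to one browser/device project.
const engines = ['chromium', 'firefox', 'webkit'];
const viewports = [
  { name: 'desktop', width: 1920, height: 1080 },
  { name: 'iphone', width: 390, height: 844 },
  { name: 'android', width: 360, height: 800 },
];

function testMatrix(engines, viewports) {
  return engines.flatMap((engine) =>
    viewports.map((vp) => ({ engine, ...vp }))
  );
}

console.log(testMatrix(engines, viewports).length); // 9 engine/viewport pairs
```

In practice some pairs get pruned (WebKit on Android sizes, say) — a project decision, but pruning a generated matrix is safer than hand-maintaining nine near-identical configs.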
Automation
Playwright — the primary choice for E2E on Bitrix:
- Cross-browser: Chromium, Firefox, WebKit
- Parallel execution, automatic waits
- Works well with dynamic `sale.order.ajax` forms
- Mobile viewport and geolocation support
Cypress:
- Runs in the browser — more stable for SPA-like interfaces
- Excellent visual runner for debugging
- Limitation: Chromium-based browsers only
PHPUnit for custom code:
- Unit tests for custom Bitrix components and modules
- Business logic testing without frontend dependency
- CI/CD integration — GitLab CI, GitHub Actions
UAT — Acceptance Testing
Final review with the client on a staging environment with live data:
- Together we compile a list of critical scenarios — not 200 test cases, but 15–20 key buyer journeys
- Staging with a copy of the production database (anonymized personal data)
- Rapid bug logging — Jira/YouTrack, prioritization by severity
- Acceptance protocol — a document with results, sign-offs, readiness to launch
QA Process
Testing is embedded in development, not bolted on at the end:
- Requirements analysis — QA participates in task discussions, catches ambiguities. "Does the discount apply to the product or to the order?" — this question at the start saves two days of debugging
- Test cases before development — scenarios are ready before the first line of code
- Code review — checking for typical Bitrix mistakes: uncleared component cache, raw SQL queries instead of ORM, missing `$USER->IsAuthorized()` checks
- Functional → regression → deploy
- Post-release monitoring — errors in `bitrix/error.log`, metrics in analytics, alerts on 500s
Timelines
| Task | Timeline |
|---|---|
| Test plan | 2–3 days |
| Functional testing (mid-size store) | 3–5 days |
| Basic E2E test suite (Playwright) | 2–3 weeks |
| Load testing + report | 1–2 weeks |
| Cross-browser | 2–3 days |
| UAT support | 3–5 days |
| QA process from scratch | 3–4 weeks |
A bug in production isn't just the cost of fixing it. It's the orders lost while the bug is live. On a project doing 5M/month in revenue, a broken cart over the weekend means losses that no testing budget could exceed.