AI-Powered Product Description Generation for 1C-Bitrix
The task is concrete: take a dry set of attributes from a Bitrix infoblock, send them to a language model, and get back a readable, compelling text ready to publish on a product card — without copying competitor content, without manual copywriting for every SKU.
Gathering Product Data for the Prompt
Output quality is directly proportional to input quality. Collect as much structured information as possible:
```php
function buildProductContext(int $elementId): string {
    $element = CIBlockElement::GetByID($elementId)->GetNextElement();
    if (!$element) {
        return '';
    }
    $fields = $element->GetFields();
    $props  = $element->GetProperties();

    $context = "Product: {$fields['NAME']}\n";
    // getSectionName() is a project helper that resolves a section name by ID
    $context .= "Catalog section: " . getSectionName($fields['IBLOCK_SECTION_ID']) . "\n";

    foreach ($props as $prop) {
        if (!empty($prop['VALUE']) && $prop['CODE'] !== 'MORE_PHOTO') {
            // Multiple properties return an array of values
            $value = is_array($prop['~VALUE'])
                ? implode(', ', $prop['~VALUE'])
                : $prop['~VALUE'];
            $context .= "{$prop['NAME']}: {$value}\n";
        }
    }

    // Add price for context
    $price = CCatalogProduct::GetOptimalPrice($elementId);
    if (!empty($price['PRICE'])) {
        $context .= "Price: {$price['PRICE']['PRICE']} {$price['PRICE']['CURRENCY']}\n";
    }

    return $context;
}
```
The MORE_PHOTO property (product photos) and other purely technical properties are excluded: only semantically relevant data belongs in the prompt.
Multi-Level Prompt System
Not a single prompt for all products — separate templates for each product category. Electronics and children's clothing require fundamentally different tones and structures.
Prompt system with inheritance:
- Base prompt — general instructions on style, prohibited phrases, structure
- Category prompt — category-specific tone (technical language for electronics, emotional for lifestyle products)
- Infoblock prompt — store-specific requirements (brand voice)
Inheritance: if no category prompt is defined, the parent category prompt is used, then the base prompt.
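This fallback chain can be sketched as a small resolver. Assumptions: prompts are kept in plain arrays keyed by section ID and infoblock ID, and the final prompt concatenates all applicable levels; in a real project they would live in module settings or a dedicated infoblock.

```php
// Sketch of the prompt fallback chain: base prompt always applies,
// a category prompt is looked up by walking up the section tree,
// and an infoblock prompt is appended if defined.
// $sectionParents maps section ID => parent section ID (0 = root).
function resolvePrompt(
    int $sectionId,
    int $iblockId,
    string $basePrompt,
    array $categoryPrompts,
    array $sectionParents,
    array $iblockPrompts = []
): string {
    // Walk up the category tree until a prompt is found
    $categoryPart = '';
    $current = $sectionId;
    while ($current !== 0) {
        if (isset($categoryPrompts[$current])) {
            $categoryPart = $categoryPrompts[$current];
            break;
        }
        $current = $sectionParents[$current] ?? 0;
    }

    $parts = [$basePrompt];
    if ($categoryPart !== '') {
        $parts[] = $categoryPart;
    }
    if (isset($iblockPrompts[$iblockId])) {
        $parts[] = $iblockPrompts[$iblockId]; // brand voice
    }
    return implode("\n\n", $parts);
}
```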
Output Formatting and Structure
Asking the AI to return raw HTML is unreliable — the model may break the structure. It is more reliable to request structured JSON:
```json
{
  "preview_text": "Short description up to 100 words for listings",
  "detail_text": "Full description 200–350 words with HTML formatting",
  "bullet_points": ["key benefit 1", "key benefit 2"],
  "target_audience": "Who this product is for"
}
```
JSON mode is enabled in the OpenAI API via response_format: {"type": "json_object"}. Bullet points feed the "Benefits" block on the product card via a multiple-value infoblock property of type S (string).
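A minimal sketch of such a request over cURL, assuming the context string comes from a builder like buildProductContext; the model name matches the case study below, and the system prompt wording is illustrative:

```php
// Sketch: request structured JSON from the OpenAI Chat Completions API
// via cURL and decode the result into an associative array.
function generateDescription(string $productContext, string $apiKey): ?array {
    $payload = json_encode([
        'model' => 'gpt-4o-mini',
        'response_format' => ['type' => 'json_object'],
        'messages' => [
            ['role' => 'system', 'content' => 'Return a JSON object with keys '
                . 'preview_text, detail_text, bullet_points, target_audience.'],
            ['role' => 'user', 'content' => $productContext],
        ],
    ], JSON_UNESCAPED_UNICODE);

    $ch = curl_init('https://api.openai.com/v1/chat/completions');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_HTTPHEADER     => [
            'Content-Type: application/json',
            'Authorization: Bearer ' . $apiKey,
        ],
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_TIMEOUT        => 120,
    ]);
    $response = curl_exec($ch);
    curl_close($ch);
    if ($response === false) {
        return null; // transport error; caller decides on retry
    }

    $decoded = json_decode($response, true);
    $content = $decoded['choices'][0]['message']['content'] ?? null;
    return $content !== null ? json_decode($content, true) : null;
}
```

The decoded bullet_points array can then be written to the multiple property, for example via CIBlockElement::SetPropertyValuesEx($elementId, $iblockId, ['BULLETS' => $result['bullet_points']]), where BULLETS is a hypothetical property code.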
Batch Processing and Context Window
GPT-4o-mini has a context window of 128K tokens. Sending several products in one request reduces the overhead of repeating the system prompt for every product:
```
Describe the following 5 products. Return a JSON array of 5 objects...
[product 1]
[product 2]
...
```
A batch of 5 products saves ~30% of tokens on system instructions. However, if the request fails the entire batch is lost — implement retry with backoff and keep batch sizes to 3–5 products maximum.
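The batching and retry logic can be sketched as follows; callModel stands in for the actual API request (one call per batch, returning one result per product, in order), and the backoff schedule of 1s, 2s, 4s is an assumption:

```php
// Sketch: split element IDs into batches and retry a failed batch
// with exponential backoff before giving up on it.
function processBatches(
    array $elementIds,
    callable $callModel,
    int $batchSize = 5,
    int $maxRetries = 3
): array {
    $results = [];
    foreach (array_chunk($elementIds, $batchSize) as $batch) {
        $delay = 1; // seconds before the first retry
        for ($attempt = 0; $attempt <= $maxRetries; $attempt++) {
            try {
                // One request describes the whole batch; map results
                // back to element IDs by position.
                $results += array_combine($batch, $callModel($batch));
                break;
            } catch (\Throwable $e) {
                if ($attempt === $maxRetries) {
                    throw $e; // the whole batch is lost
                }
                sleep($delay);
                $delay *= 2; // exponential backoff: 1s, 2s, 4s...
            }
        }
    }
    return $results;
}
```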
A/B Testing Prompts
For highly competitive categories, testing different prompts is worthwhile:
- Group A: technical style (specifications → benefits)
- Group B: emotional style (lifestyle → technical details)
Bitrix supports A/B testing through the marketing module, but a simpler approach is to store the prompt variant in an element property and measure conversion through analytics goals.
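The simpler approach can be sketched as a deterministic 50/50 split by element ID, stored in an element property for later analytics breakdowns; PROMPT_VARIANT is a hypothetical property code:

```php
// Sketch: assign a prompt variant by element ID parity and persist it
// in an element property so analytics can segment by variant.
function assignVariant(int $elementId, int $iblockId): string {
    $variant = ($elementId % 2 === 0) ? 'A' : 'B'; // deterministic 50/50 split
    CIBlockElement::SetPropertyValuesEx(
        $elementId,
        $iblockId,
        ['PROMPT_VARIANT' => $variant]
    );
    return $variant;
}
```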
Case Study: Generating Descriptions for 28,000 SKUs
Task: home appliances, 3 categories (large, small, climate), each with different text requirements.
Implementation:
- 3 category prompts developed together with the marketing team over 2 days
- GPT-4o-mini for the bulk (80%), GPT-4o for the top 100 SKUs by revenue
- Batches of 3 products, 8 parallel workers
- Total generation time for 28,000 descriptions — 14 hours
- Cost: $42 on GPT-4o-mini + $18 on GPT-4o = $60 total
Result: organic traffic to product pages increased by 34% within 3 months of indexing.
Project Timeline
| Phase | Duration |
|---|---|
| Prompt system design, iterations | 2–4 days |
| Product context builder, infoblock integration | 1–2 days |
| Batch generator, queues, retry | 1–2 days |
| Quality control, moderation | 1 day |
| A/B tests, analytics | 2–3 days (optional) |
Total: 5–9 working days to the first production generation run.