# AI-Powered Content Generation for 1C-Bitrix
GPT integration into Bitrix's content pipeline solves a scale problem: writing unique texts for 50,000 products manually is not feasible, and template-based generation produces predictably weak results. AI generation creates diverse content from structured product data — with the right tone, length, and SEO optimization.
## What Can Be Automated with AI
- Product descriptions — unique text based on product attributes
- SEO tags — title, description, keywords
- Section headings — H1, H2 for category pages
- FAQ blocks — frequently asked questions for product cards
- Button text and microcopy — calls to action, hints
- Translations — when content exists in one language
## Integration with the OpenAI API
A minimal client for the Chat Completions API:
```php
class OpenAiClient
{
    private string $apiKey;
    private string $model = 'gpt-4o-mini';

    public function __construct(string $apiKey)
    {
        $this->apiKey = $apiKey;
    }

    public function generate(string $prompt, int $maxTokens = 500): string
    {
        $response = (new \GuzzleHttp\Client())->post(
            'https://api.openai.com/v1/chat/completions',
            [
                'headers' => [
                    'Authorization' => "Bearer {$this->apiKey}",
                    'Content-Type'  => 'application/json',
                ],
                'json' => [
                    'model'      => $this->model,
                    'messages'   => [['role' => 'user', 'content' => $prompt]],
                    'max_tokens' => $maxTokens,
                ],
            ]
        );

        $data = json_decode((string) $response->getBody(), true);

        // Guard against a malformed response instead of a fatal notice.
        return $data['choices'][0]['message']['content'] ?? '';
    }
}
```
Cost management: GPT-4o-mini costs ~$0.00015 per 1K input tokens and ~$0.0006 per 1K output tokens. One product description ≈ 200 prompt tokens + 300 response tokens, roughly $0.0002 per item, so 10,000 descriptions land around $2–5 once retries and regenerated items are included. GPT-4o is roughly 10× more expensive but delivers significantly better quality.
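The arithmetic above can be sketched as a small estimator (prices hardcoded as of this writing; check the current OpenAI pricing page before relying on them):

```php
// Back-of-the-envelope cost estimate for GPT-4o-mini:
// ~$0.15 per 1M input tokens, ~$0.60 per 1M output tokens.
function estimateCost(int $items, int $promptTokens, int $outputTokens): float
{
    $inputPrice  = 0.15 / 1_000_000;  // USD per input token
    $outputPrice = 0.60 / 1_000_000;  // USD per output token

    return $items * ($promptTokens * $inputPrice + $outputTokens * $outputPrice);
}

// 10,000 descriptions at 200 prompt + 300 response tokens each:
echo estimateCost(10000, 200, 300); // ≈ 2.1 USD, before retries and regenerations
```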
## Prompt Design
Output quality is determined by the prompt. Structure of an effective product description prompt:
```text
You are a copywriter for an electronics online store.
Write a product description in 2–3 paragraphs (150–200 words) for the following product:

Name: {NAME}
Brand: {BRAND}
Specifications: {SPECS_LIST}

Requirements:
- Style: professional, no hyperbole
- First paragraph — main benefit
- Second paragraph — technical specs in usage context
- Third paragraph — who this product is for
- No phrases like "high quality", "excellent choice"
- Language: English
```
Prompts are stored in a Highload block AiPrompts linked to the product category — different categories require different styles.
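Filling the placeholders is a plain string substitution. A minimal sketch, assuming the template has been loaded from the AiPrompts Highload block and the product fields come from the Bitrix element (the function and array keys here are illustrative):

```php
// Fill a prompt template with {NAME}, {BRAND}, {SPECS_LIST} placeholders
// from a product's fields.
function buildPrompt(string $template, array $product): string
{
    $specs = [];
    foreach ($product['SPECS'] as $name => $value) {
        $specs[] = "- {$name}: {$value}";
    }

    return strtr($template, [
        '{NAME}'       => $product['NAME'],
        '{BRAND}'      => $product['BRAND'],
        '{SPECS_LIST}' => implode("\n", $specs),
    ]);
}
```

`strtr` with an array replaces all placeholders in a single pass, which avoids one placeholder's replacement text accidentally being re-substituted by another.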
## Queue System and Rate Limiting
OpenAI enforces tier-dependent limits; for GPT-4o-mini they are on the order of 10,000 RPM and 10,000,000 TPM. Large catalogs require a queue:
```sql
CREATE TABLE ai_generation_queue (
    id          SERIAL PRIMARY KEY,
    element_id  INT NOT NULL,
    task_type   VARCHAR(50),  -- 'description', 'seo_title', 'faq'
    status      VARCHAR(20) DEFAULT 'pending',
    result      TEXT,
    tokens_used INT,
    error       TEXT,
    created_at  TIMESTAMP DEFAULT NOW()
);
```
The worker processes no more than 100 requests per minute, inserting pauses between batches.
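A minimal worker loop might look like the following sketch. The helper functions (`fetchPending`, `markDone`, `markFailed`) are hypothetical wrappers over the queue table above:

```php
$batchSize = 20;
$client    = new OpenAiClient(getenv('OPENAI_API_KEY'));

while (true) {
    // Hypothetical: SELECT ... WHERE status = 'pending' LIMIT 20
    $tasks = fetchPending($batchSize);
    if (!$tasks) {
        break; // queue drained
    }

    foreach ($tasks as $task) {
        try {
            $text = $client->generate($task['prompt']);
            markDone($task['id'], $text);              // status = 'done', store result
        } catch (\Throwable $e) {
            markFailed($task['id'], $e->getMessage()); // status = 'error'
        }
    }

    // 20 requests per batch, 12 s pause ≈ 100 requests per minute.
    sleep(12);
}
```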
## Quality Control and Moderation
AI can generate irrelevant or incorrect content. Quality control system:
Automated checks:
- Minimum text length (< 50 characters → error)
- Absence of prohibited words/phrases
- Hallucination check — mentions of attributes not passed in the prompt
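The automated checks above can be combined into one validator. A sketch with illustrative names; the numeric-attribute heuristic is a crude approximation of a hallucination check, not a complete one:

```php
// Returns a list of error codes; an empty array means the text passed.
function validateGenerated(string $text, array $promptAttributes): array
{
    $errors = [];

    // Minimum length check.
    if (mb_strlen($text) < 50) {
        $errors[] = 'too_short';
    }

    // Prohibited marketing phrases.
    foreach (['high quality', 'excellent choice'] as $banned) {
        if (mb_stripos($text, $banned) !== false) {
            $errors[] = "banned_phrase: {$banned}";
        }
    }

    // Crude hallucination heuristic: every number in the generated text
    // should also appear among the attributes passed in the prompt.
    preg_match_all('/\d+(?:\.\d+)?/', $text, $matches);
    $allowed = implode(' ', $promptAttributes);
    foreach ($matches[0] as $number) {
        if (strpos($allowed, $number) === false) {
            $errors[] = "unverified_number: {$number}";
        }
    }

    return $errors;
}
```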
Manual review flags: items with a low quality score (determined by a second AI request — prompt "Rate the quality of this description on a scale of 1–10 and give a reason") are flagged for manager review.
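The second-pass scoring described above might be sketched like this (`flagForReview` is a hypothetical call into the moderation queue):

```php
$client = new OpenAiClient(getenv('OPENAI_API_KEY'));

$answer = $client->generate(
    "Rate the quality of this description on a scale of 1-10 and give a reason:\n\n"
    . $description,
    100
);

// Pull the first number out of the model's answer as the score.
preg_match('/\d+/', $answer, $m);
$score = isset($m[0]) ? (int) $m[0] : 0;

if ($score < 7) {
    flagForReview($elementId, $score, $answer);
}
```

Parsing a free-text rating is fragile; requesting a JSON response (e.g. `{"score": 7, "reason": "..."}`) makes the second pass more reliable.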
## Project Timeline
| Phase | Duration |
|---|---|
| OpenAI/Anthropic API integration, rate limiter | 1–2 days |
| Category-specific prompt development (iterative) | 2–3 days |
| Queue system, workers | 1–2 days |
| Quality control, moderation | 1–2 days |
| Admin interface, cost statistics | 1 day |
Total: 6–10 working days. Prompt iteration continues for another 1–2 weeks after launch.