Integrating ChatGPT with Bitrix24
A manager spends 15 minutes drafting a commercial proposal, another 10 writing a follow-up email to a client after a meeting. A supervisor reads 50 lead descriptions daily and manually decides which to pass to the sales department. Support answers the same questions repeatedly in open channels. All of these are tasks that GPT models solve in seconds if connected correctly to Bitrix24.
How It Works Technically
The integration is built on: Bitrix24 REST API ↔ your server (middleware) ↔ OpenAI API. Bitrix24 does not call OpenAI directly — a server-side application sits between them, handling logic, forming prompts, and controlling token spending.
The middleware receives data from Bitrix24 (lead text, chat history, task description), forms a request to OpenAI API (POST /v1/chat/completions), gets the response, and sends the result back to Bitrix24 via REST API.
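The round trip described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not production middleware: the model name, the `max_tokens` value, and the helper names are ours, and only the request-building and response-parsing halves are shown (the actual HTTP send and the Bitrix24 write-back are omitted).

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_openai_request(system_prompt: str, user_text: str, api_key: str):
    """Form the POST /v1/chat/completions request for a single turn."""
    body = {
        "model": "gpt-4o-mini",          # illustrative; pick per task
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
        "max_tokens": 200,               # cap output tokens per request
    }
    # urllib.request.Request with a body defaults to POST
    return urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def extract_answer(openai_response: dict) -> str:
    """Pull the assistant's text out of a chat-completions response."""
    return openai_response["choices"][0]["message"]["content"]
```

The middleware would send this request with `urllib.request.urlopen` (or any HTTP client), then pass `extract_answer(...)` back to Bitrix24 via a REST call.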
Connection options:
- Via chatbot. Register a bot (`imbot.register`); a user writes to it, the bot forwards the message to OpenAI and returns the answer in the chat via `imbot.message.add`. The most visual option: an employee talks to GPT directly in Bitrix24 chat.
- Via business process. An activity (action) in the business process designer calls a webhook on the middleware. Runs automatically: on lead creation, on deal stage change, on schedule.
- Via CRM robot. A trigger on a funnel stage calls a webhook; GPT processes the deal data and writes the result to a custom field.
AI in CRM: Lead Qualification
One of the most effective scenarios. A lead enters Bitrix24 from a website form, email, or call. The middleware gets the lead data (`crm.lead.get`): name, company, comment, source. It forms a prompt:
```
You are a B2B sales manager. Rate this lead on a scale from 1 to 10.
Consider: company size, request specificity, budget availability.
Lead data: {name}, {comment}, {source}.
Respond in JSON format: {"score": N, "reason": "...", "recommended_action": "..."}
```
GPT returns a score. The middleware writes it to a custom lead field via `crm.lead.update` and puts the recommendation in a comment. If the score is above 7, a robot automatically converts the lead to a deal and assigns a responsible person.
Result: a supervisor doesn't manually sort 50 leads — they see a sorted list with scores and recommendations.
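The qualification step can be sketched as follows. This assumes the prompt above; the custom field code `UF_CRM_GPT_SCORE` is a hypothetical placeholder (your portal's field code will differ), and the Bitrix24 field names (`TITLE`, `COMMENTS`, `SOURCE_ID`) follow the standard lead schema.

```python
import json

PROMPT = (
    "You are a B2B sales manager. Rate this lead on a scale from 1 to 10.\n"
    "Consider: company size, request specificity, budget availability.\n"
    "Lead data: {name}, {comment}, {source}.\n"
    'Respond in JSON format: {{"score": N, "reason": "...", "recommended_action": "..."}}'
)

def build_prompt(lead: dict) -> str:
    """Substitute crm.lead.get fields into the qualification prompt."""
    return PROMPT.format(
        name=lead["TITLE"],
        comment=lead.get("COMMENTS", ""),
        source=lead.get("SOURCE_ID", ""),
    )

def parse_score(gpt_text: str) -> dict:
    """Parse the JSON object GPT was instructed to return."""
    return json.loads(gpt_text)

def lead_update_fields(result: dict, score_field: str = "UF_CRM_GPT_SCORE") -> dict:
    """Build the payload for crm.lead.update (score_field is illustrative)."""
    return {"fields": {score_field: result["score"]}}

def should_convert(result: dict) -> bool:
    """Threshold from the article: score > 7 triggers lead-to-deal conversion."""
    return result["score"] > 7
```

In production you would also handle the case where GPT returns malformed JSON, for example by retrying once with a stricter instruction.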
AI in Open Channels
ChatGPT works as a first line in open channels (Telegram, VK, website widget). Technically:
- Client writes to an open channel.
- The `ONIMCONNECTORMESSAGEADD` event is sent to the middleware.
- The middleware forms a prompt with context: chat history (the last N messages) and the company knowledge base (FAQ, product descriptions).
- GPT generates a response.
- The middleware sends the response via `imconnector.send.messages`.
Important note: GPT should not answer everything. We set boundaries via the system prompt: if a question is outside the bot's competence, the bot hands the conversation to a live operator. For this, the middleware calls `imopenlines.session.transfer`, specifying a queue or a specific employee.
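The routing decision can be sketched like this. The `[TRANSFER]` marker is an assumed convention: the system prompt would instruct GPT to emit it when a question is out of scope, and the middleware checks for it before replying.

```python
TRANSFER_MARKER = "[TRANSFER]"  # sentinel the system prompt asks GPT to emit

def build_messages(system_prompt: str, history: list, new_message: str, n: int = 10) -> list:
    """Assemble the chat-completions messages list from the last n turns."""
    msgs = [{"role": "system", "content": system_prompt}]
    msgs += history[-n:]  # each item: {"role": ..., "content": ...}
    msgs.append({"role": "user", "content": new_message})
    return msgs

def route_reply(gpt_reply: str):
    """Return ('send', text) to answer via imconnector.send.messages,
    or ('transfer', None) to call imopenlines.session.transfer instead."""
    if TRANSFER_MARKER in gpt_reply:
        return ("transfer", None)
    return ("send", gpt_reply)
```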
Text Generation in Deals and Tasks
Practical scenarios:
- Commercial proposal. The manager clicks a button in the deal card (a custom action via the `CRM_DEAL_DETAIL_ACTIVITY` placement). The middleware collects deal data, products, and the contact, then sends them to GPT with a prompt like "Draft a proposal for...". The result is a draft text in the deal activity.
- Follow-up email. Bot in chat: the manager writes brief meeting notes, and GPT forms a structured follow-up email.
- Task description. The manager voice-dictates the essence in chat → Whisper transcribes → GPT structures it into task format with a checklist → the bot creates the task via `tasks.task.add`.
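The final step of the voice-to-task chain can be sketched as below. The shape of GPT's structured output (`title`, `description`, `checklist`) is an assumption for illustration; in practice the system prompt would pin down this JSON schema.

```python
def task_add_payload(structured: dict, responsible_id: int) -> dict:
    """Build the fields argument for tasks.task.add from GPT's structured
    output (assumed keys: title, description, checklist)."""
    checklist = "\n".join(f"- {item}" for item in structured.get("checklist", []))
    description = structured["description"]
    if checklist:
        description += "\n\nChecklist:\n" + checklist
    return {
        "fields": {
            "TITLE": structured["title"],
            "DESCRIPTION": description,
            "RESPONSIBLE_ID": responsible_id,
        }
    }
```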
Token and Cost Management
OpenAI charges by tokens. One lead qualification request costs approximately 500–800 tokens (input + output). With 100 leads per day on the gpt-4o model, that is about $1–2 per day. Without controls, however, costs grow quickly.
What we do at the middleware level:
- Token limit per request: the `max_tokens` parameter in the API call. 200 output tokens are enough for lead qualification.
- Model selection by task. Lead qualification: gpt-4o-mini (cheaper, accurate enough). Proposal generation: gpt-4o (better text quality). Routine classification: gpt-3.5-turbo.
- Caching. If a lead with identical text has already been processed, the result is served from cache.
- Daily budget. The middleware tracks tokens spent per day. When the limit is reached, it stops automatic requests and notifies the administrator.
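The daily-budget guard can be sketched as a small counter. This is a single-process illustration; a real middleware would persist the counter (e.g. in Redis or a database) so it survives restarts. The `usage` dict is the standard `usage` object from an OpenAI chat-completions response.

```python
from datetime import date

class TokenBudget:
    """Tracks tokens spent per calendar day; blocks calls over the limit."""

    def __init__(self, daily_limit: int):
        self.daily_limit = daily_limit
        self.day = date.today()
        self.spent = 0

    def _roll(self):
        """Reset the counter when the calendar day changes."""
        if date.today() != self.day:
            self.day = date.today()
            self.spent = 0

    def allow(self) -> bool:
        """Check before making an automatic request."""
        self._roll()
        return self.spent < self.daily_limit

    def record(self, usage: dict):
        """Record the 'usage' object from an OpenAI response."""
        self._roll()
        self.spent += usage["total_tokens"]
```

When `allow()` returns `False`, the middleware would skip the automatic call and notify the administrator, as described above.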
Data Security
CRM data is transmitted to OpenAI servers. What to consider:
- Personal data. Before sending, middleware anonymizes data: removes names, phone numbers, emails from the prompt if not needed for the task. For lead qualification, company name and request text are enough.
- Opt-out from training. API requests (unlike ChatGPT via browser) are not used for OpenAI model training by default. But we include this in the DPA (Data Processing Agreement) with OpenAI.
- Middleware on your server. Data passes through your server — you control what is sent to OpenAI. No direct access for OpenAI to your Bitrix24 portal.
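A minimal sketch of the anonymization step, with two illustrative regexes for phones and emails. Real PII detection is considerably harder (names, addresses, national ID formats), so treat this as a starting point, not a complete filter.

```python
import re

# Illustrative patterns; tune for the phone formats your leads actually contain.
PHONE = re.compile(r"\+?\d[\d\s\-()]{7,}\d")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str) -> str:
    """Strip phones and emails before the prompt leaves your server."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    return text
```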
Implementation Timeline
| Scale | Includes | Timeline |
|---|---|---|
| Single scenario | Chatbot or lead qualification, basic prompts | 3–5 days |
| CRM complex | Qualification + text generation + open channels | 1–2 weeks |
| Full integration | All scenarios + custom prompts + admin panel + token spending analytics | 3–4 weeks |
What We Implement
- Middleware server for processing requests between Bitrix24 and OpenAI API
- Chatbot with GPT in Bitrix24 interface
- Automatic lead qualification with score recording in CRM
- AI assistant in open channels with operator transfer
- Text generation (proposals, emails, task descriptions) from deal cards
- Prompt customization for your business
- Token spending control: limits, model selection, caching