Setting up data exchange via Apache Kafka for 1C-Bitrix


Apache Kafka is a distributed event log, not a traditional message broker. Where RabbitMQ fits the task "deliver a message from A to B with a delivery guarantee", Kafka fits "store an event stream, let multiple consumers read it independently, and replay it from any point". For 1C-Bitrix, Kafka makes sense on large-scale projects with multiple event consumers.

When to Choose Kafka Over RabbitMQ

  • The event stream must be processed by multiple independent systems simultaneously (analytics, CRM, warehouse, marketing).
  • Replaying events from a past period is required (retention of several days or weeks).
  • Event volume reaches hundreds of thousands per minute.
  • Event sourcing: the event is the source of truth from which state is reconstructed.

For a typical online store with a few external systems and no replay requirements, RabbitMQ is simpler and less expensive.

Kafka Client for PHP

There is no official PHP client from Apache. Use arnaud-lb/php-rdkafka (bindings for librdkafka):

# Install librdkafka (Ubuntu)
apt-get install librdkafka-dev
# Install PHP extension
pecl install rdkafka
# PHP wrapper
cd /local && composer require arnaud-lb/php-rdkafka
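After installation, it is worth confirming that PHP actually sees the extension before writing any producer code. A minimal check script (a hypothetical helper, not part of php-rdkafka itself):

```php
<?php
// check_rdkafka.php — verify the rdkafka extension is available to the CLI SAPI
if (!extension_loaded('rdkafka')) {
    fwrite(STDERR, "rdkafka extension is not loaded — check php.ini / pecl install\n");
    exit(1);
}
// phpversion() with an extension name returns that extension's version string
echo 'rdkafka extension version: ' . phpversion('rdkafka') . PHP_EOL;
```

Run it with the same PHP binary your daemons will use (`php check_rdkafka.php`), since CLI and FPM may load different ini files.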

Producer: Publishing Events from 1C-Bitrix

class KafkaProducer {
    private \RdKafka\Producer $producer;

    public function __construct() {
        $conf = new \RdKafka\Conf();
        $conf->set('metadata.broker.list',
            COption::GetOptionString('site', 'kafka_brokers', 'kafka:9092'));
        $conf->set('security.protocol', 'PLAINTEXT');
        // For production with SSL:
        // $conf->set('security.protocol', 'SSL');
        // $conf->set('ssl.ca.location', '/etc/kafka/certs/ca-cert');

        $this->producer = new \RdKafka\Producer($conf);
    }

    public function publish(string $topicName, string $key, array $payload): void {
        $topic = $this->producer->newTopic($topicName);
        $topic->produce(
            \RD_KAFKA_PARTITION_UA, // let librdkafka pick the partition from the key
            0,
            json_encode($payload),
            $key // partition key — e.g. user_id to preserve per-user event order
        );
        $this->producer->flush(1000); // wait up to 1 second for delivery
    }
}

// Usage in event handlers
AddEventHandler('sale', 'OnSaleOrderSaved', function($order) {
    $kafka = new KafkaProducer();
    $kafka->publish('bitrix.orders', (string)$order->getUserId(), [
        'event'    => $order->isNew() ? 'order.created' : 'order.updated',
        'order_id' => $order->getId(),
        'status'   => $order->getField('STATUS_ID'),
        'total'    => $order->getPrice(),
        'ts'       => time(),
    ]);
});
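A `flush()` timeout alone does not tell you whether a specific message actually reached the broker. php-rdkafka exposes librdkafka's delivery reports via `Conf::setDrMsgCb()`; a sketch of a producer with per-message delivery confirmation (broker address and topic name are assumptions carried over from the examples above):

```php
<?php
// Sketch: producer with a delivery-report callback, so failed deliveries
// are detected per message instead of being silently dropped.
$conf = new \RdKafka\Conf();
$conf->set('metadata.broker.list', 'kafka:9092');
$conf->set('message.send.max.retries', '5'); // retry transient broker errors

// Called once per produced message, after the broker acks it or retries are exhausted
$conf->setDrMsgCb(function (\RdKafka\Producer $kafka, \RdKafka\Message $message) {
    if ($message->err !== \RD_KAFKA_RESP_ERR_NO_ERROR) {
        // Delivery failed — log the payload for later replay
        error_log('Kafka delivery failed: ' . rd_kafka_err2str($message->err));
    }
});

$producer = new \RdKafka\Producer($conf);
$topic = $producer->newTopic('bitrix.orders');
$topic->produce(\RD_KAFKA_PARTITION_UA, 0, json_encode(['event' => 'order.created']), '42');

$producer->poll(0);      // serve queued delivery callbacks
$producer->flush(10000); // block until outstanding messages are delivered or 10 s pass
```

In an event handler that fires on every order save, the callback is where you would divert failed events into a retry table instead of losing them.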

Consumer: Event Consumer

The consumer runs as a separate daemon (not in the 1C-Bitrix context — in the PHP-CLI context):

// kafka_consumer.php
$conf = new \RdKafka\Conf();
$conf->set('group.id', 'crm-sync-group');
$conf->set('metadata.broker.list', 'kafka:9092');
$conf->set('auto.offset.reset', 'latest'); // start from the end when no committed offset exists
$conf->set('enable.auto.commit', 'false'); // commit offsets manually, only after successful processing

$consumer = new \RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['bitrix.orders', 'bitrix.products']);

while (true) {
    $message = $consumer->consume(5000); // 5-second timeout
    if ($message->err === \RD_KAFKA_RESP_ERR_NO_ERROR) {
        $payload = json_decode($message->payload, true);
        try {
            EventDispatcher::dispatch($message->topic_name, $payload); // the application's own handler
            $consumer->commit($message); // commit only after successful processing
        } catch (\Throwable $e) {
            // Log the error and do not commit — the message will be re-read on restart
            error_log("Kafka consumer error: " . $e->getMessage());
        }
    }
}
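A `while (true)` loop has no clean way to stop under systemd or supervisor. One option, assuming the pcntl extension is available in the CLI build, is to intercept SIGTERM/SIGINT and finish the current message before exiting (broker address and topic are the same assumptions as above):

```php
<?php
// Sketch: graceful shutdown for the consumer daemon via POSIX signals,
// so the process finishes the in-flight message and leaves the group cleanly.
$conf = new \RdKafka\Conf();
$conf->set('group.id', 'crm-sync-group');
$conf->set('metadata.broker.list', 'kafka:9092');

$consumer = new \RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['bitrix.orders']);

$running = true;
pcntl_signal(SIGTERM, function () use (&$running) { $running = false; });
pcntl_signal(SIGINT,  function () use (&$running) { $running = false; });

while ($running) {
    $message = $consumer->consume(5000);
    // ... handle $message as in the main loop above ...
    pcntl_signal_dispatch(); // run pending signal handlers between polls
}

$consumer->close(); // leave the group cleanly so a rebalance happens immediately
```

Without `close()`, the broker waits out the session timeout before reassigning the dead consumer's partitions, which stalls processing for that period.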

Topics and Partitions

Topic           | Partition Key     | Consumers
bitrix.orders   | user_id           | CRM, warehouse, analytics
bitrix.products | iblock_element_id | Search index, recommendations
bitrix.users    | user_id           | CDP, email marketing
bitrix.carts    | user_id           | Abandoned cart analytics

Number of partitions = maximum consumer parallelism. Start with 3–6 partitions per topic.

Kafka Monitoring

Consumer lag is the key metric: the difference between the last published offset and the last offset a consumer has committed. Growing lag means the consumer cannot keep up. Monitor it via Kafka UI (Provectus) or CMAK, with alerts sent to Telegram via Alertmanager.
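Lag can also be checked directly from PHP, which is handy for a cron-based health check before a full monitoring stack is in place. A sketch using php-rdkafka's `getCommittedOffsets()` and `queryWatermarkOffsets()` (group id, topic, and partition list are assumptions matching the consumer above):

```php
<?php
// Sketch: computing consumer lag per partition from PHP.
$conf = new \RdKafka\Conf();
$conf->set('group.id', 'crm-sync-group');
$conf->set('metadata.broker.list', 'kafka:9092');
$consumer = new \RdKafka\KafkaConsumer($conf);

$partitions = [new \RdKafka\TopicPartition('bitrix.orders', 0)];
foreach ($consumer->getCommittedOffsets($partitions, 10000) as $tp) {
    $low = $high = 0;
    // $high is the offset the next produced message will get (the "end" watermark)
    $consumer->queryWatermarkOffsets($tp->getTopic(), $tp->getPartition(), $low, $high, 10000);
    // getOffset() is negative (RD_KAFKA_OFFSET_INVALID) when nothing was committed yet
    $committed = max($tp->getOffset(), 0);
    printf("%s[%d] lag: %d\n", $tp->getTopic(), $tp->getPartition(), $high - $committed);
}
```

A lag that only ever grows is the signal to add partitions and consumer instances; a lag that spikes and drains is usually just bursty traffic.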

Setting up Kafka for 1C-Bitrix with two or three topics and a test consumer takes 2–3 working days for infrastructure and 1 day for the 1C-Bitrix integration.