Setting up Kafka topics and partitions
Partition count and replication factor are the two parameters that are hardest to change after a topic is created. Decreasing the number of partitions is impossible without recreating the topic entirely, so getting the configuration right at creation time matters.
How partitions work
A partition is the unit of parallelism in Kafka. Within a consumer group, each partition is assigned to at most one consumer, while a single consumer may handle several partitions. If a topic has 6 partitions, at most 6 consumers in a group can read in parallel; extra consumers sit idle.
Writes within a single partition are strictly ordered. Kafka guarantees ordering only within a partition, never globally across the topic. This matters for events that must be processed sequentially, such as all actions of one user.
Topic: user-events (6 partitions)
Partition 0: event1(user:101), event4(user:205), ...
Partition 1: event2(user:102), event5(user:101), ... ← user:101 split across partitions!
Partition 2: event3(user:103), ...
To ensure all events of one user go to one partition, use a message key:
Write with key user_id → hash(user_id) % num_partitions → always the same partition
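With the Java producer, the key is simply the second constructor argument of ProducerRecord. A minimal sketch (the payload strings are placeholders; the serializer choice is illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // Both records carry key "user:101", so both land on the same partition
    producer.send(new ProducerRecord<>("user-events", "user:101", "login"));
    producer.send(new ProducerRecord<>("user-events", "user:101", "add-to-cart"));
}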
Calculating partition count
Practical rule: num_partitions = max(throughput_target / throughput_per_partition, num_consumers_target).
Typical throughput of one partition: 10–50 MB/s for writes (depends on hardware and broker configuration).
Example: the target is 200 MB/s with peaks of 400 MB/s, plus the ability to scale to 20 consumers. At roughly 20 MB/s per partition, the peak alone needs 400 / 20 = 20 partitions; rounding up to 24 also covers the consumer target and divides evenly by 6, 8, and 12 for convenient scaling.
Too many partitions is also bad: each partition costs file handles and buffer memory on the broker, and a large partition count overloads the controller during leader elections.
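The sizing rule translates into a one-line calculation. A minimal sketch, where the per-partition throughput is an assumed measurement rather than a constant:

// num_partitions = max(peak throughput / per-partition throughput, target consumers)
static int partitionCount(double peakMBps, double perPartitionMBps, int targetConsumers) {
    int byThroughput = (int) Math.ceil(peakMBps / perPartitionMBps);
    return Math.max(byThroughput, targetConsumers);
}

// partitionCount(400, 20, 20) == 20; rounded up to 24 for divisibility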
Creating topics via kafka-topics.sh
# Basic topic for user events: 7-day retention (604800000 ms),
# 10 GiB per partition (retention.bytes), 1 MiB max message size
kafka-topics.sh --bootstrap-server kafka-1:9092 \
  --create \
  --topic user-events \
  --partitions 12 \
  --replication-factor 3 \
  --config retention.ms=604800000 \
  --config retention.bytes=10737418240 \
  --config compression.type=lz4 \
  --config min.insync.replicas=2 \
  --config max.message.bytes=1048576
# Compact topic: stores the last state per key
# (1-hour segments, aggressive compaction, tombstones kept 24 hours)
kafka-topics.sh --bootstrap-server kafka-1:9092 \
  --create \
  --topic user-profiles \
  --partitions 24 \
  --replication-factor 3 \
  --config cleanup.policy=compact \
  --config min.cleanable.dirty.ratio=0.1 \
  --config segment.ms=3600000 \
  --config delete.retention.ms=86400000
# High-priority queue with short retention (1 hour) and small messages
kafka-topics.sh --bootstrap-server kafka-1:9092 \
  --create \
  --topic order-processing-priority \
  --partitions 6 \
  --replication-factor 3 \
  --config retention.ms=3600000 \
  --config max.message.bytes=102400
Management via Admin API (Java/Kotlin)
Creating topics programmatically is the right approach for applications that create topics dynamically:
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.CreateTopicsResult;
import org.apache.kafka.clients.admin.NewTopic;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092,kafka-2:9092,kafka-3:9092");
props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
props.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 10000);

try (AdminClient admin = AdminClient.create(props)) {
    // Main topic: 12 partitions, replication factor 3
    NewTopic userEvents = new NewTopic("user-events", 12, (short) 3);
    userEvents.configs(Map.of(
        "retention.ms", "604800000",   // 7 days
        "compression.type", "lz4",
        "min.insync.replicas", "2"
    ));

    // Dead-letter queue: fewer partitions, longer retention
    NewTopic deadLetter = new NewTopic("user-events-dlq", 3, (short) 3);
    deadLetter.configs(Map.of(
        "retention.ms", "2592000000",  // 30 days
        "retention.bytes", "-1"        // no size-based limit
    ));

    CreateTopicsResult result = admin.createTopics(List.of(userEvents, deadLetter));
    result.all().get(30, TimeUnit.SECONDS); // wait for brokers to confirm creation
}
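In practice, creation should usually be idempotent: if the topic already exists, the future above fails with an ExecutionException wrapping TopicExistsException. A sketch of handling that case, reusing `result` from the block above:

import java.util.concurrent.ExecutionException;
import org.apache.kafka.common.errors.TopicExistsException;

try {
    result.all().get(30, TimeUnit.SECONDS);
} catch (ExecutionException e) {
    if (!(e.getCause() instanceof TopicExistsException)) {
        throw e; // a real failure; "already exists" is fine for startup code
    }
}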
Modifying existing topic configuration
# Increase retention to 14 days
kafka-configs.sh --bootstrap-server kafka-1:9092 \
  --alter \
  --entity-type topics \
  --entity-name user-events \
  --add-config retention.ms=1209600000
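The same change can be made from the Admin API via incrementalAlterConfigs. A sketch, assuming the `props` from the Admin API section above:

import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "user-events");
AlterConfigOp setRetention = new AlterConfigOp(
    new ConfigEntry("retention.ms", "1209600000"), // 14 days
    AlterConfigOp.OpType.SET);

try (AdminClient admin = AdminClient.create(props)) {
    admin.incrementalAlterConfigs(Map.of(topic, List.of(setRetention)))
         .all().get();
}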
# Add partitions (increase only!)
kafka-topics.sh --bootstrap-server kafka-1:9092 \
  --alter \
  --topic user-events \
  --partitions 24

# Warning: adding partitions breaks ordering for keyed messages.
# New messages are routed by hash(key) % 24 instead of hash(key) % 12,
# so a key's new events may land on a different partition than its
# old ones; data already written stays where it is.
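To see why, you can reproduce the default partitioner's keyed routing with Utils.murmur2 and Utils.toPositive from kafka-clients (org.apache.kafka.common.utils.Utils). A sketch:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

// Keyed routing used by the default partitioner: murmur2(key) mod partitions
static int partitionFor(String key, int numPartitions) {
    byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
    return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
}

// partitionFor("user:101", 12) and partitionFor("user:101", 24) can differ,
// which is exactly how adding partitions breaks per-key ordering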
# View topic configuration
kafka-configs.sh --bootstrap-server kafka-1:9092 \
  --describe \
  --entity-type topics \
  --entity-name user-events
Managing partition leaders
Uneven leader distribution across brokers causes hot nodes:
# Check leader distribution
kafka-topics.sh --bootstrap-server kafka-1:9092 \
  --describe --topic user-events

# Preferred replica rebalancing across all partitions
kafka-leader-election.sh --bootstrap-server kafka-1:9092 \
  --election-type preferred \
  --all-topic-partitions
# Or for specific partitions via a JSON file
cat > election.json << 'EOF'
{
  "partitions": [
    {"topic": "user-events", "partition": 0},
    {"topic": "user-events", "partition": 1}
  ]
}
EOF
kafka-leader-election.sh --bootstrap-server kafka-1:9092 \
  --election-type preferred \
  --path-to-json-file election.json
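Preferred-leader election is also exposed through the Admin API. A sketch, again assuming the `props` from the Admin API section:

import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

try (AdminClient admin = AdminClient.create(props)) {
    Set<TopicPartition> partitions = Set.of(
        new TopicPartition("user-events", 0),
        new TopicPartition("user-events", 1));
    // Ask the controller to move leadership back to the preferred replicas
    admin.electLeaders(ElectionType.PREFERRED, partitions)
         .partitions().get();
}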
Monitoring partitions
# Consumer lag: how far the group's committed offsets trail the log end
kafka-consumer-groups.sh --bootstrap-server kafka-1:9092 \
  --describe --group my-consumer-group
# Output columns: TOPIC / PARTITION / CURRENT-OFFSET / LOG-END-OFFSET / LAG
# Sustained total lag above 10000 on critical topics is a reason to alert
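The same check can run programmatically for alerting. A sketch, assuming an AdminClient built as in the Admin API section:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Total group lag: sum over partitions of (log-end offset - committed offset)
static long totalLag(AdminClient admin, String groupId) throws Exception {
    Map<TopicPartition, OffsetAndMetadata> committed =
        admin.listConsumerGroupOffsets(groupId)
             .partitionsToOffsetAndMetadata().get();

    Map<TopicPartition, OffsetSpec> latest = new HashMap<>();
    committed.keySet().forEach(tp -> latest.put(tp, OffsetSpec.latest()));
    Map<TopicPartition, ListOffsetsResultInfo> ends =
        admin.listOffsets(latest).all().get();

    return committed.entrySet().stream()
        .mapToLong(e -> ends.get(e.getKey()).offset() - e.getValue().offset())
        .sum();
}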
# Offset reset (to re-read the topic from the beginning);
# the group must have no active members for the reset to apply
kafka-consumer-groups.sh --bootstrap-server kafka-1:9092 \
  --group my-consumer-group \
  --topic user-events \
  --reset-offsets --to-earliest \
  --execute
Typical configurations by data type
| Topic | Partitions | Replication | Cleanup | Retention |
|---|---|---|---|---|
| Transactions | 12–24 | 3 (min.isr=2) | delete | 7–30 days |
| Audit log | 6–12 | 3 (min.isr=2) | delete | 90–365 days |
| Profiles (CDC) | 24–48 | 3 | compact | unlimited |
| Metrics | 12 | 2 | delete | 24–48 hours |
| Notifications | 6 | 3 | delete | 1–3 days |
Timeline
Day 1: analyze requirements (throughput, number of consumers, ordering requirements, retention) and design the topic schema.
Day 2: create topics, configure ACLs (if authentication is enabled), and test producer/consumer with correct keys.
Day 3: set up consumer lag monitoring and alerts, and document the schema for the team.