Example: Self-Balancing when Cluster Nodes Change

This document shows how to use the Kafka CLI tools to verify that AutoMQ automatically reassigns partitions and rebalances data when cluster nodes change. The CLI tools are run from the Docker image provided by AutoMQ. The walkthrough consists of two steps:

  1. Create a topic with 16 partitions and send a balanced load.

  2. Stop and restart a broker, and observe partitions being automatically reassigned across the remaining brokers.

This automatic data balancing is a built-in feature of AutoMQ that keeps data evenly distributed across the cluster. Monitoring the partition distribution and broker load validates that automatic partition balancing works as expected.
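
For example, a quick way to check how partition leaders are spread across brokers is to count leaders per broker in the describe output. The following one-liner is a minimal sketch; it assumes the Docker network, image, bootstrap address, and topic name used in the rest of this guide:

docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-topics.sh --topic self-balancing-topic --describe --bootstrap-server server1:9092,server2:9092,server3:9092" | grep -o 'Leader: [0-9]*' | sort | uniq -c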

Prerequisites

Before testing automatic partition rebalancing, ensure the following conditions are met:

Complete the installation and deployment of the AutoMQ cluster. You can use either of the following methods:

If deploying the cluster via Deploy Multi-Nodes Cluster on Linux▸ or Deploy Multi-Nodes Cluster on Kubernetes▸, ensure that, when starting the Controller, you set autobalancer.controller.enable to true to enable automatic data rebalancing.
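
For example, with a properties-file based deployment this is a single line in the controller's configuration. The file name and location below are assumptions that depend on your deployment; the property name is as documented above:

# In the controller's server.properties (location varies by deployment):
autobalancer.controller.enable=true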

Additionally, the host running the test program must meet the following conditions:

  • Linux/Mac/Windows Subsystem for Linux

  • Docker

Experience Partition Reassignment Triggered by Cluster Node Changes

If you deployed the AutoMQ cluster according to Deploy Multi-Nodes Test Cluster on Docker▸, the cluster bootstrap address will look like "server1:9092,server2:9092,server3:9092", and the cluster will be attached to the "automq_net" Docker network.

Replace the bootstrap-server address in the commands below with the actual address of your deployment.
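
As a quick check that the address you substituted is reachable, you can list the cluster's topics. This assumes the same Docker network and image used throughout this guide:

docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-topics.sh --list --bootstrap-server server1:9092,server2:9092,server3:9092"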

Create Topic


docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-topics.sh --partitions 16 --create --topic self-balancing-topic --bootstrap-server server1:9092,server2:9092,server3:9092"

View Partition Distribution


docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-topics.sh --topic self-balancing-topic --describe --bootstrap-server server1:9092,server2:9092,server3:9092"


Topic: self-balancing-topic TopicId: AjoAB22YRRq7w6MdtZ4hDA PartitionCount: 16 ReplicationFactor: 1 Configs: min.insync.replicas=1,elasticstream.replication.factor=1,segment.bytes=1073741824
Topic: self-balancing-topic Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 2 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 3 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 4 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 5 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 6 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 7 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 8 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 9 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 10 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 11 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 12 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 13 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 14 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 15 Leader: 2 Replicas: 2 Isr: 2

Start the Producer


docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-producer-perf-test.sh --topic self-balancing-topic --num-records=1024000 --throughput 5120 --record-size 1024 --producer-props bootstrap.servers=server1:9092,server2:9092,server3:9092 linger.ms=100 batch.size=524288 buffer.memory=134217728 max.request.size=67108864"

Start the Consumer


docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-consumer-perf-test.sh --topic self-balancing-topic --show-detailed-stats --timeout 300000 --messages=1024000 --reporting-interval 1000 --bootstrap-server=server1:9092,server2:9092,server3:9092"

Stop the Broker

Stop one broker to trigger reassignment of its partitions to the remaining nodes. After stopping it, you can observe how the producer and consumer recover.


docker stop automq-server3
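
Optionally, confirm which brokers are still running. The container name filter below assumes the Docker-based multi-node deployment referenced above:

docker ps --filter "name=automq-server"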

After the broker stops, you can see warnings like the following in the producer logs:


[2024-04-29 05:00:03,436] WARN [Producer clientId=perf-producer-client] Got error produce response with correlation id 49732 on topic-partition self-balancing-topic-7, retrying (2147483641 attempts left). Error: NOT_LEADER_OR_FOLLOWER (org.apache.kafka.clients.producer.internals.Sender)
[2024-04-29 05:00:03,438] WARN [Producer clientId=perf-producer-client] Received invalid metadata error in produce request on partition self-balancing-topic-7 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)

After a few seconds, production and consumption return to normal:


[2024-05-07 11:56:08,920] WARN [Producer clientId=perf-producer-client] Received invalid metadata error in produce request on partition self-balancing-topic-3 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2024-05-07 11:56:08,920] WARN [Producer clientId=perf-producer-client] Got error produce response with correlation id 42141 on topic-partition self-balancing-topic-3, retrying (2147483646 attempts left). Error: NOT_LEADER_OR_FOLLOWER (org.apache.kafka.clients.producer.internals.Sender)
[2024-05-07 11:56:08,920] WARN [Producer clientId=perf-producer-client] Received invalid metadata error in produce request on partition self-balancing-topic-3 due to org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests intended only for the leader, this error indicates that the broker is not the current leader. For requests intended for any replica, this error indicates that the broker is not a replica of the topic partition.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2024-05-07 11:56:08,588] 25693 records sent, 5138.6 records/sec (5.02 MB/sec), 8.9 ms avg latency, 1246.0 ms max latency.
[2024-05-07 11:56:13,589] 25607 records sent, 5120.4 records/sec (5.00 MB/sec), 1.8 ms avg latency, 44.0 ms max latency.
[2024-05-07 11:56:18,591] 25621 records sent, 5121.1 records/sec (5.00 MB/sec), 1.6 ms avg latency, 10.0 ms max latency.

Re-examine Partition Distribution

After the producer resumes writing, re-examine the partition distribution: all partitions are now located on broker 1. AutoMQ promptly reassigns partitions away from the stopped node and rebalances the traffic.


docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-topics.sh --topic self-balancing-topic --describe --bootstrap-server server1:9092,server2:9092,server3:9092"


Topic: self-balancing-topic TopicId: AjoAB22YRRq7w6MdtZ4hDA PartitionCount: 16 ReplicationFactor: 1 Configs: min.insync.replicas=1,elasticstream.replication.factor=1,segment.bytes=1073741824
Topic: self-balancing-topic Partition: 0 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 2 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 3 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 4 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 5 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 6 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 7 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 8 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 9 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 10 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 11 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 12 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 13 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 14 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 15 Leader: 1 Replicas: 1 Isr: 1

Restart the Broker

Restart automq-server3 to trigger another round of automatic partition reassignment. After several seconds of retries, the producer and consumer continue operating.


docker start automq-server3

At this stage, recheck the partition distribution with the same describe command, and you will find that the partitions have been automatically rebalanced across both brokers.
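
docker run --network automq_net automqinc/automq:1.5.0 /bin/bash -c "/opt/kafka/kafka/bin/kafka-topics.sh --topic self-balancing-topic --describe --bootstrap-server server1:9092,server2:9092,server3:9092"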


Topic: self-balancing-topic TopicId: AjoAB22YRRq7w6MdtZ4hDA PartitionCount: 16 ReplicationFactor: 1 Configs: min.insync.replicas=1,elasticstream.replication.factor=1,segment.bytes=1073741824
Topic: self-balancing-topic Partition: 0 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 1 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 2 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 3 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 4 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 5 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 6 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 7 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 8 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 9 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 10 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 11 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 12 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 13 Leader: 2 Replicas: 2 Isr: 2
Topic: self-balancing-topic Partition: 14 Leader: 1 Replicas: 1 Isr: 1
Topic: self-balancing-topic Partition: 15 Leader: 1 Replicas: 1 Isr: 1