
Deploy a Multi-Node Cluster on Linux

This document outlines the process for deploying a multi-node AutoMQ cluster on Linux hosts within a public cloud environment, utilizing object storage services offered by cloud providers. In this development setup, users can test cluster-related features such as partition reassignment and automatic data balancing.

Tip

For those interested in deploying AutoMQ in a private data center, Overview▸ provides insights into using software like MinIO, Ceph, and CubeFS to deliver object storage services.

Beyond the Linux host deployment solution, users can explore additional deployment options outlined in the following documents to validate cluster-related features.

Tip

Deploying and tuning AutoMQ for production environments can be quite intricate. You can contact the AutoMQ team for guidance and best practices through this form.

Moreover, to completely bypass the installation and deployment process, you can explore AutoMQ's fully managed cloud services via the link provided. A free 2-week trial is available on all cloud marketplaces.

Prerequisites

This document provides an example of deploying a 5-node AutoMQ cluster, where 3 nodes serve as both Controller and Broker, and the remaining 2 nodes run as Brokers only.

Ensure that the following requirements are verified beforehand:

  • 5 Linux hosts are recommended. Use virtual machines optimized for network performance, with at least 2 cores and 16 GB of memory. Ensure the system disk has at least 20 GB of storage and the data volume has at least 20 GB (the data volume is required only on Controller nodes). See the example below:

    Role                  IP            Node ID   System Volume   Data Volume
    Controller + Broker   192.168.0.1   0         EBS 20GB        EBS 20GB
    Controller + Broker   192.168.0.2   1         EBS 20GB        EBS 20GB
    Controller + Broker   192.168.0.3   2         EBS 20GB        EBS 20GB
    Broker                192.168.0.4   1000      EBS 20GB        Not Applicable
    Broker                192.168.0.5   1001      EBS 20GB        Not Applicable
  • Download the binary installation package for installing AutoMQ. Refer to Software Artifact▸.

  • Create two custom-named object storage buckets; this document uses automq-data and automq-ops as examples.

  • Create an IAM user and generate an Access Key ID and Access Key Secret for it. Then grant this IAM user full read and write permissions on the buckets created above (see the example after this list).
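
If you are running on AWS, the prerequisites above can be created with the AWS CLI as sketched below. The user name automq-test and the region us-east-1 are assumptions for this example; the bucket names match the examples above and must be globally unique.

# Create the example data and ops buckets (names are placeholders and must be globally unique).
aws s3api create-bucket --bucket automq-data --region us-east-1
aws s3api create-bucket --bucket automq-ops --region us-east-1

# Create an IAM user for the cluster and generate an Access Key pair.
aws iam create-user --user-name automq-test
aws iam create-access-key --user-name automq-test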

Tip

AutoMQ relies on Kafka KRaft components to maintain cluster metadata; therefore, each cluster in a production scenario needs to deploy at least 3 nodes (running both Controller and Broker).

For more detailed IAM and S3 configuration instructions, refer to the AWS documentation via the Policy and Endpoint & Region links.

Info

In production environments, it is advised to scope the IAM Policy to the specific buckets to avoid granting unintended access.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
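
For reference, the sketch below attaches a bucket-scoped version of this policy to the IAM user with the AWS CLI, following the scoping advice above. The user name automq-test and the policy name automq-s3-access are assumptions for this example; the bucket names are the examples from the prerequisites.

# Attach an inline policy restricted to the example buckets.
aws iam put-user-policy \
  --user-name automq-test \
  --policy-name automq-s3-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject",
          "s3:GetObject",
          "s3:AbortMultipartUpload",
          "s3:DeleteObject",
          "s3:ListBucket"
        ],
        "Resource": [
          "arn:aws:s3:::automq-data",
          "arn:aws:s3:::automq-data/*",
          "arn:aws:s3:::automq-ops",
          "arn:aws:s3:::automq-ops/*"
        ]
      }
    ]
  }'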

Installing and Starting the AutoMQ Cluster

Step 1: Create a Cluster Deployment Project

AutoMQ provides the automq-cli.sh tool for managing AutoMQ clusters. Running automq-cli.sh cluster create [project] automatically creates a cluster configuration template at clusters/[project]/topo.yaml under the current directory.


bin/automq-cli.sh cluster create poc

Sample execution results are shown below:


Success create AutoMQ cluster project: poc
========================================================
Please follow the steps to deploy AutoMQ cluster:
1. Modify the cluster topology config clusters/poc/topo.yaml to fit your needs
2. Run ./bin/automq-cli.sh cluster deploy --dry-run clusters/poc , to deploy the AutoMQ cluster

Step 2: Edit the Cluster Configuration Template

Edit the configuration template generated in Step 1 for the cluster. An example of the configuration template is as follows:


global:
  clusterId: ''
  # Bucket URI Pattern: 0@s3://$bucket?region=$region&endpoint=$endpoint
  # Bucket URI Example:
  #   AWS: 0@s3://xxx_bucket?region=us-east-1
  #   OCI: 0@s3://xxx_bucket?region=us-ashburn-1&endpoint=https://xxx_namespace.compat.objectstorage.us-ashburn-1.oraclecloud.com&pathStyle=true
  config: |
    s3.data.buckets=0@s3://xxx_bucket?region=us-east-1
    s3.ops.buckets=1@s3://xxx_bucket?region=us-east-1
    s3.wal.path=0@s3://xxx_bucket?region=us-east-1
    log.dirs=/root/kraft-logs
  envs:
    - name: KAFKA_S3_ACCESS_KEY
      value: 'xxxxx'
    - name: KAFKA_S3_SECRET_KEY
      value: 'xxxxx'
controllers:
  # By default, Controller nodes are combined nodes that run both the Controller and Broker roles.
  # The default Controller port is 9093 and the default Broker port is 9092.
  - host: 192.168.0.1
    nodeId: 0
  - host: 192.168.0.2
    nodeId: 1
  - host: 192.168.0.3
    nodeId: 2
brokers:
  - host: 192.168.0.4
    nodeId: 1000
  - host: 192.168.0.5
    nodeId: 1001

  • global.clusterId : A randomly generated unique ID that does not require modification.

  • global.config : Custom incremental configuration for all nodes in the cluster. You must change s3.data.buckets, s3.ops.buckets, and s3.wal.path to reflect actual values. You may additionally add new configuration items by separating them with line breaks.

  • global.envs : Environment variables for the nodes. You must replace the values of KAFKA_S3_ACCESS_KEY and KAFKA_S3_SECRET_KEY with actual values (an optional credential check is sketched after this list).

  • controllers: List of Controller nodes; replace with actual values.

  • brokers: List of Broker nodes; replace with actual values.
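
Before running the pre-check in Step 3, you can optionally confirm that the Access Key can reach the configured buckets. This is a minimal sketch assuming the AWS CLI and the example bucket names; the --dry-run in Step 3 performs an equivalent check.

# Optional sanity check: verify the credentials can access the example buckets.
export AWS_ACCESS_KEY_ID=xxxxx
export AWS_SECRET_ACCESS_KEY=xxxxx
aws s3api head-bucket --bucket automq-data --region us-east-1
aws s3api head-bucket --bucket automq-ops --region us-east-1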

Step 3: Launch AutoMQ

Execute the cluster pre-check command below to generate the final startup command.


bin/automq-cli.sh cluster deploy --dry-run clusters/poc

This command first validates the S3 configuration and verifies that S3 can be accessed, then prints the startup command for each node. Example output:


Host: 192.168.0.1
KAFKA_S3_ACCESS_KEY=xxxx KAFKA_S3_SECRET_KEY=xxxx ./bin/kafka-server-start.sh -daemon config/kraft/server.properties --override cluster.id=JN1cUcdPSeGVnzGyNwF1Rg --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092 --override s3.data.buckets='0@s3://xxx_bucket?region=us-east-1' --override s3.ops.buckets='1@s3://xxx_bucket?region=us-east-1' --override s3.wal.path='0@s3://xxx_bucket?region=us-east-1' --override log.dirs='/root/kraft-logs'

...

To start the cluster, execute the generated command for each node on the corresponding CONTROLLER or BROKER host. For example, to start the first Controller process on 192.168.0.1, run the command generated for that host:


KAFKA_S3_ACCESS_KEY=xxxx KAFKA_S3_SECRET_KEY=xxxx ./bin/kafka-server-start.sh -daemon config/kraft/server.properties --override cluster.id=JN1cUcdPSeGVnzGyNwF1Rg --override node.id=0 --override controller.quorum.voters=0@192.168.0.1:9093,1@192.168.0.2:9093,2@192.168.0.3:9093 --override advertised.listeners=PLAINTEXT://192.168.0.1:9092 --override s3.data.buckets='0@s3://xxx_bucket?region=us-east-1' --override s3.ops.buckets='1@s3://xxx_bucket?region=us-east-1' --override s3.wal.path='0@s3://xxx_bucket?region=us-east-1' --override log.dirs='/root/kraft-logs'
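
After launching each node, you can optionally confirm that the process is running. The commands below are a minimal sketch and assume the default log location used by the Kafka startup scripts in the installation package.

# Optional: confirm the Kafka process is up and inspect its server log.
jps -l | grep kafka
tail -n 50 logs/server.log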

Testing Message Sending and Receiving

After installing and starting the AutoMQ cluster, you can test functions such as message sending and consumption by executing the Kafka CLI commands found in the bin directory of the installation package.

  1. Run the following command to use kafka-topics.sh and create a Topic.

bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server 192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092

  2. Run the following command to use kafka-console-producer.sh to send test messages.

bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server 192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092

  3. Run the following command to invoke kafka-console-consumer.sh and consume test messages.

bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server 192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092
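
Optionally, you can inspect the Topic and its partition placement with the standard describe option of kafka-topics.sh:

bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server 192.168.0.1:9092,192.168.0.2:9092,192.168.0.3:9092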

Experience AutoMQ Cluster Features

Beyond basic message production and consumption, AutoMQ supports second-level partition reassignment, automatic data rebalancing, and other features that address Apache Kafka's limitations in quickly reassigning partitions and scaling in or out. You can refer to the following documentation for testing and validation:
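
As an illustration, partition reassignment can be exercised with the standard kafka-reassign-partitions.sh tool from the bin directory. The sketch below moves partition 0 of quickstart-events to Broker 1000; the target broker and the file name move.json are assumptions for this example.

# Describe the desired placement, trigger the reassignment, then verify it.
cat > move.json <<'EOF'
{
  "version": 1,
  "partitions": [
    { "topic": "quickstart-events", "partition": 0, "replicas": [1000] }
  ]
}
EOF
bin/kafka-reassign-partitions.sh --bootstrap-server 192.168.0.1:9092 --reassignment-json-file move.json --execute
bin/kafka-reassign-partitions.sh --bootstrap-server 192.168.0.1:9092 --reassignment-json-file move.json --verify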

Stop and Uninstall AutoMQ Cluster

After completing the tests, you can follow the steps below to stop and uninstall the AutoMQ Cluster.

  1. Execute the following command on each node to stop the process.

bin/kafka-server-stop.sh

  2. Configure lifecycle rules for object storage to automatically clear the data in the automq-data and automq-ops buckets, then delete these buckets (see the sketch after this list).

  3. Delete the created compute instances along with their associated system volumes and data volumes.

  4. Delete the IAM user created for testing along with its associated Access Key ID and Access Key Secret.
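
If the example resources were created on AWS, they can be removed with the AWS CLI as sketched below. The user name automq-test, the policy name automq-s3-access, and the Access Key ID are placeholders; emptying the buckets this way deletes all objects immediately rather than through lifecycle rules.

# Empty and delete the example buckets (destructive: removes all objects immediately).
aws s3 rm s3://automq-data --recursive
aws s3 rb s3://automq-data
aws s3 rm s3://automq-ops --recursive
aws s3 rb s3://automq-ops

# Remove the test IAM user's inline policy, access key, and finally the user itself.
aws iam delete-user-policy --user-name automq-test --policy-name automq-s3-access
aws iam delete-access-key --user-name automq-test --access-key-id AKIAXXXXXXXXXXXXXXXX
aws iam delete-user --user-name automq-test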