Introduction

AutoMQ is a cloud-native streaming platform that is fully compatible with the Apache Kafka protocol. This topic describes how, when deploying a multi-node AutoMQ cluster on Kubernetes, you can use Kubernetes application lifecycle management tools such as a custom Helm Chart or Operator to flexibly adapt the relevant dependent resources and configurations, and ultimately complete the deployment in a complex environment.

Analysis of Differences between AutoMQ & Apache Kafka Docker Images

Init & Startup

Startup Mode
  • Apache Kafka Open Source Image: encapsulates the default startup logic; works out of the box
  • AutoMQ Open Source Image: compatible with the startup logic of the Apache Kafka Docker image; works out of the box

Configuration Method
  • Apache Kafka Open Source Image: provides configuration conversion logic for environment variables with specific rules; fixed mount path: /opt/kafka/config/server.properties
  • AutoMQ Open Source Image: environment-variable configuration conversion logic compatible with the Apache Kafka Docker image; server.properties can also be injected flexibly via other methods such as a Helm Chart/Operator; the configuration file mount path is consistent with the Apache Kafka Docker image

Simple Example
  • Apache Kafka Open Source Image: Apache Kafka Docker Hub
  • AutoMQ Open Source Image: AutoMQ for Apache Kafka

AutoMQ-specific configuration

AutoMQ employs a shared storage architecture and requires additional startup configuration parameters. All parameters described later in this chapter fall under these additional requirements and are mandatory for proper operation.

S3 primary storage

Required shared storage parameters:

# Object Storage settings. You will need to replace the placeholder values (marked with `${...}`), such as the S3 bucket names (`ops-bucket`, `data-bucket`), bucket region, and endpoint.
s3.data.buckets=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
s3.wal.path=0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}
s3.ops.buckets=1@s3://${ops-bucket}?region=${region}&endpoint=${endpoint}
elasticstream.enable=true
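For illustration, a hypothetically filled-in version of these parameters for an AWS us-east-1 deployment (the bucket names my-automq-data / my-automq-ops and the endpoint are made-up examples, not defaults):

```properties
# Hypothetical example values; substitute your own buckets, region, and endpoint
s3.data.buckets=0@s3://my-automq-data?region=us-east-1&endpoint=https://s3.us-east-1.amazonaws.com
s3.wal.path=0@s3://my-automq-data?region=us-east-1&endpoint=https://s3.us-east-1.amazonaws.com
s3.ops.buckets=1@s3://my-automq-ops?region=us-east-1&endpoint=https://s3.us-east-1.amazonaws.com
elasticstream.enable=true
```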

Kraft

node.id
Many Kubernetes application lifecycle management tools can already dynamically inject node.id into server.properties based on the Kubernetes StatefulSet ordinal; however, the following generation rules must still be applied:
  • Controller node.id must increment starting from 0 (min.id=0)
  • Broker node.id must increment starting from 1000 (min.id=1000)

# controller node.id start with 0
node.id=0

# broker node.id start with 1000
node.id=1000
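These rules are typically implemented in an init container or entrypoint script. A minimal sketch, assuming StatefulSet pod names like automq-broker-2 and a per-role MIN_ID offset (POD_NAME and MIN_ID are illustrative names, not part of AutoMQ):

```shell
# Derive node.id from the StatefulSet pod ordinal.
# POD_NAME and MIN_ID are assumptions: MIN_ID=0 for controllers, 1000 for brokers.
POD_NAME="${POD_NAME:-automq-broker-2}"
MIN_ID="${MIN_ID:-1000}"

ORDINAL="${POD_NAME##*-}"       # "automq-broker-2" -> "2"
NODE_ID=$((MIN_ID + ORDINAL))   # broker ordinal 2 -> node.id 1002

echo "node.id=${NODE_ID}"       # append this line to server.properties
```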

role
We recommend that Controller nodes serve as both controller and broker simultaneously:

process.roles=broker,controller

Listener

AutoMQ is fully compatible with the Kafka Listener system and supports PLAINTEXT, SASL, TLS, and mTLS protocols; the default is PLAINTEXT. Note that if the inter-broker protocol is not PLAINTEXT, you must also configure the corresponding inter-broker authentication protocol for AutoMQ's AutoBalancer component (taking SASL_PLAINTEXT as an example):

# Using SASL_PLAINTEXT: replace ${inter_broker_user} and ${interbroker-password-placeholder} with your actual credentials
autobalancer.client.auth.sasl.mechanism=PLAIN
autobalancer.client.auth.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="${inter_broker_user}" password="${interbroker-password-placeholder}" user_inter_broker_user="${interbroker-password-placeholder}";
autobalancer.client.auth.security.protocol=SASL_PLAINTEXT
autobalancer.client.listener.name=INTERNAL
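The autobalancer.client.listener.name=INTERNAL setting above presumes a listener named INTERNAL exists. A sketch of a matching listener layout for a combined controller/broker node (the listener names, ports, and protocol mapping are assumptions for illustration, not AutoMQ defaults):

```properties
# Assumed listener layout; adjust names, ports, and protocols to your environment
listeners=INTERNAL://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT
inter.broker.listener.name=INTERNAL
controller.listener.names=CONTROLLER
sasl.enabled.mechanisms=PLAIN
```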

AutoMQ-specific Env

Configure and use the AK/SK with read and write permissions to access your Bucket:

- name: "AWS_ACCESS_KEY_ID"
  value: "yourAccessKey"
- name: "AWS_SECRET_ACCESS_KEY"
  value: "yourSecretKey"

The actual AK/SK should be stored in Kubernetes Secrets and referenced from there to keep the credentials secure.
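For example, a sketch that references a Secret instead of inlining the values (the Secret name automq-s3-credentials and its keys are assumptions for illustration):

```yaml
# Reference credentials from a Kubernetes Secret instead of hard-coding them
- name: "AWS_ACCESS_KEY_ID"
  valueFrom:
    secretKeyRef:
      name: automq-s3-credentials   # hypothetical Secret name
      key: access-key-id
- name: "AWS_SECRET_ACCESS_KEY"
  valueFrom:
    secretKeyRef:
      name: automq-s3-credentials
      key: secret-access-key
```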

No configuration required

AutoMQ builds its core architecture on object storage, which differs fundamentally from traditional Kafka: object storage itself provides high durability and redundancy. As a result, the following replica-related configurations, which traditional Kafka uses to ensure data reliability, are no longer needed or have no effect in AutoMQ:

offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
default.replication.factor=1
min.insync.replicas=1

Resource request

Due to architectural differences, AutoMQ relies more heavily on object storage than Kafka and no longer requires local data log disk storage. Therefore, only the metadata PV needs to be retained for the Controller; all other PVCs can be removed.

Observability

AutoMQ provides a complete set of methods for building an observable system. For details, see: AutoMQ Observability

Deployment Recommendations

When using your custom Helm Chart or Operator, it is best to first try AutoMQ Labs for a deployment walkthrough; it demonstrates the unified configuration pass-through method of setting Env and reusing the Apache Kafka Docker Image conventions. This will help you adapt your own Kubernetes application lifecycle management tool to deploy AutoMQ.

Image

Specify the AutoMQ Docker Image for your main container. Reference:

automqinc/automq:1.6.0-rc0-kafka

Config & Env

We recommend uniformly using Env to set the AutoMQ-specific configuration described above (the same way configuration is passed into the Apache Kafka Docker Image), using the KAFKA_ prefix for the key-value pairs that should ultimately be passed through to server.properties.

- name: "KAFKA_PROCESS_ROLES"
  value: "controller,broker"
- name: "KAFKA_S3_DATA_BUCKETS"
  value: "0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}"
- name: "KAFKA_S3_WAL_PATH"
  value: "0@s3://${data-bucket}?region=${region}&endpoint=${endpoint}"
- name: "KAFKA_S3_OPS_BUCKETS"
  value: "1@s3://${ops-bucket}?region=${region}&endpoint=${endpoint}"
- name: "KAFKA_ELASTICSTREAM_ENABLE"
  value: "true"
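The KAFKA_-prefixed names map to server.properties keys by the Apache Kafka Docker image convention: strip the KAFKA_ prefix, lowercase, and replace underscores with dots. A minimal sketch of that basic rule (edge cases such as double underscores, which the image maps differently, are ignored here):

```shell
# Convert a KAFKA_-prefixed env var name into a server.properties key:
# strip the KAFKA_ prefix, lowercase, replace '_' with '.'.
to_property_key() {
  echo "$1" | sed -e 's/^KAFKA_//' | tr 'A-Z' 'a-z' | tr '_' '.'
}

to_property_key "KAFKA_S3_DATA_BUCKETS"   # -> s3.data.buckets
to_property_key "KAFKA_PROCESS_ROLES"     # -> process.roles
```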

along with the credential Env variables that are actually required:

- name: "AWS_ACCESS_KEY_ID"
  value: "yourAccessKey"
- name: "AWS_SECRET_ACCESS_KEY"
  value: "yourSecretKey"

Other production-level recommendations

Production Deployment

Server Resource Configuration

We recommend that each AutoMQ Pod be deployed exclusively and run on resources of 2 cores / 16 GB or higher, and that the requests and limits of the AutoMQ server (StatefulSet) in Kubernetes be adjusted accordingly.
Note: The specific resource configuration method depends on the capabilities and entry points provided by your Kubernetes application lifecycle management tool
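For illustration, assuming your tool exposes the standard Kubernetes resources fields, a sketch matching the 2-core / 16 GB floor:

```yaml
# Illustrative values matching the minimum recommendation; size up for production load
resources:
  requests:
    cpu: "2"
    memory: 16Gi
  limits:
    cpu: "2"
    memory: 16Gi
```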

Scheduling Policy

Through nodeAffinity, tolerations, and podAntiAffinity strategies, AutoMQ can implement fine-grained scheduling in Kubernetes. We recommend that production-level AutoMQ be deployed exclusively, without co-locating other workloads.
Note: The specific way to set up scheduling depends on the capabilities and entry points provided by your Kubernetes application lifecycle management tool.
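As one example of exclusive deployment, a podAntiAffinity sketch that schedules at most one AutoMQ Pod per node (the app: automq label is an assumption; use your chart's actual labels):

```yaml
# Sketch: at most one AutoMQ Pod per node via required podAntiAffinity
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: automq          # hypothetical label; match your chart
        topologyKey: kubernetes.io/hostname
```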

Replicas

Your Kubernetes application lifecycle management tool can modify configurations such as replicas to scale the Kafka cluster horizontally.
Note: However, we recommend that the cluster be deployed with 3 Controller Pods and several Broker Pods by default, and we do not recommend dynamically adjusting the Controller replicas, to avoid unexpected risks.

AutoScaling (HPA)

Use your Kubernetes application lifecycle management tool to set up and enable HPA for dynamic horizontal scaling.
Note: We also do not recommend configuring Controller HPA because the Controller in Kafka KRaft mode relies on the Raft protocol to maintain metadata consistency and does not support automated Raft member changes. Therefore, configuring Controller HPA may cause Quorum failure or cluster unavailability.
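If you do enable HPA for Brokers, a sketch of a CPU-based HorizontalPodAutoscaler targeting only the Broker StatefulSet (the name automq-broker and the thresholds are illustrative):

```yaml
# Sketch: CPU-based HPA for Brokers only; never target the Controller StatefulSet
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: automq-broker          # hypothetical name; match your deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: automq-broker
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```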

Performance optimization

AutoMQ also supports setting additional properties parameters and ENV for performance optimization. For details, see: Performance Tuning for Broker