Introduction
AutoMQ is a cloud-native streaming platform that is fully compatible with the Apache Kafka protocol. This topic describes how, when deploying a multi-node AutoMQ cluster on Kubernetes, you can use Kubernetes application lifecycle management tools such as a custom Helm Chart or Operator to flexibly adapt the relevant dependent resources and configurations, and ultimately complete the deployment in a complex environment.
Analysis of Differences between AutoMQ & Apache Kafka Docker Images
Init & Startup
| - | Apache Kafka Open Source Image | AutoMQ Open Source Image |
|---|---|---|
| Startup Mode | Encapsulates the default startup logic; out of the box | Compatible with the startup logic of the Apache Kafka Docker image; out of the box |
| Configuration Method | Environment variables prefixed with `KAFKA_`, passed through to server.properties | Same convention as the Apache Kafka Docker image, plus AutoMQ-specific Env |
| Simple Example | Apache Kafka Docker Hub | AutoMQ for Apache Kafka |
AutoMQ-specific configuration
AutoMQ employs a shared storage architecture and requires additional startup configuration parameters. Note that all parameters described later in this chapter fall under these additional requirements and are mandatory for proper operation.

S3 primary storage
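As an illustrative sketch only: the key names below are assumptions based on AutoMQ's `s3.*` configuration family, and the bucket URLs and region are placeholders, not values from this document.

```properties
# Illustrative sketch: key names assume AutoMQ's s3.* configuration family;
# bucket names and the region are placeholders.
elasticstream.enable=true
s3.data.buckets=0@s3://your-data-bucket?region=us-east-1
s3.ops.buckets=1@s3://your-ops-bucket?region=us-east-1
s3.wal.path=0@s3://your-data-bucket?region=us-east-1
```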
The required shared storage parameters must be supplied at startup.

Kraft
node.id
Among the many Kubernetes application lifecycle management tools, some already provide a feature that dynamically injects node.id into server.properties based on the Kubernetes StatefulSet ordinal; however, the following additional generation rules are still required:
- Controller node.id must increment starting from 0 (min.id = 0)
- Broker node.id must increment starting from 1000 (min.id = 1000)
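The rules above can be sketched as a small init script that derives node.id from the StatefulSet pod ordinal. The `POD_NAME` variable would normally be injected via the Downward API; the pod name `automq-broker-2` used here is a placeholder assumption.

```shell
#!/bin/sh
# Illustrative sketch: derive node.id from the StatefulSet pod ordinal.
# POD_NAME would normally come from the Kubernetes Downward API;
# "automq-broker-2" is only a placeholder example.
POD_NAME="${POD_NAME:-automq-broker-2}"
ORDINAL="${POD_NAME##*-}"       # trailing ordinal assigned by the StatefulSet
MIN_ID=1000                     # use 0 for controllers, 1000 for brokers
KAFKA_NODE_ID=$((ORDINAL + MIN_ID))
export KAFKA_NODE_ID
echo "node.id=$KAFKA_NODE_ID"
```

For a controller StatefulSet the same script applies with `MIN_ID=0`.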
role
We recommend that the Controller serve as both controller and broker simultaneously.

Listener
AutoMQ is fully compatible with the Kafka listener system; listeners can be customized to use the PLAINTEXT, SASL, TLS, or mTLS protocols, with PLAINTEXT as the default.
Note that if the inter-broker protocol is not PLAINTEXT, you must also set the corresponding inter-broker authentication protocol (taking SASL_PLAINTEXT as an example) for AutoMQ's AutoBalancer component in the configuration:
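A sketch of such a configuration might look like the following; the `autobalancer.*` key names are assumptions modeled on AutoMQ's AutoBalancer client settings, and the credentials are placeholders.

```properties
# Illustrative sketch: autobalancer.* key names are assumptions; credentials are placeholders.
autobalancer.client.auth.security.protocol=SASL_PLAINTEXT
autobalancer.client.auth.sasl.mechanism=PLAIN
autobalancer.client.auth.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";
```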
AutoMQ-specific Env
Configure and use an AK/SK (access key / secret key) with read and write permissions on your bucket:
The actual AK/SK can be stored in Kubernetes Secrets and referenced from there, to keep the credentials secure.
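A minimal sketch of referencing the Secret from the pod template; the environment variable names and the Secret name are assumptions, not values from this document.

```yaml
# Illustrative container env wiring; variable names and the Secret name are assumptions.
env:
  - name: KAFKA_S3_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: automq-s3-credentials
        key: access-key
  - name: KAFKA_S3_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: automq-s3-credentials
        key: secret-key
```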
No configuration required
AutoMQ builds its core architecture on object storage, which fundamentally differs from traditional Kafka: object storage itself provides high durability and redundancy. As a result, the replica-related parameters that traditional Kafka uses to ensure data reliability are no longer needed or have no effect in AutoMQ.

Resource request
Because of this architectural difference, AutoMQ relies on object storage instead of local Data Log disks. Therefore, only the Metadata PV needs to be retained for the Controller; all other PVCs can be removed.
Observability
AutoMQ provides a complete set of methods for building an observable system. For details, see AutoMQ Observability.

Deployment Recommendations
When using your custom Helm Chart or Operator, we suggest first trying AutoMQ Labs for a hands-on deployment experience; it demonstrates the unified configuration pass-through approach of setting Env while reusing the Apache Kafka Docker Image. This will help you adapt your own Kubernetes application lifecycle management tool to deploy AutoMQ.

Image
Specify the AutoMQ Docker Image for your main container.

Config & Env
We recommend uniformly using Env to set the AutoMQ-specific configuration described above (the same way configuration is passed to the Apache Kafka Docker Image), using KAFKA_ as the prefix for the key-value pairs that should ultimately be passed through to server.properties.
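A simplified sketch of this naming convention (edge cases such as literal underscores in property names are ignored here):

```python
def env_to_property(name: str) -> str:
    """Map a KAFKA_-prefixed env var to a server.properties key.

    Simplified sketch of the Apache Kafka Docker image convention:
    strip the KAFKA_ prefix, lowercase, and turn underscores into dots.
    """
    assert name.startswith("KAFKA_")
    return name[len("KAFKA_"):].lower().replace("_", ".")

print(env_to_property("KAFKA_NODE_ID"))                    # node.id
print(env_to_property("KAFKA_AUTO_CREATE_TOPICS_ENABLE"))  # auto.create.topics.enable
```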
Other production-level recommendations
Production Deployment
Server Resource Configuration
We recommend that each AutoMQ Pod be deployed exclusively and run on 2 cores / 16 GB of resources or more, and that you adjust the requests and limits of the AutoMQ server (StatefulSet) in Kubernetes accordingly.
Note: The specific resource configuration method depends on the capabilities and entry points provided by your Kubernetes application lifecycle management tool.
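As a sketch of the sizing above, the pod template's resource section might look like this (the values are examples, not mandates):

```yaml
# Illustrative resource settings for the AutoMQ StatefulSet pod template.
resources:
  requests:
    cpu: "2"
    memory: 16Gi
  limits:
    cpu: "2"
    memory: 16Gi
```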
Scheduling Policy
Through nodeAffinities, tolerations, and podAntiAffinities, AutoMQ can implement fine-grained scheduling strategies in Kubernetes. We recommend deploying production-level AutoMQ exclusively, without co-locating other workloads.
Note: The specific way to set up scheduling depends on the capabilities and entry points provided by your Kubernetes application lifecycle management tool.
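As one sketch of such a policy, a pod anti-affinity rule can spread AutoMQ pods across nodes; the label key and value here are assumptions about your chart's labeling scheme.

```yaml
# Illustrative pod anti-affinity: one AutoMQ pod per node (label is an assumption).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: automq
        topologyKey: kubernetes.io/hostname
```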
Replicas
Your Kubernetes application lifecycle management tool typically supports modifying settings such as replicas for the Kafka cluster in order to scale it horizontally.
Note: However, we recommend that the cluster be deployed with 3 Controller Pods and several Broker Pods by default, and do not recommend dynamically adjusting the replicas of the Controller to avoid unexpected risks.
AutoScaling (HPA)
Use your Kubernetes application lifecycle management tool to set up and enable HPA for dynamic scale-out and scale-in.
Note: We also do not recommend configuring Controller HPA because the Controller in Kafka KRaft mode relies on the Raft protocol to maintain metadata consistency and does not support automated Raft member changes. Therefore, configuring Controller HPA may cause Quorum failure or cluster unavailability.
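Following the caution above, an HPA should target only the Broker workload. A minimal sketch (the StatefulSet name and thresholds are assumptions):

```yaml
# Illustrative HPA for the Broker StatefulSet only; never target the Controller.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: automq-broker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: automq-broker
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```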