Deploy AutoMQ Enterprise Via Helm Chart

This document provides instructions for deploying the AutoMQ Enterprise Edition software using Helm Charts within a Kubernetes environment in a private enterprise data center. If you wish to use AutoMQ in a public cloud environment, it is recommended to choose the fully managed AutoMQ Cloud service Overview▸.

Prerequisites

Before installing AutoMQ using Helm Charts, ensure the following conditions are met:

  1. Prepare Kubernetes Environment: Set up an available Kubernetes cluster beforehand, meeting the following requirements:

    1. Allocate AutoMQ Computing Resources: It is recommended to allocate resources of 4 cores and 16GB RAM for each AutoMQ Pod and to deploy on dedicated Nodes to ensure stable network throughput performance.

    2. Storage Plugin: If your Kubernetes is hosted by a cloud provider, it is advisable to install the storage plugin provided by the cloud vendor to manage EBS volume resources effectively.

    3. Network Plugin (Optional): If you plan to access AutoMQ externally from the Kubernetes cluster, you must install a Kubernetes network plugin to ensure that Pod IPs are externally accessible.

  2. Prepare Object Storage Bucket: Each AutoMQ cluster requires two separate object storage buckets: an Ops Bucket for storing system logs and metrics data, and a Data Bucket for storing message data. Please refer to your object storage product documentation to create these buckets (a CLI sketch follows this list).

  3. Install Helm Chart Tool: It is recommended to install version 3.6 or higher. You can refer to the documentation for guidance on installation.
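
For example, on AWS the two buckets can be created with the AWS CLI. This is a minimal sketch assuming the hypothetical bucket names automq-data and automq-ops in the us-east-1 region:

aws s3api create-bucket --bucket automq-data --region us-east-1
aws s3api create-bucket --bucket automq-ops --region us-east-1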

Obtain the Enterprise Edition Chart

Before installing AutoMQ Enterprise Edition, we recommend contacting AutoMQ technical staff here to obtain the Helm Chart installation package.

Installing AutoMQ

AutoMQ Enterprise Edition provides two types of WAL (Write-Ahead Logging) storage: EBSWAL and S3WAL. A comparison between these storage engines is provided below, and it is recommended to choose based on your specific requirements. For detailed information on principles, please refer to the technical architecture.

  • EBSWAL Mode: This WAL storage utilizes high-speed EBS volumes, delivering send latency (RT) below 10 milliseconds. It currently supports public cloud environments such as AWS, GCP, and Azure. When using EBSWAL, you need to allocate EBS volumes to AutoMQ's Pods through a StorageClass.

  • S3WAL Mode: This WAL storage writes directly to object storage, offering send latency (RT) on the order of hundreds of milliseconds. It supports all public cloud environments as well as private data centers that provide S3-compatible object storage, and it simplifies deployment because no EBS volumes need to be allocated.

The example configuration file presented in the next section defaults to the S3WAL mode. If you wish to configure EBSWAL, please adjust the configuration parameters accordingly.

Step 1: Create Credentials and Assign Permissions

AutoMQ clusters require access to external services, such as object storage and storage volumes; thus, you must create credentials and assign permissions for AutoMQ prior to installation.

When deploying AutoMQ in the AWS public cloud environment and utilizing AWS S3 storage, you should establish an authorization policy within the IAM service. AutoMQ's access to AWS S3 needs to be granted the following operation permissions:


- actions:
  - s3:GetLifecycleConfiguration
  - s3:PutLifecycleConfiguration
  - s3:ListBucket
  - s3:PutObject
  - s3:GetObject
  - s3:AbortMultipartUpload
  - s3:PutObjectTagging
  - s3:DeleteObject

If deploying with EBSWAL mode, you need to authorize the following policy as well:


- actions:
  - ec2:DescribeVolumes
  - ec2:DetachVolume
  - ec2:DescribeAvailabilityZones
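
For reference, the actions above can be expressed as a standard AWS IAM policy document. This is a minimal sketch assuming the hypothetical bucket names automq-data and automq-ops; the second statement is only needed for EBSWAL mode:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetLifecycleConfiguration",
        "s3:PutLifecycleConfiguration",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:AbortMultipartUpload",
        "s3:PutObjectTagging",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::automq-data",
        "arn:aws:s3:::automq-data/*",
        "arn:aws:s3:::automq-ops",
        "arn:aws:s3:::automq-ops/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DetachVolume",
        "ec2:DescribeAvailabilityZones"
      ],
      "Resource": "*"
    }
  ]
}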

After creating the IAM authorization policy, credentials can be set up in either of two ways:

  • Using an IAM sub-account's static AccessKey: Attach the authorization policy to the IAM sub-account and use the sub-account's static AccessKeyId and AccessKeySecret as the credentials AutoMQ uses to access cloud resources.

  • Using IAM Role dynamic credentials: Create an IAM Role with the authorization policy attached. AutoMQ then obtains dynamic credentials by assuming the Role through the EC2 instance profile on EKS.

To simplify the example, the following configuration file example uses a static AccessKey as credentials.

Step 2: Create Storage Class

Before installing AutoMQ, it is essential to declare a Storage Class in the Kubernetes cluster for the subsequent allocation of storage volumes. These storage volumes have the following purposes:

  • Storing AutoMQ Controller Metadata: In the AutoMQ cluster, the Controller Pod, which is used for metadata management, needs to mount storage volumes to store KRaft metadata.

  • EBSWAL Mode Storage for WAL Data (Optional): If you plan to deploy in EBSWAL mode, each Broker Pod will need to mount a data volume specifically for writing WAL data.

Declare the appropriate Storage Class according to your cloud provider or private data center's Kubernetes storage plugin, and then record the Storage Class name for later parameter configuration.


apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: automq-disk-eks-gp3
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3 # EBS volume type
allowVolumeExpansion: true
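
Save the manifest (for example as automq-storage-class.yaml, an illustrative file name) and apply it:

kubectl apply -f automq-storage-class.yaml
kubectl get storageclass automq-disk-eks-gp3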

Step 3: Initialize Configuration File

The configuration for the AutoMQ Enterprise Edition Chart is composed of multiple sections, allowing users to customize overrides through the values.yaml file. First, create an empty file named automq-values.yaml and fill it with the necessary configuration items.

Note:

The provided configuration options are based on the S3WAL mode, with parameter values initially set as placeholders. Before installation, these placeholders need to be replaced with actual values for your environment. You may also modify the parameters to switch to EBSWAL mode by referring to the parameter documentation.


global:
  cloudProvider:
    name: "Replace With Your Cloud Provider Name"
    credentials: "Replace With Your Credentials"
  config: |
    s3.ops.buckets=Replace With Your Ops Bucket URL
    s3.data.buckets=Replace With Your Data Bucket URL
    s3.wal.path=Replace With Your WAL Path

controller:
  resources:
    requests:
      cpu: "3000m"
      memory: "12Gi"
    limits:
      cpu: "4000m"
      memory: "16Gi"
  persistence:
    metadata:
      storageClass: "Replace With Your StorageClass"
    wal:
      enabled: false

  annotations:

  env:
    - name: "KAFKA_HEAP_OPTS"
      value: "-Xmx6g -Xms6g -XX:MaxDirectMemorySize=6g -XX:MetaspaceSize=96m"
    - name: "KAFKA_S3_ACCESS_KEY"
      value: "Replace With Your ACCESS_KEY"
    - name: "KAFKA_S3_SECRET_KEY"
      value: "Replace With Your SECRET_KEY"

broker:
  replicas: 0
  resources:
    requests:
      cpu: "3000m"
      memory: "12Gi"
    limits:
      cpu: "4000m"
      memory: "16Gi"

  persistence:
    wal:
      enabled: false

  annotations:

  env:
    - name: "KAFKA_HEAP_OPTS"
      value: "-Xmx6g -Xms6g -XX:MaxDirectMemorySize=6g -XX:MetaspaceSize=96m"
    - name: "KAFKA_S3_ACCESS_KEY"
      value: "Replace With Your ACCESS_KEY"
    - name: "KAFKA_S3_SECRET_KEY"
      value: "Replace With Your SECRET_KEY"

In the configuration file generated in the previous step, some parameters need to be adjusted based on actual conditions.

Modify Common Parameters

global.cloudProvider.name

This parameter declares the cloud environment in which AutoMQ is deployed. Set it to the enumerated value corresponding to your cloud provider; for a private data center, use the noop value from the list below.

  • AWS: aws

  • Google Cloud: gcp

  • Azure: azure

  • Alibaba Cloud: aliyun

  • IDC (private data center): noop

global.cloudProvider.credentials

This parameter specifies the cluster-wide credentials that AutoMQ uses to access cloud resources. The current example uses static AccessKey credentials. If you want to use the IAM Role method instead, refer to the advanced parameter documentation for the required changes.


global:
  cloudProvider:
    credentials: static://?accessKey=<your-accesskey>&secretKey=<your-secretkey>


KAFKA_S3_ACCESS_KEY and KAFKA_S3_SECRET_KEY Environment Variables

The sample configuration file uses static credentials of the AccessKey type. Thus, besides setting the global.cloudProvider.credentials parameter, you must update the environment variables for the Controller and Broker with the correct credentials.

Replace the credentials in the example with those generated in Step 1:


controller:
  env:
    - name: "KAFKA_S3_ACCESS_KEY"
      value: "Replace With Your ACCESS_KEY"
    - name: "KAFKA_S3_SECRET_KEY"
      value: "Replace With Your SECRET_KEY"

broker:
  env:
    - name: "KAFKA_S3_ACCESS_KEY"
      value: "Replace With Your ACCESS_KEY"
    - name: "KAFKA_S3_SECRET_KEY"
      value: "Replace With Your SECRET_KEY"

global.config

This parameter specifies the S3URL configuration for accessing object storage, consisting of three components: s3.ops.buckets, s3.data.buckets, and s3.wal.path.

Below is an example configuration using S3WAL mode with static credentials. To utilize EBSWAL mode, please see the advanced configuration section for modification guidelines.

Fill in the URLs of the Ops Bucket and Data Bucket created during the prerequisites:


config: |
  s3.data.buckets=0@s3://<your-data-bucket>?authType=static
  s3.ops.buckets=1@s3://<your-ops-bucket>?authType=static
  s3.wal.path=0@s3://<your-data-bucket>?authType=static
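
For instance, with the hypothetical bucket names automq-data and automq-ops on AWS in us-east-1, the block might look as follows (the region query parameter is taken from the advanced WAL example later in this document and can be appended the same way here):

config: |
  s3.data.buckets=0@s3://automq-data?region=us-east-1&authType=static
  s3.ops.buckets=1@s3://automq-ops?region=us-east-1&authType=static
  s3.wal.path=0@s3://automq-data?region=us-east-1&authType=static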

Modify Parameters like the Storage Class as Needed

controller.persistence.metadata.storageClass

Replace this parameter with the name of the Storage Class created in Step 2 to configure the storage metadata for the AutoMQ Controller Pod.

Modify Cluster Topology and Resource Request Parameters

Adjust the cluster topology and resource request parameters based on the Node resources allocated to AutoMQ. The parameters to be modified are as follows:

broker.replicas

By default, the AutoMQ Enterprise Edition Chart starts three Controller Pods, which also provide data read and write capabilities. To scale out with additional Brokers, set the broker.replicas parameter; see the sketch after this list.

  • Default value: 0, indicating a three-node cluster without additional Brokers.

  • Set range: >= 0, configure as needed.
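
For example, to add two dedicated Brokers alongside the three Controllers (a minimal sketch):

broker:
  replicas: 2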

Resource Request Parameters

The Request and Limit parameters for the AutoMQ Enterprise Edition Controller and Broker Pods need to be adjusted together with the corresponding JVM heap settings. The defaults in the provided configuration file are based on a 4-core, 16 GB specification. Modify the following parameters according to the computing resources actually allocated; a sketch follows this list.

  • controller.resources.requests.cpu

  • controller.resources.requests.memory

  • controller.resources.limits.cpu

  • controller.resources.limits.memory

  • controller.env.[KAFKA_HEAP_OPTS]

  • broker.resources.requests.cpu

  • broker.resources.requests.memory

  • broker.resources.limits.cpu

  • broker.resources.limits.memory

  • broker.env.[KAFKA_HEAP_OPTS]
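
For example, on 8-core, 32 GB nodes, the Controller section might be overridden as follows (an illustrative sketch; the heap and direct-memory sizes mirror the ratio of the 4C16G defaults and should be validated against your workload):

controller:
  resources:
    requests:
      cpu: "7000m"
      memory: "28Gi"
    limits:
      cpu: "8000m"
      memory: "32Gi"
  env:
    - name: "KAFKA_HEAP_OPTS"
      value: "-Xmx12g -Xms12g -XX:MaxDirectMemorySize=12g -XX:MetaspaceSize=96m"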

Step 4: Install the Chart and Access the Cluster

After adjusting the values.yaml configuration file according to actual deployment requirements, you can proceed to install AutoMQ.


helm upgrade --install <release-name> automq-enterprise-5.0.0.tgz -f <your-custom-values.yaml> --namespace <namespace> --create-namespace
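
For example, with a release named automq and the automq-values.yaml file prepared in Step 3 (both names are illustrative):

helm upgrade --install automq automq-enterprise-5.0.0.tgz \
  -f automq-values.yaml --namespace automq --create-namespace

# Verify that the Controller and Broker Pods reach the Running state
kubectl get pods -n automq -o wide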

After installation, users can access AutoMQ within the Kubernetes cluster using the Bootstrap address format provided by the AutoMQ cluster.


controller1_PodIP:9092,controller2_PodIP:9092,controller3_PodIP:9092,broker0_PodIP:9092...
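
Connectivity can be verified from a machine inside the cluster network using the standard Kafka CLI tools (the Pod IP below is a placeholder):

bin/kafka-topics.sh --bootstrap-server <controller1_PodIP>:9092 --list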

Note: To access AutoMQ from outside the Kubernetes cluster, it is recommended to use a Kubernetes network plugin to enable Pod IP communication with external networks.

Additional Advanced Configurations

The deployment documentation above demonstrates a simple example of deploying AutoMQ using the S3WAL mode. In actual production environments, users can opt for advanced configurations such as EBSWAL or adding Auto-Scaler support. For a complete configuration file reference, see Helm Chart Values Readme▸.

Set the WAL Type

In the installation steps above, S3WAL is used as an example, and AutoMQ supports both EBSWAL and S3WAL deployment methods.

In S3WAL mode, there's no need to mount WAL data volumes, which simplifies configuration. First, set the global.config.s3.wal.path parameter.


config: |
  s3.wal.path=0@s3://<your-data-bucket>?region=<region>&endpoint=<endpoint>

Then, disable controller.persistence.wal.enabled and broker.persistence.wal.enabled.


# Apply the StorageClass in the Controller and disable WAL volumes for both roles
controller:
  persistence:
    metadata:
      storageClass: "automq-disk-azure-premium-ssd"
    wal:
      enabled: false
broker:
  persistence:
    wal:
      enabled: false
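
Conversely, to switch to EBSWAL mode, enable the WAL volumes and attach the StorageClass created in Step 2. The sketch below is an assumption modeled on the metadata volume configuration; consult the Helm Chart Values Readme▸ for the authoritative field names:

controller:
  persistence:
    wal:
      enabled: true
      storageClass: "automq-disk-eks-gp3" # assumed field; verify against the Values Readme
broker:
  persistence:
    wal:
      enabled: true
      storageClass: "automq-disk-eks-gp3" # assumed field; verify against the Values Readme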


Set Credentials

AutoMQ supports both static AccessKey and dynamic IAM Role for accessing external resources. In production environments, using dynamic IAM Role credentials is recommended to avoid potential leaks of static AccessKey configurations.

To use IAM Role credentials, attach the authorization policy to the Role as described in Step 1. Additionally, modify the credentials configuration according to the example provided below.


global:
  cloudProvider:
    credentials: instance://?role=<your-instance-profile>
  config: |
    s3.data.buckets=0@s3://<your-bucket>?authType=instance&role=<role-id>
    s3.ops.buckets=1@s3://<your-bucket>?authType=instance&role=<role-id>

The format for filling in credentials is as follows:

  • AWS: instance://?role=<your-instance-profile> (use the IAM Instance Profile name, not the Role ARN)

  • Google Cloud: instance://?role=<your-service-account-name> (use the GCP ServiceAccount name)

  • Azure: instance://?role=<your-managed-identity-client-id> (use the Azure Managed Identity Client ID)

  • Alibaba Cloud: instance://?role=<your-role-id> (use the Alibaba Cloud RAM Role name)

  • Tencent Cloud: instance://?role=<your-role-id> (use the Tencent Cloud CAM Role name)

  • Huawei Cloud: instance://?role=<your-delegate-id> (use the Huawei Cloud delegate ID)

Setting up Fine-grained Scheduling Policies

In Kubernetes, AutoMQ's fine-grained scheduling policies are implemented with node affinities and tolerations. It is recommended that users customize the label matching rules according to their node types:

Tolerations

You should add a taint to the dedicated AutoMQ node group with key "dedicated", value "automq", and effect "NoSchedule" (the "Equal" operator belongs to the matching toleration rather than the taint itself).
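
For example, the taint can be applied with kubectl (the node name is a placeholder):

kubectl taint nodes <node-name> dedicated=automq:NoSchedule

Then configure the matching toleration rules in global.tolerations so that AutoMQ Pods can be scheduled onto the tainted nodes: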


global:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "automq"
      effect: "NoSchedule"

Node Affinities

Modify the default values in the controller and broker configurations to align with your node labels (e.g., node-type: automq-worker):


controller:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: "node-type"
              operator: In
              values: ["automq-worker"]
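
The matching label can be applied to the target nodes with kubectl (the node name is a placeholder):

kubectl label nodes <node-name> node-type=automq-worker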

Setting Elastic Scaling

Number of Controllers

By default, three Controller Pods are deployed in the cluster, and users can customize the number of Controller replicas.

Note: After the cluster is deployed, adjusting the number of Controller replicas is not currently supported to avoid unexpected risks.

Number of Brokers

The number of brokers is controlled by the broker.replicas parameter and can be scaled horizontally. By default, it is set to 0.

Auto-scaling Configuration

By default, HPA (Horizontal Pod Autoscaler) is disabled. To enable it, two conditions must be met:

  • broker.replicas > 0

  • Enable and configure parameters in global.autoscaling.hpa:


global:
  autoscaling:
    hpa:
      enabled: true # Enable HPA
      minReplicas: "1" # Minimum replicas
      maxReplicas: "3" # Maximum replicas
      targetCPU: "60" # Target CPU utilization (%)
      targetMemory: "" # Target memory utilization (%), optional
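
Once enabled, the autoscaler's status can be inspected with kubectl:

kubectl get hpa -n <namespace>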

Identity Configuration

AutoMQ supports protocol listener overrides and offers secure authentication, with the following default ports exposed:

  • Client-to-server access: 9092 (PLAINTEXT).

  • Internal communication between Controllers: 9093 (PLAINTEXT).

  • Internal communication between Brokers: 9094 (PLAINTEXT).

Secure authentication can be enabled by overriding the listener ports and protocols, for example to turn on SASL; acceptable protocol values are 'PLAINTEXT', 'SASL_PLAINTEXT', 'SASL_SSL', and 'SSL'.


listeners:
  client:
    - containerPort: 9102
      protocol: SASL_PLAINTEXT
      name: BROKER_SASL
      sslClientAuth: "" # Optional: configure the SSL client authentication policy
  controller:
    - containerPort: 9103
      protocol: SASL_PLAINTEXT
      name: CONTROLLER_SASL
      sslClientAuth: ""
  interbroker:
    - containerPort: 9104
      protocol: SASL_PLAINTEXT
      name: BROKER_SASL
      sslClientAuth: ""
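
With a SASL_PLAINTEXT listener in place, Kafka clients connect using standard client properties. This is a minimal sketch assuming the PLAIN mechanism and hypothetical credentials; the actual SASL users must be provisioned on the cluster side:

# client.properties (illustrative)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<your-username>" \
  password="<your-password>";

The file can then be passed to the Kafka CLI tools, e.g. bin/kafka-topics.sh --bootstrap-server <PodIP>:9102 --command-config client.properties --list.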