Prerequisites
Before installing AutoMQ with a Helm Chart, ensure the following prerequisites are satisfied:
- Prepare a Kubernetes Environment: Establish an available Kubernetes cluster in advance, ensuring it meets the conditions below:
- Allocate Resources for AutoMQ: It is recommended to allocate 4 cores and 16 GB of memory for each AutoMQ Pod. Deploying on dedicated Nodes is advisable for stable network throughput.
- Storage Plugin: If your Kubernetes cluster is provided by a cloud vendor, install the storage plugin (CSI driver) offered by that vendor to manage EBS volume resources effectively.
- Prepare Object Storage Buckets: Each AutoMQ cluster requires two separate object storage buckets: one Ops Bucket for system logs and metrics data, and one Data Bucket for message data. Please refer to the object storage product documentation for guidance on creating them.
- Install the Helm Chart Tool: Version 3.6 or higher is recommended. Refer to the Helm documentation for detailed installation instructions.
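As a quick sanity check of these prerequisites, assuming kubectl and helm are already on your PATH:

```shell
kubectl get nodes      # cluster is reachable and nodes are Ready
helm version --short   # should report v3.6 or higher
```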
Obtain the Enterprise Edition Chart
The AutoMQ Enterprise Edition Chart image is published publicly through an Azure Container Registry (East US). You can test the pull with the following command.
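The exact registry address is provided with your AutoMQ Enterprise subscription; the command below is a sketch with placeholders in its place:

```shell
helm pull oci://<registry>/<chart-name> --version <chart-version>
```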
Install AutoMQ

AutoMQ Enterprise Edition offers two WAL storage options: EBSWAL and S3WAL. A comparison of the two storage engines follows; it is recommended to choose based on your needs. For detailed principles, please refer to the Technical Architecture.
- EBSWAL Mode: WAL storage uses high-speed EBS volumes, providing <10 ms send RT. It is currently supported only in public cloud environments such as AWS, GCP, and Azure. When using this mode, you must assign EBS volumes to AutoMQ's Pods via a StorageClass.
- S3WAL Mode: WAL data is written directly to object storage, offering sub-100 ms send RT. It supports all public cloud environments as well as private data centers, as long as they provide S3-compatible object storage. Deployment is relatively straightforward, with no need to allocate EBS volumes.
The following sections provide a configuration file example with the default mode set to S3WAL. If you wish to configure EBSWAL, modify the relevant parameters as described under Other Advanced Configurations.
Step 1: Create Credentials and Perform Authorization
AutoMQ clusters require access to external services such as object storage and storage volumes. Therefore, before installation, you need to create credentials for AutoMQ and complete the authorization process.

If AutoMQ is deployed in the AWS public cloud using AWS S3 storage, create an authorization policy in IAM granting AutoMQ the S3 operations it needs; if you deploy in EBSWAL mode, an additional policy covering EBS volume operations is required (see the example policy sketch after the list below).

After creating the IAM authorization policies, credentials can be generated using two methods:
- Using an IAM Subaccount's Static AccessKey: Attach the authorization policy to the IAM subaccount, and use the subaccount's static AccessKeyId and AccessKeySecret as AutoMQ's credentials.
- Using IAM Role Dynamic Credentials: Create an IAM Role and attach the authorization policy to it. AutoMQ Pods then obtain dynamic credentials by assuming the Role, for example via the EC2 instance profile in EKS.
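The original policy listings are not reproduced here. The sketch below shows the typical shape of such an S3 policy, assuming the standard object operations AutoMQ would need; confirm the authoritative action list in the AutoMQ documentation. For EBSWAL mode, the additional policy would similarly allow EC2 volume operations (for example, ec2:CreateVolume, ec2:AttachVolume, ec2:DetachVolume, ec2:DeleteVolume, and ec2:DescribeVolumes).

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AutoMQObjectStorageAccess",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<your-ops-bucket>",
        "arn:aws:s3:::<your-ops-bucket>/*",
        "arn:aws:s3:::<your-data-bucket>",
        "arn:aws:s3:::<your-data-bucket>/*"
      ]
    }
  ]
}
```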
For clarity of illustration, the configuration file examples below use a static AccessKey as the credential.
Step 2: Create Storage Class
Before installing AutoMQ, you must declare a Storage Class in the Kubernetes cluster for allocating storage volumes. These storage volumes serve the following purposes:
- Storing AutoMQ Controller Metadata: In the AutoMQ cluster, the Controller Pods responsible for metadata management must mount storage volumes to store KRaft metadata.
- EBSWAL Mode Storage for WAL Data (Optional): If you plan to deploy using the EBSWAL mode, each Broker Pod will also require a mounted data volume for writing WAL data.
Please specify the Storage Class based on the Kubernetes storage plugin from your cloud provider or private data center, then keep a record of the Storage Class name for later parameter configuration.
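For example, on AWS with the EBS CSI driver installed, a suitable Storage Class might look like the following sketch; the provisioner and parameters depend on your cloud's storage plugin:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: automq-storage              # record this name for Step 3
provisioner: ebs.csi.aws.com        # AWS EBS CSI driver
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```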
Step 3: Initialize the Configuration File
The configuration for the AutoMQ Enterprise Edition Chart is composed of multiple parts and can be customized via the values.yaml file. First, create an empty file named automq-values.yaml and edit it to add the specific parameters. You can refer to demo-values.yaml in the chart directory for recommended configuration samples, and consult README.md for more details.
Note: The following configuration items are based on the S3WAL mode, with parameter values filled in as placeholders. Before installation, set these values according to your actual environment. You may also adjust them for EBSWAL mode as per the parameter documentation.
Modify Common Parameters
global.cloudProvider.name specifies the deployment cloud environment. Fill in the enumerated value matching your cloud provider; for a private data center, fill in the corresponding enumerated value as well.

| Deployment Environment | Parameter Enumerated Value |
|---|---|
| AWS | aws |
| Google Cloud | gcp |
| Azure | azure |
| Alibaba Cloud | aliyun |
Below is an example configuration using the S3WAL mode with static credentials. If you need to use EBSWAL mode, please refer to the advanced configuration section for modification instructions.
Fill in the Ops Bucket and Data Bucket created in the prerequisites according to your actual scenario.
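A minimal automq-values.yaml sketch for S3WAL mode with a static AccessKey follows. Apart from global.cloudProvider.name, the key names and value formats shown are assumptions; verify them against demo-values.yaml and README.md:

```yaml
global:
  cloudProvider:
    name: aws    # enumerated value from the table above
    # Assumed static-credential format; see "Setting Credentials" below
    # for the documented IAM Role (instance://) variant.
    credentials: static://?accessKey=<your-access-key>&secretKey=<your-secret-key>
  config:
    s3:
      ops-bucket: <your-ops-bucket>      # assumed key: Ops Bucket (logs, metrics)
      data-bucket: <your-data-bucket>    # assumed key: Data Bucket (message data)
```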
Adjust Parameters such as the Storage Class
controller.persistence.metadata.storageClass: Substitute this parameter with the name of the Storage Class created in Step 2; it designates where the AutoMQ Controller Pods store metadata.
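In automq-values.yaml this looks like the following, with a placeholder name:

```yaml
controller:
  persistence:
    metadata:
      storageClass: <your-storage-class>   # Storage Class name from Step 2
```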
Revise the Cluster Topology and Resource Request Parameters

Adjust the cluster topology and resource request parameters based on the resources allocated to the AutoMQ Nodes. The parameters that need modification are listed below (see the sketch after the list).

broker.replicas: The AutoMQ Enterprise Chart starts three Controller Pods by default, and these Controller Pods also provide data read and write capabilities. To horizontally scale out additional Brokers, set the broker.replicas parameter.
- Default value: 0, which represents a three-node cluster without additional Brokers.
- Setting range: >= 0, configured as needed.
- controller.resources.requests.cpu
- controller.resources.requests.memory
- controller.resources.limits.cpu
- controller.resources.limits.memory
- controller.env.[KAFKA_HEAP_OPTS]
- broker.resources.requests.cpu
- broker.resources.requests.memory
- broker.resources.limits.cpu
- broker.resources.limits.memory
- broker.env.[KAFKA_HEAP_OPTS]
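A sketch of these settings, sized to the 4-core / 16 GB allocation recommended in the prerequisites; the env map form follows the controller.env.[KAFKA_HEAP_OPTS] notation above, and the heap value is illustrative:

```yaml
broker:
  replicas: 0                          # 0 = three-Controller cluster, no extra Brokers
controller:
  resources:
    requests:
      cpu: "4"
      memory: 16Gi
    limits:
      cpu: "4"
      memory: 16Gi
  env:
    KAFKA_HEAP_OPTS: "-Xms6g -Xmx6g"   # illustrative heap, kept below the container limit
# broker.resources and broker.env follow the same structure when replicas > 0
```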
Step 4: Install Chart and Access the Cluster
After customizing the values.yaml configuration file to suit your deployment requirements, proceed with the installation of AutoMQ.

Note: We recommend deploying an internal LoadBalancer so that clients are not affected by changes in Pod IP addresses.
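A sketch of the install command; the chart reference is a placeholder for the OCI address from your subscription:

```shell
helm install automq oci://<registry>/<chart-name> \
  --version <chart-version> \
  -f automq-values.yaml \
  --namespace automq --create-namespace
```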
Step 5: Connect and Test the Cluster
Headless Service
- Locate the Headless Service exposed by the cluster.
- Connect and test using Kafka clients: point the client's --bootstrap-server option at the Headless Service address to send and receive messages. Here's the command you can use:
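A sketch using the standard Kafka console tools; the namespace and service name below are placeholders:

```shell
# Locate the Headless Service:
kubectl get svc -n automq

# Produce and consume through the headless address (client port 9092):
kafka-console-producer.sh \
  --bootstrap-server <headless-service>.automq.svc.cluster.local:9092 \
  --topic test-topic
kafka-console-consumer.sh \
  --bootstrap-server <headless-service>.automq.svc.cluster.local:9092 \
  --topic test-topic --from-beginning
```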
LoadBalancer
- Find the external address of the LoadBalancer Service.
- Connect and test using Kafka clients: port 9092 is used for client access.
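A sketch of the same test against the LoadBalancer address; the EXTERNAL-IP value is a placeholder:

```shell
# The EXTERNAL-IP column shows the LoadBalancer address:
kubectl get svc -n automq

kafka-console-producer.sh --bootstrap-server <external-ip>:9092 --topic test-topic
kafka-console-consumer.sh --bootstrap-server <external-ip>:9092 --topic test-topic --from-beginning
```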
Other Advanced Configurations
The deployment steps above provide a basic example of deploying AutoMQ in S3WAL mode. In real-world production environments, users can choose more advanced configurations such as EBSWAL and integrate Auto-Scaler support. For the full configuration file, refer to the Helm Chart Values Readme.

Configuring the WAL Type
In the installation steps above, S3WAL was used as the example. AutoMQ supports deployment in both EBSWAL and S3WAL modes. In S3WAL mode, there is no need to mount a WAL data volume, making the configuration relatively straightforward: first, configure the global.config.s3.wal.path parameter; then disable controller.persistence.wal.enabled and broker.persistence.wal.enabled.
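A sketch of the S3WAL settings described above; the exact value format for s3.wal.path is an assumption, so check the chart README:

```yaml
global:
  config:
    s3:
      wal:
        path: <your-wal-path>   # S3WAL: WAL written directly to object storage
controller:
  persistence:
    wal:
      enabled: false            # no WAL volume needed in S3WAL mode
broker:
  persistence:
    wal:
      enabled: false
```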
Setting Credentials
AutoMQ supports accessing external resources using either static AccessKeys or dynamic IAM Roles. To prevent leakage of static AccessKeys in production environments, it is recommended to use the dynamically generated credentials provided by the cloud provider's IAM Roles.

When using IAM Role credentials, attach the authorization policy to the Role as described in Step 1, then modify the Credentials configuration accordingly. The format of the credentials parameter is outlined in the following table:
| Deployment Environment | Parameter Value |
|---|---|
| AWS | instance://?role=<your-instance-profile> (enter the IAM instance profile, not the Role ARN) |
| Google Cloud | instance://?role=<your-service-account-name> (enter the name of the GCP ServiceAccount) |
| Azure | instance://?role=<your-managed-identity-client-id> (enter the Azure Managed Identity Client ID) |
| Alibaba Cloud | instance://?role=<your-role-id> (enter the Alibaba Cloud RAM Role name) |
Set Fine-grained Scheduling Policies
In Kubernetes, AutoMQ's fine-grained scheduling policy is implemented using node affinities and tolerations. Users are advised to customize the label matching rules based on their node types.

Tolerations
It is recommended to add a taint to the Kubernetes node group with the key "dedicated", operator "Equal", value "automq", and effect "NoSchedule", then configure the corresponding toleration rules in global.tolerations so that Pods can be scheduled:
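For example, with a taint of dedicated=automq:NoSchedule on the node group, the matching toleration is:

```yaml
global:
  tolerations:
    - key: dedicated
      operator: Equal
      value: automq
      effect: NoSchedule
```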
Node Affinities

Override the default values in the controller/agent configuration to align with your node labels (e.g., node-type: automq-worker):
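A sketch in standard Kubernetes node-affinity form; whether the chart exposes it exactly as controller.affinity is an assumption, so check the values README:

```yaml
controller:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-type
                operator: In
                values:
                  - automq-worker
```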
Set up Auto-scaling

Number of Controllers
By default, the cluster deploys 3 Controller Pods; users can customize the number of Controller replicas.

Note: Once the cluster is deployed, adjusting the number of Controller replicas is not supported, to avoid unforeseen risks.
Number of Brokers
The number of Brokers is managed by the broker.replicas parameter, which allows for horizontal scaling. By default, there are 0 Brokers.
Auto-scaling Configuration
By default, HPA (Horizontal Pod Autoscaler) is disabled. To activate it, two conditions must be fulfilled:
- broker.replicas > 0
- Enable and configure the parameters in global.autoscaling.hpa:
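A hypothetical sketch fulfilling both conditions; the field names under global.autoscaling.hpa are assumptions modeled on common HPA values and should be verified against the chart documentation:

```yaml
broker:
  replicas: 1          # condition 1: must be greater than 0
global:
  autoscaling:
    hpa:
      enabled: true    # condition 2: enable and configure HPA
      minReplicas: 1   # assumed field name
      maxReplicas: 6   # assumed field name
```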
Identity Authentication Configuration
AutoMQ allows overriding the protocol listeners and enabling secure authentication. By default, it uses the following ports:
- Client-to-server access: 9092 (PLAINTEXT).
- Internal communication between Controllers: 9093 (PLAINTEXT).
- Internal communication between Brokers: 9094 (PLAINTEXT).
Additionally, you can set a password for authentication; by default, a random one is generated.