Deploy to Baidu Cloud CCE
As described in Overview▸, AutoMQ supports deployment on Kubernetes. This article outlines the installation process for deploying AutoMQ on the Baidu Cloud CCE platform.
In this article, references to AutoMQ Product Service Provider, AutoMQ Service Provider, and AutoMQ specifically refer to AutoMQ HK Limited and its subsidiaries.
Operation Process
Step 1: Install Control Console
As described in Overview▸, AutoMQ supports deployment to a CCE cluster. In this mode, you first install the AutoMQ console, then use the console interface to deploy the data plane cluster to CCE.
Note:
To deploy an AutoMQ data plane cluster on CCE, you need to pull Docker images and Helm Chart artifacts from the public internet. Therefore, the VPC environment must support external internet access via SNAT or other methods.
On Baidu Cloud, you install the AutoMQ console directly from a BCC image. The steps are as follows:
- The AutoMQ console accesses cloud resources through IAM role authorization, so before installing the console you must grant it permission to access cloud resources. Go to the IAM Console and complete the following steps:
  - Create a custom authorization policy using the policy content below.
  - Create a custom IAM role, choosing the authorization type Cloud Product > BCC Instance.
  - Attach the authorization policy to the IAM role.
{
  "id": "policy_8dc24a2639514662a4cc129f47dcb310",
  "version": "v2",
  "accessControlList": [
    {
      "service": "bce:cce",
      "region": "bj",
      "resource": ["*"],
      "effect": "Allow",
      "permission": ["READ"]
    },
    {
      "service": "bce:ld",
      "region": "bj",
      "resource": ["*"],
      "effect": "Allow",
      "permission": ["READ", "OPERATE", "FULL_CONTROL"]
    },
    {
      "service": "bce:network",
      "region": "bj",
      "resource": ["*"],
      "effect": "Allow",
      "permission": ["READ", "OPERATE", "FULL_CONTROL"]
    },
    {
      "service": "bce:bos",
      "region": "*",
      "resource": ["*"],
      "effect": "Allow",
      "permission": ["ListBuckets"]
    },
    {
      "service": "bce:bos",
      "region": "*",
      "resource": ["*"],
      "effect": "Allow",
      "permission": ["READ", "FULL_CONTROL", "LIST"]
    }
  ]
}
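Before pasting the policy into the IAM console, it can help to catch JSON syntax slips locally. A small sketch (the abbreviated policy and the /tmp path are illustrative):

```shell
# Write an abbreviated copy of the policy and check that it parses as JSON.
cat > /tmp/automq-policy.json <<'EOF'
{
  "version": "v2",
  "accessControlList": [
    {
      "service": "bce:cce",
      "region": "bj",
      "resource": ["*"],
      "effect": "Allow",
      "permission": ["READ"]
    }
  ]
}
EOF
python3 -m json.tool /tmp/automq-policy.json > /dev/null && echo "valid JSON"
```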
- Go to the BCC Console and click Create Instance. Configure the following parameters to complete creation of the BCC instance:
  - Region and Availability Zone: select the region and availability zone where the data plane cluster will later be deployed, to ensure VPC-internal network communication.
  - Instance Configuration: at least a 2-core, 8 GB instance specification is recommended.
  - Image Type: select Custom Image. Contact AutoMQ technical support in advance to obtain the AutoMQ console host image.
  - Storage: in addition to the default system disk, create an additional cloud disk of at least 40 GB (general-purpose SSD recommended).
  - Private Network: select the VPC where the data plane will later be deployed, i.e., the VPC where the applications using AutoMQ reside.
  - Security Group: choose as needed, but make sure the inbound rules allow port 8080; the AutoMQ console exposes its web service on port 8080.
- After the instance is created, select the BCC instance, click More Actions on the right, and choose Set IAM Role. Bind the IAM role created in the previous step to the BCC instance hosting the AutoMQ console.
- Log in to the console to proceed with initialization.

On first access, the username for the Baidu Cloud AutoMQ console is admin and the initial password is the instance ID of the BCC instance. You must change the password immediately after the first login.
Step 2: Create a CCE Cluster
As described in Overview▸, you need to pre-create a dedicated CCE cluster for AutoMQ. Open the Baidu Cloud CCE product console and follow the steps below.
- Log in to the Baidu Cloud CCE Console. Click Create Cluster, and select Standard Managed Cluster.

- Choose the billing model and version based on the recommendations. For cluster size, it is advised to select at least 50 nodes.

Refer to the following requirements for the network configuration settings:
- Private Network: select the VPC where the application resides, ensuring it is the same network the console is located in.
- API Server Access: choose to automatically create a load balancer.
- API Server Subnet: select the subnet planned for installation.
- Container Subnet: select the subnet for deploying AutoMQ instances. Choosing either one or three subnets is recommended.

- Create new nodes for deploying Kubernetes system components. When adding Worker nodes, note the following configurations:
  - Number of nodes: at least 2 nodes are recommended.
  - Node specifications: BCC instances with at least 4C8G specifications are recommended.

- Click Create Cluster and wait a few minutes for creation to complete.
- After the cluster is created, open the cluster details, navigate to Operations and Management > Component Management, and install the CCE CSI CDS storage plugin.

Step 3: Create IAM Role and Dedicated AutoMQ CCE Node Pool
As described in Overview▸, you need to create a node pool dedicated to AutoMQ and set up the corresponding IAM role for subsequent instance deployments. Follow the steps below to create the IAM role and a compliant node pool.
- Go to the IAM Console and create an IAM role, without granting any permissions for now.

- Open the details of the CCE cluster created in Step 2, click the Node Management menu, then Create Node Group.

- Refer to the table below to set the custom parameters and complete creation of the node group. For parameters not listed in the table, keep the recommended default values.

Note: when creating a node group, only single-zone or three-zone configurations are supported. If you select a different number of zones, instances cannot be created later.

| Parameter | Value Description |
|---|---|
| Node Group Name | |
| Node Configuration | Warning: AutoMQ must run on a VM of the specified instance type. If you select a non-preset instance type when creating the node pool, you will not be able to use that node pool later. |
| Role Name | Note: when creating a dedicated node group for AutoMQ, it is recommended to create a new IAM role and later assign it the appropriate permissions in the AutoMQ console. Reusing other IAM roles may grant excessive permissions. |
| Taints | |

- Bind the IAM role information to the node pool, using the role name created in the previous step. Additionally, add a taint to the node pool with key dedicated, value automq, and effect NO_SCHEDULE.
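The taint above maps to standard Kubernetes taints and tolerations: only pods that tolerate it (such as AutoMQ's) can be scheduled onto the dedicated nodes. A sketch of the equivalent node taint and a matching pod toleration (the toleration shown is illustrative; AutoMQ's own deployment supplies its actual tolerations):

```yaml
# Node-side taint (CCE shows the effect as NO_SCHEDULE; in Kubernetes it is NoSchedule)
taints:
- key: dedicated
  value: automq
  effect: NoSchedule
---
# Pod-side toleration required to schedule onto the tainted nodes (illustrative)
tolerations:
- key: dedicated
  operator: Equal
  value: automq
  effect: NoSchedule
```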

- Click Create to create the node group.
- Create a Placeholder Deployment for the node group used by AutoMQ, to accelerate failover in node-failure scenarios.
Working Principle:
The Placeholder Deployment deploys a low-priority "placeholder" application on nodes of the Kubernetes cluster, pre-occupying several nodes. When a node hosting an AutoMQ cluster Pod fails, a Placeholder node can be quickly preempted, enabling rapid recovery.
Deploying a Placeholder Deployment can be done using either the kubectl command or the Kubernetes console.
First, click the link to download the priority declaration file automq-low-priority.yaml, then execute the following command to create the priority declaration:
kubectl apply -f automq-low-priority.yaml
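The downloaded file is expected to define a low PriorityClass that the placeholder pods reference, so that higher-priority AutoMQ pods can preempt them. A sketch of the general shape (the name, value, and description here are assumptions; rely on the actual file contents):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority        # assumed name; use whatever automq-low-priority.yaml defines
value: -1000                # negative value: lower priority than default workloads
globalDefault: false
description: "Low priority for AutoMQ placeholder pods so they can be preempted."
```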
Then, click the link to download the automq-cce-baidu-placeholder.yaml file. Modify the following parameters based on the actual node pool deployment:
- metadata.name: change this to a meaningful Placeholder name, such as placeholder-for-nodegroup-A.
- replicas: the number of reserved Placeholder pods; defaults to 1. When deploying across multiple availability zones, it is recommended to reserve one machine per zone, i.e., set this to the number of availability zones.
- affinity.nodeAffinity: selects the nodes for Placeholder deployment. Adjust the key and values in matchExpressions to precisely match AutoMQ's node pool. The sample YAML file provides two options for node selection:
  - ccs.bj.baidubce.com/node-pool: use this CCE label to filter for the specified node pool.
  - node.kubernetes.io/instance-type: use this CCE label to filter for the specified node type.
- resources:
  - Set the CPU and memory limits to match the node group's specification, such as 2C16G.
  - Set the CPU and memory requests slightly below the node group's specification, for instance at a 3/4 ratio. This ensures the Placeholder Pod can be scheduled onto the reserved nodes and keeps them exclusively occupied, preventing other Pods in the cluster from unexpectedly taking them, which could otherwise cause scheduling failures during an actual failover due to insufficient resources.

The parameters to modify are shown in the YAML excerpt below:
metadata:
  # TODO: Replace with Your Custom Name
  name: {Replace with your custom placeholder deployment name}
spec:
  # TODO: Replace with Your Custom Node Nums
  replicas: 1
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: ccs.bj.baidubce.com/node-pool
                operator: In
                values:
                # TODO: Replace with Your Custom Node Pool Name
                - "Replace with your custom Node Pool Name"
              - key: node.kubernetes.io/instance-type
                operator: In
                values:
                # TODO: Replace with Your Custom Node Pool VM Size
                - "Replace with your custom Node Pool VM Size"
      containers:
      - name: placeholder
        resources:
          # TODO: Replace with Your Custom Memory and CPU Size
          limits:
            cpu: 2000m
            memory: 16Gi
          requests:
            cpu: 1000m
            memory: 12Gi
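Before applying the modified file, a quick check that no TODO placeholders remain can prevent applying an unedited template. A sketch (the demo file here stands in for the real automq-cce-baidu-placeholder.yaml):

```shell
# Demo: scan a YAML file for unreplaced "Replace with your custom" markers.
# In real use, point this at automq-cce-baidu-placeholder.yaml.
cat > /tmp/placeholder-demo.yaml <<'EOF'
metadata:
  name: placeholder-for-nodegroup-a
spec:
  replicas: 3
EOF
if grep -q "Replace with your custom" /tmp/placeholder-demo.yaml; then
  echo "unreplaced placeholders found: edit the file before kubectl apply"
else
  echo "ok: no placeholders left"
fi
```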
Once modifications are complete, execute the following command to install Placeholder.
kubectl apply -f automq-cce-baidu-placeholder.yaml
After execution, use the following command to check the status of the Placeholder Pod, ensure its status is Running, and verify that it is scheduled to the desired node.
kubectl get pods -l app=low-priority-placeholder -o wide
Step 4: Access the Environment Console and Create the Deployment Configuration
When you first enter the AutoMQ BYOC console, you need to create a deployment configuration to set up Kubernetes cluster information, bucket information, and so on, before creating an instance.
- Copy the Cluster ID of the CCE cluster created in step 2.

- Find the cluster credentials menu and view it to obtain the Kubeconfig configuration file.
- Log in to the console, enter the Cluster ID and Kubeconfig configuration, and click Next.
  - Deployment Type: select Kubernetes.
  - Kubernetes Cluster: enter the cluster ID of the CCE cluster.
  - Kubeconfig: paste the content copied in the previous step.
  - DNS ZoneId: enter the internal DNS ZoneId used to deploy AutoMQ.
  - Bucket Name: specify the data bucket used to store AutoMQ messages; multiple BOS buckets can be added.
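Before pasting the Kubeconfig into the console, it can be worth double-checking that it points at the intended CCE cluster's API server. A sketch using a synthetic kubeconfig (the server address is a placeholder):

```shell
# Demo: pull the API server endpoint out of a kubeconfig file.
# In real use, run this against the kubeconfig downloaded from the CCE console.
cat > /tmp/kubeconfig-demo.yaml <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: cce-demo
  cluster:
    server: https://192.0.2.10:6443
EOF
grep -o 'server: .*' /tmp/kubeconfig-demo.yaml
```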

- After entering the cloud resource information, generate the permissions required by the data plane CCE node pool. Following the console guide, create the authorization policy, bind it to the AutoMQ IAM role created in Step 3, record the name of this node pool role, and click Next to preview.

- Preview the deployment configuration information and complete the creation. You can then proceed to the instance management page to create an instance.