
Deploy to Huawei Cloud CCE

As described in Overview▸, AutoMQ supports deployment on Kubernetes. This article describes how to install AutoMQ on the Huawei Cloud CCE platform.

In this article, AutoMQ Product Service Provider, AutoMQ Service Provider, and AutoMQ specifically refer to AutoMQ HK Limited and its subsidiaries.

Procedure

Step 1: Install Environment Console

As described in Overview▸, AutoMQ supports deployment in CCE clusters. In this mode, you first install the AutoMQ console and then use the console interface to deploy the cluster to CCE.

On Huawei Cloud, the environment console can be installed through the Marketplace.

Note:

To deploy the AutoMQ data plane cluster on CCE, Docker images and Helm Charts must be pulled from the public network. The VPC therefore needs public network access via SNAT or a similar method. For instructions on configuring public SNAT, refer to the Appendix of the Install Env via Huawei Marketplace▸ document.

After the AutoMQ console is installed, you need to obtain the console address, initial username, and password from either the console interface or the Terraform output menu.

Step 2: Create CCE Cluster

As described in Overview▸, you need to create a dedicated CCE cluster in advance for AutoMQ. Open the Huawei Cloud CCE console and follow the steps below.

  1. Log in to the Huawei Cloud CCE Console and click Purchase Cluster.
  2. Select CCE Turbo as the cluster type and follow the recommendations for the billing mode and version. A cluster size of 200-1000 nodes is suggested.

The network configuration needs to be set according to the following requirements:

  • Node subnet: Select a subnet with a sufficient IP range, at least a /20 block is recommended, so that node creation does not fail later due to address exhaustion.

  • Container subnet and service subnet: Likewise, choose subnets with sufficient IPs, at least a /20 block is recommended, so that Pod creation does not fail later.

  • Service forwarding mode: Be sure to select IPVS mode.

Note:

When creating the CCE cluster, it is recommended to deselect the observability and local domain name resolution acceleration plugins, to avoid excessive node resource consumption that could cause elastic scaling anomalies.

  3. Click Create Cluster and wait a few minutes for the creation to complete.

  4. Once the cluster is created, open the cluster details and install the CCE Cluster Elastic Engine plugin from the Plugin Center.

Note:

When deploying the elastic engine plugin, select "Small Scale". This prevents the CCE elastic scaling components from occupying too many node resources, which could cause the installation to fail.

  5. Go to the cluster's Configuration Center, open the Network Configuration tab, and enable Pod Access Metadata. Confirm and submit.
  6. Go to the cluster's Configuration Center, open the Cluster Auto-scaling tab, enable scale-in, and check Ignore CPU and Memory Pre-allocation for DaemonSet Containers. Confirm and submit.

Step 3: Create a Public Node Pool for the CCE Cluster

As described in Overview▸, a public node pool is needed for deploying the CCE system components. Follow the steps below to create a node pool that meets these requirements.

  1. Open the details of the CCE cluster created in Step 2, click the Node Management menu, and then Create Node Pool. At least 2 nodes with 4-vCPU/8 GiB models are recommended for deploying the CCE system components.

Step 4: Create IAM Delegation and AutoMQ Dedicated CCE Node Pool

As described in Overview▸, a dedicated node pool for AutoMQ and a corresponding IAM delegation are needed for subsequent instance deployment. Follow the steps below to create them.

  1. Go to the IAM console and create an IAM delegation without granting permissions for now.
  2. Open the details of the CCE cluster created in Step 2, click the Node Management menu, and select Create Node Pool.
  3. Follow the table below to set the custom parameters and complete the creation of the node pool. Do not modify any parameters not explained in the table; keep the default recommended values.

When creating the node pool, only one or three availability zones are supported. If a different number of availability zones is selected, instances cannot be created later.

Parameter Settings

Node Pool Name
  • Description: Enter a distinguishable name according to your business semantics.

Node Type
  • Description: The VM model of the node pool. Refer to Overview▸ for the supported models and fill in one of them.

Warning: AutoMQ must run on VMs of the predefined models. If a non-predefined model is selected when creating the node pool, the node pool cannot be used later.

Availability Zone
  • Description: Choose one or three availability zones according to the actual needs of the AutoMQ cluster.

Note:
AutoMQ requires that the availability zones for the subsequent cluster creation and node pool creation must be exactly the same. Therefore, if you need to create a single availability zone AutoMQ cluster, select a single availability zone node pool here; if you need to create a three availability zone AutoMQ cluster, select a three availability zone node pool here. The two cannot be mixed.

Delegation Name
  • Description: AutoMQ clusters need access to OBS, networking, and other cloud services, so the node pool must be authorized to perform these operations. Create a separate IAM delegation and select the one created earlier.

Note:
When creating the AutoMQ-exclusive node pool, it is recommended to create a new IAM delegation and later grant it the required operational permissions in the AutoMQ console. Reusing an existing IAM delegation may grant excessive permissions.

Taint
  • Description: To prevent other workloads in the Kubernetes cluster from occupying the dedicated AutoMQ nodes, apply a taint to the AutoMQ node pool.
  • The taint key is dedicated, the value is automq, and the effect is NoSchedule.
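For context, a Pod can only be scheduled onto the tainted AutoMQ node pool if it carries a matching toleration. The snippet below is an illustrative sketch of what such a toleration looks like in a Pod spec; you do not need to apply it manually, since AutoMQ's own workloads are expected to declare it themselves.

```yaml
# Illustrative only: a toleration matching the taint configured above.
# Pods without this toleration will not be scheduled onto the AutoMQ node pool.
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "automq"
    effect: "NoSchedule"
```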
  4. Bind the delegation to the node pool, selecting the IAM delegation created in the previous step. Also add the taint to the node pool: the key is dedicated, the value is automq, and the effect is NoSchedule.
  5. After the node pool is created, click Elastic Scaling and enable the scaling rule for the specified availability zones (optional; if not configured, you must add nodes manually).

When setting the elastic scaling rules for the node pool, ensure the following two configurations are correct:

Number of nodes range: Retain at least 1 node, and size the range based on the planned AutoMQ cluster scale. If the range is too small, there will not be enough nodes for deployment.

Specification selection: Make sure to enable all machine types that meet the requirements in all availability zones.

  6. Click the node pool's Scaling menu and expand the initial node capacity. Expanding by 1 node per availability zone is recommended.
  7. Create a Placeholder Deployment for the AutoMQ node pool to accelerate failover when a node fails.

Working Principle:

A Placeholder Deployment runs a low-priority "placeholder" application on cluster nodes to reserve capacity in advance. If a node hosting an AutoMQ Pod fails, the Pod can quickly preempt a node held by a Placeholder, enabling rapid recovery.

You can deploy Placeholder Deployment using the kubectl command or through the Kubernetes console.

First, click the link to download the priority declaration file automq-low-priority.yaml, then run the following command to create the priority class.


kubectl apply -f automq-low-priority.yaml
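For context, the low PriorityClass is what allows AutoMQ Pods to preempt the placeholder Pods. The downloaded automq-low-priority.yaml is authoritative; a minimal PriorityClass of this kind typically looks like the following sketch, where the name and value are assumptions for illustration only.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  # Assumed name for illustration; use the name from the downloaded file.
  name: automq-low-priority
# A negative value keeps placeholder Pods below the default priority (0),
# so regular workloads, including AutoMQ Pods, can preempt them.
value: -1000
globalDefault: false
description: "Low priority class for placeholder Pods."
```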

Then, click the link to download the automq-cce-huawei-placeholder.yaml file and modify the following parameters according to the actual node pool:

  • metadata.name: It is recommended to change it to a meaningful Placeholder name, such as placeholder-for-nodegroup-A.

  • replicas: The number of reserved Placeholder Pods; the default is 1. If deploying across multiple availability zones, reserve 1 machine per availability zone, i.e. set replicas to the number of availability zones.

  • affinity.nodeAffinity: Selects the nodes where the Placeholder is deployed. You must modify matchExpressions, updating the key and values to precisely match the AutoMQ node pool. The example YAML file provides two options for node filtering:

    • node.kubernetes.io/pool-name: Filter by node pool using the CCE node pool label.

    • node.kubernetes.io/instance-type: Filter by node type using the CCE instance-type label.

  • resources:

    • The CPU/memory limits should match the node pool specification, such as 2C16G.

    • The CPU/memory requests should be slightly below the node pool specification, for example about 3/4 of it. This ensures the Placeholder Pod can be scheduled onto a dedicated node and holds it exclusively, preventing other Pods in the cluster from occupying the node and causing scheduling failures due to insufficient resources during an actual failover.

The snippet that needs modification is shown in the YAML below:


metadata:
  # TODO: Replace with Your Custom Name
  name: {Replace with your custom placeholder deployment name}
spec:
  # TODO: Replace with Your Custom Node Nums
  replicas: 1
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node.kubernetes.io/pool-name
                    operator: In
                    values:
                      # TODO: Replace with Your Custom Node Pool Name
                      - "Replace with your custom Node Pool Name"
                  - key: node.kubernetes.io/instance-type
                    operator: In
                    values:
                      # TODO: Replace with Your Custom Node Pool VM Size
                      - "Replace with your custom Node Pool VM Size"
      containers:
        - name: placeholder
          resources:
            # TODO: Replace with Your Custom Memory and CPU Size
            limits:
              cpu: 2000m
              memory: 16Gi
            requests:
              cpu: 1000m
              memory: 12Gi


After making the modifications, execute the following command to install the Placeholder.


kubectl apply -f automq-cce-huawei-placeholder.yaml

Once executed, run the following command to check that the Placeholder Pod is in the Running state and has been scheduled onto the expected node.


kubectl get pods -l app=low-priority-placeholder -o wide

Step 5: Enter the Environment Console and Create a Deployment Configuration

When you first enter the AutoMQ BYOC console, you need to create a deployment configuration with the Kubernetes cluster information, OBS bucket, and related settings before you can create an instance.

  1. Copy the Cluster ID of the CCE cluster created in Step 2.
  2. In the CCE console, find the Kubectl configuration menu and obtain the Kubeconfig configuration file.

Click on Kubectl configuration, set it to intranet access, and download the Kubeconfig configuration file.

  3. Log in to the console, enter the cluster ID and Kubeconfig configuration, and click Next.

    1. Cloud Account ID: Enter the principal account ID of the current cloud account, which can be found on the My Credentials page of the Huawei Cloud console.

    2. Deployment Type: Select Kubernetes.

    3. Kubernetes Cluster: Enter the cluster ID of the CCE cluster.

    4. Kubeconfig: Enter the content copied in the previous step.

    5. DNS ZoneId: Enter the ZoneId of the Private DNS used for deploying AutoMQ.

    6. Bucket Name: Enter the data bucket used for storing messages when deploying AutoMQ. Multiple OBS buckets are supported.

Note:

The DNS Zone ID needs to be obtained from the browser address bar, and the data Bucket must not duplicate an existing Ops Bucket.

  4. After filling in the cloud resource information, generate the permissions required by the data plane CCE node pool. Following the console guide, create two custom authorization policies, bind them to the AutoMQ IAM delegation created in Step 4, enter the name of the node pool delegation, and click Next to preview.

Due to constraints of Huawei Cloud's IAM product, the OBS permissions and the ECS-related permissions for the AutoMQ data plane cluster must be created as separate policies and attached.

  5. Preview the deployment configuration and complete the creation. You can then go to the instance management page to create an instance.