Deploy to Azure AKS

As described in Overview▸, AutoMQ supports deployment on Kubernetes. This document outlines the installation process for deploying AutoMQ on the Azure AKS platform.

The terms AutoMQ product service provider, AutoMQ service provider, and AutoMQ in this document specifically refer to AutoMQ HK Limited and its subsidiaries.

Operation Process

Step 1: Install Environment Console

As described in Overview▸, AutoMQ supports deployment on AKS clusters. In the AKS deployment mode, you first install the AutoMQ Console and then use the console interface to operate AKS and deploy the cluster to it.

On Azure, it's recommended to install the console using ARM templates. Refer to Install Env on Azure▸.

Step 2: Create Managed Identity

When deploying AutoMQ to AKS, the AutoMQ data plane cluster requires a dedicated node pool. This node pool must be bound to an independent Managed Identity to access cloud resources. Therefore, before creating the AKS node pool, you need to create a Managed Identity in advance and grant the necessary permissions for AutoMQ. The steps are as follows:

  1. Visit the Managed Identity Console. Click "Create." Choose the following parameters:
  • Resource Group: It is recommended to keep it consistent with the AutoMQ Console resource group.

  • Region: It is recommended to keep it consistent with the target region for deploying AutoMQ.

AutoMQ requires that cloud resources such as the environment console, AKS, Storage Account, and Private DNS Zone be located within the same Resource Group. If they are not, additional authorization for the AutoMQ console will be necessary.

  2. Click Next to create the Managed Identity. Once created, click on the Managed Identity, navigate to Overview, and record the Client ID, which will be needed when creating the deployment configuration in Step 4.
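
If you prefer the CLI, the same identity can be created with the Azure CLI. This is a minimal sketch; the identity name, resource group, and region below are illustrative placeholders.


# Create a dedicated Managed Identity for the AutoMQ node pool
az identity create \
  --name automq-node-identity \
  --resource-group <your-resource-group> \
  --location <your-region>

# Record the Client ID for the deployment configuration in Step 4
az identity show \
  --name automq-node-identity \
  --resource-group <your-resource-group> \
  --query clientId -o tsv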

Step 3: Create an AKS Cluster

As described in Overview▸, you should pre-create a standalone AKS cluster designated for AutoMQ. Visit the Azure AKS product console and follow the steps below.

  1. Log in to the Azure AKS Console. Click Create Cluster.
  2. Configure the basic cluster information, focusing on the configuration items below while leaving other options at their default settings.

    1. Region: Choose the correct region.

    2. Resource Group: Select the appropriate resource group, preferably the same as the console, Storage Account, DNS, etc.

    3. Availability Zone: Choose 3 availability zones.

    4. AKS Pricing Tier: Choose Standard.

    5. Authentication and Authorization: Select Local account with Kubernetes RBAC.

  3. Configure the node pool: Set up a dedicated node pool for AutoMQ (you can modify the default node pool or create a new one). Refer to the reference values below for the configuration items to modify; it is recommended to keep other configurations unchanged.

    1. Mode: Select User mode.

    2. Availability zones: It is recommended to choose at least 3 availability zones.

    3. Node size: Refer to Overview▸ and select the Standard D4as v5 instance type.

    4. Scale method: Choose the Autoscale mode to automatically scale nodes based on deployment needs.

    5. Minimum node count: It is recommended to select at least 3 nodes.

    6. Max pods per node: It is advised to set this to 30.

    7. Taints: Add a taint configuration with the key as dedicated, value as automq, and effect as NoSchedule.

  4. Configure the network. Choose the target Virtual Network and Subnet.

    1. Network Configuration: Ensure that you select the Azure CNI Node Subnet mode so that Pods can directly use VNet IPs.

    2. Bring your own Azure Virtual Network: Make sure this option is checked and enabled.

    3. Virtual Network: Select the private network where AutoMQ should be deployed.

    4. Cluster subnet: Pick the subnet where AutoMQ should be deployed.

  5. Keep the remaining default configurations and create the AKS cluster. A CLI sketch of the equivalent cluster and node pool settings follows below.
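
For reference, the console settings above roughly correspond to the following Azure CLI sketch. The cluster name, resource group, subnet ID, and maximum node count are illustrative assumptions; the console flow above remains the source of truth.


# Create the AKS cluster: Standard tier, 3 availability zones,
# Azure CNI with a bring-your-own subnet
az aks create \
  --name automq-aks \
  --resource-group <your-resource-group> \
  --tier standard \
  --zones 1 2 3 \
  --network-plugin azure \
  --vnet-subnet-id <your-cluster-subnet-id>

# Add the dedicated AutoMQ node pool: User mode, autoscaling from 3 nodes,
# 30 pods per node, and the dedicated=automq:NoSchedule taint
az aks nodepool add \
  --cluster-name automq-aks \
  --resource-group <your-resource-group> \
  --name automq \
  --mode User \
  --node-vm-size Standard_D4as_v5 \
  --zones 1 2 3 \
  --enable-cluster-autoscaler \
  --min-count 3 \
  --max-count 10 \
  --max-pods 30 \
  --node-taints dedicated=automq:NoSchedule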

  6. Go to the VMSS console. Find the VMSS associated with the AutoMQ dedicated node pool and associate the Managed Identity created in Step 2.

The Pods of AutoMQ data plane components access Azure cloud resources by utilizing the Managed Identity linked to the AKS Node.

It is advisable to use the node pool name and resource group to filter and locate the appropriate VMSS. The VMSS naming format is aks-{NodePoolName}-xxxx-vmss.

Once the correct VMSS is located, click on the Security >> Identity page, and associate the previously created Managed Identity.
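
The same association can be done with the Azure CLI; a minimal sketch follows. The VMSS name and node resource group are illustrative (node pool VMSSs live in the AKS-managed node resource group, typically named MC_...).


# Attach the user-assigned Managed Identity to the node pool's VMSS
az vmss identity assign \
  --name aks-automq-xxxx-vmss \
  --resource-group <aks-node-resource-group> \
  --identities <managed-identity-resource-id>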

  7. Assign the Network Contributor role to the AKS cluster. Navigate to the Virtual Network console, find the network where the AKS cluster is located, click the IAM option, and add a role assignment.

Deploying AutoMQ to AKS requires a Load Balancer to be used as the Service IP. To create a Load Balancer, AKS must be granted the Network Contributor role.

During authorization, first select the Network Contributor role, then search for System-assigned managed identity, and choose the managed identity used by the current AKS cluster.
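
A CLI sketch of the same role assignment, assuming the cluster uses a system-assigned identity; the cluster name and scope are illustrative placeholders.


# Look up the AKS cluster's system-assigned identity
AKS_PRINCIPAL_ID=$(az aks show \
  --name automq-aks \
  --resource-group <your-resource-group> \
  --query identity.principalId -o tsv)

# Grant Network Contributor on the cluster's Virtual Network
az role assignment create \
  --assignee "$AKS_PRINCIPAL_ID" \
  --role "Network Contributor" \
  --scope <virtual-network-resource-id>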

  8. Create a Placeholder Deployment for the node pool used by AutoMQ to accelerate failover in node-failure scenarios.

Working principle:

A Placeholder Deployment deploys a low-priority "placeholder" application on the nodes of a Kubernetes cluster, reserving several nodes. When nodes hosting AutoMQ cluster Pods fail, these reserved nodes can be quickly reclaimed for rapid recovery.

The Placeholder Deployment can be managed using kubectl or through the Kubernetes console.

First, click the link to download the priority declaration file automq-low-priority.yaml, then execute the following command to create the priority class.


kubectl apply -f automq-low-priority.yaml
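
For reference, the downloaded file declares a Kubernetes PriorityClass. A minimal sketch of that shape is shown below; the name is assumed to match the file, the value is illustrative, and the downloaded file remains the source of truth.


apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  # Assumed name; the Placeholder Deployment references this class
  name: automq-low-priority
# A negative value keeps placeholder Pods below default-priority workloads,
# so they are preempted first during a failover
value: -1000000
globalDefault: false
description: "Low-priority class for AutoMQ placeholder Pods"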

Then, click the link to download the automq-aks-placeholder.yaml file. You may need to adjust the parameters within based on the actual node pool deployment:

  • metadata.name: It is recommended to modify this to a meaningful Placeholder name, such as placeholder-for-nodegroup-A.

  • replicas : The number of placeholder Pods to reserve; the default is 1. When deploying across multiple availability zones, it is recommended to reserve one machine per zone, i.e., set this to the number of availability zones.

  • affinity.nodeAffinity : Used to select the nodes for Placeholder deployment. Adjust the key and values in matchExpressions to accurately match the AutoMQ node pool. The example YAML file provides two node selection options.

    • kubernetes.azure.com/agentpool : Use the kubernetes.azure.com/agentpool label to filter specific node pools on Azure.

    • node.kubernetes.io/instance-type : Use the node.kubernetes.io/instance-type label to filter specific node VM sizes on Azure.

  • resources :

    • CPU/memory limits should match the specifications of the node pool's instance type, such as 2C16G.

    • CPU/memory requests should be set slightly below the node pool's specifications, for example around 3/4 of actual capacity. This ensures each Placeholder Pod is scheduled onto its own additional node, occupying it exclusively, and prevents other pods in the cluster from consuming those resources, which could otherwise cause scheduling failure due to insufficient resources during an actual failover.

The parameter segments that need modification are shown in the following YAML snippet:


metadata:
  # TODO: Replace with Your Custom Name
  name: {Replace with your custom placeholder deployment name}
spec:
  # TODO: Replace with Your Custom Node Nums
  replicas: 1
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.azure.com/agentpool
                operator: In
                values:
                # TODO: Replace with Your Custom Node Pool Name
                - "Replace with your custom Node Pool Name"
              - key: node.kubernetes.io/instance-type
                operator: In
                values:
                # TODO: Replace with Your Custom Node Pool VM Size
                - "Replace with your custom Node Pool VM Size"
      containers:
      - name: placeholder
        resources:
          # TODO: Replace with Your Custom Memory and CPU Size
          limits:
            cpu: 2000m
            memory: 16Gi
          requests:
            cpu: 1000m
            memory: 12Gi


After modification, execute the following command to install the Placeholder.


kubectl apply -f automq-aks-placeholder.yaml

Once executed, use the following command to check the status of the Placeholder Pod to ensure it is in the Running state and verify whether it has been scheduled to the expected node.


kubectl get pods -l app=low-priority-placeholder -o wide

Step 4: Enter the Environment Console to Create a Deployment Configuration

When first accessing the AutoMQ BYOC console, you need to prepare resources such as an object storage bucket and a private DNS zone, create a deployment configuration, and set the Kubernetes cluster information and authorization before use.

  1. Go to the Storage Account console and create a Blob Container to serve as the Data Bucket. Ensure that the region of the Storage Account matches the AutoMQ console, AKS, and other resources.

After creation, record the current container name and Blob Service Endpoint.
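
A minimal CLI sketch of the same step, assuming an existing Storage Account; the container and account names are illustrative placeholders.


# Create the Blob Container used as the AutoMQ Data Bucket
az storage container create \
  --name automq-data \
  --account-name <your-storage-account> \
  --auth-mode login

# Record the Blob Service Endpoint for the deployment configuration
az storage account show \
  --name <your-storage-account> \
  --resource-group <your-resource-group> \
  --query primaryEndpoints.blob -o tsv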

  2. Navigate to the Private DNS Zone product console and create a Private Zone for the subsequent access-point resolution of AutoMQ instances. Ensure that the Zone's region matches the AutoMQ console, AKS, and other resources.

Remember to select Virtual Network Links and associate the Virtual Network of AutoMQ and AKS with the current Zone.
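
A CLI sketch of the same setup; the zone name and link name are illustrative assumptions.


# Create the Private DNS Zone for AutoMQ access-point resolution
az network private-dns zone create \
  --resource-group <your-resource-group> \
  --name automq.private

# Link the zone to the Virtual Network used by AutoMQ and AKS
az network private-dns link vnet create \
  --resource-group <your-resource-group> \
  --zone-name automq.private \
  --name automq-vnet-link \
  --virtual-network <virtual-network-resource-id> \
  --registration-enabled false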

  3. Log in to the console, enter the cluster name and other configurations, then click Next.

    1. Deployment Type: Select Kubernetes.

    2. Kubernetes Cluster: Enter the cluster name for the AKS (Azure Kubernetes Service) cluster.

    3. AKS Resource Group: Enter the resource group associated with the AKS cluster.

    4. DNS ZoneId: Enter the ZoneId of the private DNS zone used for deploying AutoMQ.

    5. DNS Resource Group: Specify the Resource Group for the Private DNS Zone being utilized to deploy AutoMQ.

    6. Bucket Name: Enter the data Bucket designated for storing messages during the AutoMQ deployment. Support for multiple Buckets is available.

    7. Bucket Endpoint: Provide the data Bucket endpoint intended for storing messages during the AutoMQ deployment. Support for multiple Buckets is available.

  4. Once you have entered the cloud resource information, follow the console guidance to authorize the Managed Identity used by the data-plane node pool (i.e., the Managed Identity created in Step 2, identified by its Client ID). The authorization process is as follows:

Navigate to the Managed Identity console, locate the Managed Identity from Step 2, click the IAM option, and enter the Role assignments menu. Grant this Managed Identity the Contributor role.

Select the previously created Managed Identity to complete authorization.

Go to the Storage Account Console, click on the IAM option, and proceed to grant the current Managed Identity the Storage Blob Data Contributor role.

After completing the authorization, enter the ClientID of the current Managed Identity and click Next to preview the creation.
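
A CLI sketch of the two role assignments above; the Client ID and scopes are illustrative placeholders.


# Grant the node pool's Managed Identity the Contributor role
az role assignment create \
  --assignee <managed-identity-client-id> \
  --role "Contributor" \
  --scope <managed-identity-resource-id>

# Grant Storage Blob Data Contributor on the Storage Account holding the Data Bucket
az role assignment create \
  --assignee <managed-identity-client-id> \
  --role "Storage Blob Data Contributor" \
  --scope <storage-account-resource-id>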

  5. Preview the deployment configuration information and complete the creation. You can then go to the instance management page to create an instance.