Deploy to AWS EKS

As described in Overview▸, AutoMQ supports deployment on Kubernetes. This article details how to install AutoMQ on the AWS EKS platform.

Info

In this article, the terms AutoMQ product service provider, AutoMQ service provider, and AutoMQ specifically refer to AutoMQ HK Limited and its affiliated companies.

Operation Process

Step 1: Install the Environment Console

As described in Overview▸, AutoMQ supports deployment to EKS clusters. In the EKS deployment mode, you first install the AutoMQ console and then use the console interface to operate EKS and deploy the cluster to it.

On AWS, the environment console can be installed either from the Marketplace or with Terraform.

Tip

When installing the environment console as described above (see Install Env from Marketplace▸), the cluster deployment type must be set to Kubernetes. This ensures that steps 2-4, which install the AutoMQ cluster on EKS, are supported.

After the AutoMQ console installation is complete, obtain the environment console address, initial username, and password from either the console interface or the Terraform output.

Step 2: Create EKS Cluster IAM Role and AutoMQ Node Pool IAM Role

Each AWS EKS cluster requires an IAM Role that authorizes the cluster to access AWS cloud resources. Therefore, before creating the EKS cluster, you need to create a dedicated IAM Role for it.

  1. Access the IAM console and click on Create Role. Select the following parameters:
  • Trusted entity type: choose AWS Service.

  • Service Use Case: Select EKS and EKS-Cluster.

  1. Click "Next", enter a custom role name, and create the IAM Role. This role is used for creating the EKS cluster.

Similarly, the AutoMQ data plane cluster requires an independent node pool, and this node pool needs a separate IAM Role for validating its permissions to access cloud resources. Therefore, before creating the EKS node pool, go to the IAM console and create a custom role. The steps are as follows:

  1. Access the IAM console and click on Create Role. Select the following parameters:
  • Trusted entity type: choose AWS Service.

  • Service Use Case: Choose EC2.

  2. Click Next to add the necessary system permissions for the EKS node pool.

    1. AmazonEC2ContainerRegistryReadOnly

    2. AmazonEKS_CNI_Policy

    3. AmazonEKSWorkerNodePolicy

  3. Click Next, enter a custom role name, and create the IAM Role.
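
If you prefer the AWS CLI, the sketch below creates an equivalent node group role and attaches the three policies; the role name AutoMQNodeGroupRole is a placeholder, not a required name.

# Create the node group role with an EC2 trust policy (role name is a placeholder)
aws iam create-role --role-name AutoMQNodeGroupRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the three managed policies required by EKS worker nodes
for policy in AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy AmazonEKSWorkerNodePolicy; do
  aws iam attach-role-policy --role-name AutoMQNodeGroupRole \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done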

Step 3: Create EKS Cluster

As described in Overview▸, users need to create an independent EKS cluster in advance to allocate to AutoMQ. Follow the steps below in the AWS EKS product console.

  1. Log in to the AWS EKS console. Click Create cluster.
  2. Configure basic cluster information. Focus on the configuration items listed below and keep other settings at their defaults.

    1. Select Custom configuration mode.

    2. Disable EKS Auto Mode.

    3. Attach the IAM Role created in Step 2 to the EKS cluster.

  3. Configure the VPC network. Select the target VPC and subnets.
Tip

It is recommended to choose the VPC default security group and select all required private subnets for deploying the cluster.

  4. Keep the other default configurations and create the EKS cluster.
  5. After the EKS cluster is created, add the security group where the AutoMQ console resides to the inbound rules of the EKS cluster security group, so that the AutoMQ console can call and access the EKS cluster.

Edit the inbound rules: add the AutoMQ console's security group as the source and select All traffic for the protocol.
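
A minimal AWS CLI sketch of this rule, assuming <eks-sg-id> and <console-sg-id> are the IDs of the EKS cluster security group and the console's security group:

# Allow all traffic from the AutoMQ console's security group (protocol -1 = all)
aws ec2 authorize-security-group-ingress --group-id <eks-sg-id> \
  --protocol -1 --source-group <console-sg-id>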

Step 4: Create an EKS Public Node Group

As described in Overview▸, users need to create a public node group in the EKS cluster to deploy the EKS system components. Follow the steps below to create a compliant node group.

  1. Go to the details of the EKS cluster created in Step 3, click the Compute menu, and click Create Node Group.
  2. Select the node group IAM Role: you can reuse the IAMRoleforAutoMQDedicatedNodeGroup created by the AutoMQ console, or create the Role as recommended by EKS.
  3. Select the instance type, quantity, and zone-aware subnets for the default node group, then complete the creation. Recommended settings are listed below, followed by a CLI sketch.

    1. Instance type: a default instance type with 2 vCPUs and 4 GiB of memory (2C4G) is recommended.

    2. AMI Type: Change to Amazon Linux 2 (AL2_x86_64).

    3. Quantity: It is recommended to have 2-3 instances to meet the requirements of the EKS system components.

    4. Subnet: It is recommended to specify the subnets needed for EKS deployment.

Danger

Note: Ensure that AMI Type is selected as Amazon Linux 2; the default Amazon Linux 2023 is not supported yet.
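
If you prefer the CLI, a sketch of this node group creation follows; the cluster name, role ARN, and subnet IDs are placeholders, and c5.large is shown as one 2C4G option:

# Create the public node group for EKS system components (values are placeholders)
aws eks create-nodegroup --cluster-name <cluster-name> \
  --nodegroup-name system-nodegroup \
  --node-role <node-role-arn> \
  --subnets <subnet-1> <subnet-2> \
  --instance-types c5.large \
  --ami-type AL2_x86_64 \
  --scaling-config minSize=2,maxSize=3,desiredSize=2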

Step 5: Create a Dedicated AutoMQ EKS Node Group

As described in Overview▸, users need to create a dedicated node group that meets AutoMQ's requirements; AutoMQ requests machines from this node group when deploying instances. Follow the steps below to create a compliant node group and complete IAM authorization.

  1. Go to the details of the EKS cluster created in Step 3, click the Compute menu, and click Create Node Group.
  2. Select the node group IAM Role created in Step 2, and configure the taint: the key is dedicated, the value is automq, and the effect is NO_SCHEDULE.
  3. Select the instance type, quantity, and availability-zone subnets for the AutoMQ dedicated node group, then complete the node group creation (a CLI sketch follows the parameter descriptions below).
Tip

When creating a node group, only one or three availability zones are supported. If any other number of availability zones is selected, instances cannot be created later.

The parameter settings and their value descriptions are as follows.

Machine Configuration

  • Description: Specify the machine type for the node group; refer to Overview▸ for the supported machine types.

Danger

Note: AutoMQ must run on VMs of the specified machine types. If a non-preset machine type is selected when creating the node group, the node group cannot be used in subsequent operations.

AMI Type

Danger

Note: Be sure to select Amazon Linux 2 for the AMI Type. The default Amazon Linux 2023 is currently unsupported.

Subnet

  • Description: Based on the actual needs of the AutoMQ cluster, select one or three zone-aware subnets.

Danger

Note: AutoMQ requires that the availability zones of subsequently created clusters exactly match those of the node group. If you need a single-zone AutoMQ cluster, create a single-zone node group here; if you need a three-zone AutoMQ cluster, create a three-zone node group. The two cannot be mixed.

Quantity

  • Description: It is recommended to start with 3 nodes: set the minimum number of nodes to 3, and evaluate the maximum reasonably based on the scale of the AutoMQ cluster.
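
For reference, a CLI sketch of the dedicated node group with the taint applied; <preset-instance-type> stands for a machine type from the preset list in Overview▸, and the other values are placeholders:

# Create the AutoMQ dedicated node group with the required taint (values are placeholders)
aws eks create-nodegroup --cluster-name <cluster-name> \
  --nodegroup-name automq-nodegroup \
  --node-role <node-role-arn> \
  --subnets <subnet-1> <subnet-2> <subnet-3> \
  --instance-types <preset-instance-type> \
  --ami-type AL2_x86_64 \
  --scaling-config minSize=3,maxSize=<max>,desiredSize=3 \
  --taints key=dedicated,value=automq,effect=NO_SCHEDULE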

Step 6: Initialize the Local Kubectl and AWS CLI Tools

After the AWS EKS cluster is created, some system plugins, such as the CSI and NetworkPolicy components, are not installed by default and must be configured manually. To do so, install the AWS CLI, kubectl, and Helm tools locally.

  1. Install AWS CLI and Kubectl tools. Refer to the following documentation:

    1. Install AWS CLI tools.

    2. Install Kubectl tools.

    3. Install Helm tools.

  2. Run the command below to generate the KubeConfig file at the default path (~/.kube/config). You can also generate it at a custom path, but make sure to set the KUBECONFIG environment variable accordingly.


# Replace the <region> and <cluster-name> parameters
aws eks update-kubeconfig --region <region> --name <cluster-name>
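
To confirm that the generated kubeconfig works, you can list the cluster nodes:

# Verify connectivity to the EKS cluster
kubectl get nodes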

Step 7: Configure EKS AutoScaler

To automatically scale EKS nodes when creating AutoMQ instances and in scaling scenarios, configure the EKS AutoScaler to achieve on-demand node scaling. Follow the configuration steps below:

  1. Download the AutoScaler configuration file from this link.

  2. Modify the EKS cluster name parameter in the configuration file and save it.

  3. Execute the installation command to install the AutoScaler.

# Check the config yaml path
kubectl apply -f cluster-autoscaler-autodiscover.yaml

After the installation is complete, check the running status of the EKS AutoScaler components. If the corresponding Deployment is running, the installation succeeded.
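
A quick check, assuming the default Deployment name cluster-autoscaler from the downloaded manifest:

# The Deployment should report its replicas as ready
kubectl -n kube-system get deployment cluster-autoscaler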

Step 8: Configure EKS CSI Storage Plugin

When creating an EKS cluster, the EKS CSI storage plugin is not created by default and needs to be configured manually. Configuration documentation can be found here.

  1. Refer to the EKS documentation and copy the OIDC Connect Provider URL of the EKS cluster.
  2. Go to the IAM console and create an OIDC Provider for EKS to obtain an IAM identity. Fill in the configuration as follows:

    1. Provider Type: Select OpenID Connect.

    2. Provider URL: Enter the URL copied from the previous step.

    3. Audience: Fill in sts.amazonaws.com.

  3. Create a CSI-specific IAM Role based on this Web Identity. Fill in the configuration items as follows:

    1. Trusted entity type: Choose Web identity.

    2. Identity Provider: Select the Identity Provider created in the previous step.

    3. Audience: Fill in sts.amazonaws.com.

    4. Policy: Select AmazonEBSCSIDriverPolicy.

  4. After creating the IAM Role, go to the details page of the Role and click Edit trust policy. Add the following line to the existing JSON file.

Tip

Be sure to replace {RegionCode} and {ProviderID}; the ProviderID looks something like EXAMPLED539D4633E53DE1B7XXXXAMPLE.


"oidc.eks.{RegionCode}.amazonaws.com/id/{ProviderID}:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"

  5. Go to the EKS cluster, select the Add-ons tab, and add the Amazon EBS CSI Driver. Make sure you select the IAM Role created in the previous step.

Step 9: Enable EKS NetworkPolicy

AutoMQ supports controlling access to the cluster by restricting clients from specific IP sources. This functionality is based on NetworkPolicy, so you need to enable EKS NetworkPolicy.

  1. In the EKS console, select the AWS VPC CNI add-on under the Add-ons tab and click Edit.
  2. Expand the Optional configuration settings and add the following JSON under Configuration values. Select Override mode and save.

{
  "enableNetworkPolicy": "true"
}
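
Equivalently, the add-on can be updated from the CLI; <cluster-name> is a placeholder:

# Enable NetworkPolicy in the VPC CNI add-on (Override mode corresponds to OVERWRITE)
aws eks update-addon --cluster-name <cluster-name> --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}' \
  --resolve-conflicts OVERWRITE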

Step 10: Install AWS Load Balancer Controller

AWS does not install the Load Balancer Controller by default; users must install this plugin manually. The installation steps are as follows:

  1. Add the Helm repository and install the AWS Load Balancer Controller CRDs.

helm repo add eks https://aws.github.io/eks-charts
kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"

  2. Modify the command below to replace <eks-cluster-id> with the actual cluster name, then execute it.

helm upgrade -i aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=<eks-cluster-id>

  3. Verify the installation results, for example with the command below.
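
A common check, assuming the release name aws-load-balancer-controller used above:

# The controller Deployment should be available in kube-system
kubectl get deployment -n kube-system aws-load-balancer-controller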

Step 11: Enter the Environment Console and Create Deployment Configurations

When first entering the AutoMQ BYOC console, you need to create deployment configurations, setting the Kubernetes cluster information and authorization, before the console can operate normally.

  1. Navigate to the EKS cluster, click the Access menu, and create an Access Entry.
  2. Choose the AutoMQ-specific IAM Role generated by the environment console installation in Step 1, and set the type to Standard.

Select AmazonEKSClusterAdminPolicy as the authorized Policy, set the scope to Cluster, and create it.
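
The same access entry can also be created from the CLI; the cluster name and role ARN are placeholders:

# Create a Standard access entry for the AutoMQ console role
aws eks create-access-entry --cluster-name <cluster-name> \
  --principal-arn <automq-console-role-arn> --type STANDARD

# Grant AmazonEKSClusterAdminPolicy with cluster scope
aws eks associate-access-policy --cluster-name <cluster-name> \
  --principal-arn <automq-console-role-arn> \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster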

  3. Log in to the console, enter the cluster name and other configurations, and click Next.

    1. Deployment Type: Select Kubernetes.

    2. Kubernetes Cluster: Enter the EKS cluster name.

    3. DNS ZoneId: Enter the ZoneId of the Route53 PrivateZone used for deploying AutoMQ.

    4. Bucket Name: Enter the data bucket name used for deploying AutoMQ to store messages, supporting the addition of multiple S3 Buckets.

  4. After filling in the cloud resource information, generate the necessary permissions for the data plane EKS node group. Refer to the console instructions to create a custom authorization policy, bind it to the AutoMQ IAM Role created in Step 2, enter the name of the node group role, and click Next to preview.
  5. Preview the deployment configuration information and complete the creation. You can then go to the instance management page to create an instance.