Enable the cluster autoscaler in the EKS Kubernetes cluster

The Cluster Autoscaler is a Kubernetes component that automatically adjusts the size of a Kubernetes cluster so that all Pods have a place to run and there are no unneeded nodes. It can be used with the major cloud providers (GCP, AWS, and Azure). In this short tutorial, we will explore how to install and configure the Cluster Autoscaler in an Amazon EKS cluster. The Cluster Autoscaler will automatically grow your node group when more resources are needed and shrink it when resources are underutilized.

Before using this guide, you should have a functioning EKS cluster. The following guide will help you get started.

Use EKS to easily set up a Kubernetes cluster on AWS

If you want to use Pod autoscaling, refer to the following guide.

Use horizontal Pod autoscaler on Kubernetes EKS cluster


The Cluster Autoscaler requires additional IAM permissions and resource tags to manage autoscaling in the cluster.

Step 1: Create EKS additional IAM policy

The Cluster Autoscaler requires the following IAM permissions to call AWS APIs on your behalf.

Create an IAM policy json file:

cat > aws-s3-eks-iam-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*",
            "Effect": "Allow"
        }
    ]
}
EOF

Apply the policy:

aws iam create-policy --policy-name EKS-Node-group-IAM-policy --policy-document file://aws-s3-eks-iam-policy.json

This is the output of my policy creation:

{
    "Policy": {
        "PolicyName": "EKS-Node-group-IAM-policy",
        "PolicyId": "ANPATWFKCYAHACUQCHO3D",
        "Arn": "arn:aws:iam::253750766592:policy/EKS-Node-group-IAM-policy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2020-09-04T12:26:20+00:00",
        "UpdateDate": "2020-09-04T12:26:20+00:00"
    }
}

Step 2: Attach the policy to the EKS node group

If you used eksctl to create your node group, the --asg-access option automatically provides the required permissions and attaches them to your node IAM role.
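For reference, an eksctl node group creation with autoscaler permissions baked in might look like the following sketch; the cluster and node group names here are placeholders, not values from this guide:

```shell
# Create a managed node group with IAM permissions for the Cluster Autoscaler.
# --asg-access attaches the autoscaling permissions to the node IAM role.
eksctl create nodegroup \
  --cluster my-eks-cluster \
  --name my-node-group \
  --asg-access \
  --nodes-min 2 \
  --nodes-max 3
```

With --asg-access in place, Steps 1 and 2 of this guide are handled for you and you can skip to tagging.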

Log in to the AWS console and go to EC2 > EKS instance > Description > IAM role.

Click the IAM role link to add the permissions under Attached policies.


Attach the policy we created earlier.


Confirm the settings.


Do the same under EKS > ClusterName > Details.


Make a note of the IAM ARN used by the cluster, then go to IAM > Roles and search for it.


Attach the policy we created to the role.


Confirm that the policy appears in the list of attached policies.


Step 3: Add node group label

The Cluster Autoscaler requires the following tags on the Auto Scaling group of your node group so that it can be auto-discovered.

Key: k8s.io/cluster-autoscaler/<cluster-name>, Value: owned
Key: k8s.io/cluster-autoscaler/enabled, Value: true

Navigate to EKS > Clusters > ClusterName > Compute.


Select the node group and click Edit.


Add tags at the bottom and save the changes when you are done.
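If you prefer the CLI over the console, the same tags can be applied to the Auto Scaling group behind the node group. This is a sketch; my-asg and my-cluster are placeholders for your actual Auto Scaling group and cluster names:

```shell
# Tag the Auto Scaling group so the Cluster Autoscaler can auto-discover it.
aws autoscaling create-or-update-tags --tags \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/my-cluster,Value=owned,PropagateAtLaunch=true" \
  "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true"
```

You can find the Auto Scaling group name for a managed node group under EC2 > Auto Scaling Groups in the console.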

Step 4: Deploy the cluster autoscaler in EKS

Log in to the machine from which you run kubectl and deploy the Cluster Autoscaler:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

You can also download the YAML file before applying it:

wget https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
kubectl apply -f ./cluster-autoscaler-autodiscover.yaml

Run the following command to add the cluster-autoscaler.kubernetes.io/safe-to-evict annotation to the deployment:

kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"

Edit the Cluster Autoscaler deployment with kubectl -n kube-system edit deployment.apps/cluster-autoscaler to set your cluster name, and add the following options:

--balance-similar-node-groups
--skip-nodes-with-system-pods=false
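After the edit, the container's command section in the deployment should look something like the sketch below, based on the upstream autodiscovery example manifest; my-eks-cluster is a placeholder for your cluster name:

```yaml
spec:
  containers:
  - command:
    - ./cluster-autoscaler
    - --v=4
    - --stderrthreshold=info
    - --cloud-provider=aws
    - --skip-nodes-with-local-storage=false
    - --expander=least-waste
    # Replace my-eks-cluster with your cluster name:
    - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-eks-cluster
    - --balance-similar-node-groups
    - --skip-nodes-with-system-pods=false
```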

For my settings, please see the screenshot below.

[Screenshot: Cluster Autoscaler deployment with the cluster name and extra options set]

Open the Cluster Autoscaler releases page in your web browser and find the latest Cluster Autoscaler version that matches your cluster's Kubernetes major and minor version. For example, if your cluster's Kubernetes version is 1.17, find the latest Cluster Autoscaler release that begins with 1.17. Record the semantic version number (1.17.n) for the next step.

Since my cluster is v1.17, I will use the latest container image version for 1.17, which is 1.17.3:

kubectl -n kube-system set image deployment.apps/cluster-autoscaler cluster-autoscaler=eu.gcr.io/k8s-artifacts-prod/autoscaling/cluster-autoscaler:v1.17.3
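To confirm the image was updated, you can query the deployment; this is a quick check assuming the same deployment name as above:

```shell
# Print the container image currently set on the Cluster Autoscaler deployment.
kubectl -n kube-system get deployment.apps/cluster-autoscaler \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```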

Check whether the cluster autoscaler Pod is running:

$ kubectl get pods -n kube-system -w
NAME                                  READY   STATUS    RESTARTS   AGE
aws-node-glfrs                        1/1     Running   0          23d
aws-node-sgh8p                        1/1     Running   0          23d
cluster-autoscaler-6f56b86d9b-p9gc7   1/1     Running   5          21m # running
coredns-6987776bbd-2mgxp              1/1     Running   0          23d
coredns-6987776bbd-vdn8j              1/1     Running   0          23d
efs-csi-node-p57gw                    3/3     Running   0          18d
efs-csi-node-z7gh9                    3/3     Running   0          18d
kube-proxy-5glzs                      1/1     Running   0          23d
kube-proxy-hgqm5                      1/1     Running   0          23d
metrics-server-7cb45bbfd5-kbrt7       1/1     Running   0          23d

You can view the log stream.

kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler

Output:

I0904 14:28:50.937242       1 scale_down.go:431] Scale-down calculation: ignoring 1 nodes unremovable in the last 5m0s
I0904 14:28:50.937257       1 scale_down.go:462] Node ip-192-168-138-244.eu-west-1.compute.internal - memory utilization 0.702430
I0904 14:28:50.937268       1 scale_down.go:466] Node ip-192-168-138-244.eu-west-1.compute.internal is not suitable for removal - memory utilization too big (0.702430)
I0904 14:28:50.937333       1 static_autoscaler.go:439] Scale down status: unneededOnly=false lastScaleUpTime=2020-09-04 13:57:03.11117817 +0000 UTC m=+15.907067864 lastScaleDownDeleteTime=2020-09-04 13:57:03.111178246 +0000 UTC m=+15.907067938 lastScaleDownFailTime=2020-09-04 13:57:03.111178318 +0000 UTC m=+15.907068011 scaleDownForbidden=false isDeleteInProgress=false scaleDownInCooldown=false
I0904 14:28:50.937358       1 static_autoscaler.go:452] Starting scale down
I0904 14:28:50.937391       1 scale_down.go:776] No candidates for scale down

Step 5: Test the EKS cluster autoscaler

At this point, the installation is complete. Let's test it.

I have two nodes in the cluster. The maximum number set in the node group is 3.

$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-138-244.eu-west-1.compute.internal   Ready    <none>   23d   v1.17.9-eks-4c6976
ip-192-168-176-247.eu-west-1.compute.internal   Ready    <none>   23d   v1.17.9-eks-4c6976

We will deploy a large number of Pods to see if the cluster automatically scales up to the maximum number of nodes set on the node group.

$ kubectl run nginx-example --image=nginx --port=80 --replicas=100
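Note that kubectl run dropped the --replicas flag in later kubectl releases. On newer clusters, the equivalent can be expressed as a Deployment manifest; this is a sketch with the same name, image, and replica count as the command above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example
spec:
  replicas: 100
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

Save it as a file and apply it with kubectl apply -f.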

Watch the creation of the new node.

$ watch kubectl get nodes

You should see that a new node has been created and added to the cluster.

$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-119-255.eu-west-1.compute.internal   Ready    <none>   26m   v1.17.9-eks-4c6976
ip-192-168-138-244.eu-west-1.compute.internal   Ready    <none>   23d   v1.17.9-eks-4c6976
ip-192-168-176-247.eu-west-1.compute.internal   Ready    <none>   23d   v1.17.9-eks-4c6976

Delete the deployment and its Pods, and the cluster should scale back down.

$ kubectl delete deployment nginx-example

This is all you need to configure cluster auto-scaling in an EKS Kubernetes cluster.

More information about EKS:

Install Istio Service Mesh in EKS Kubernetes cluster

Install CloudWatch Container Insights on EKS | Kubernetes

Deploy Prometheus on the EKS Kubernetes cluster

