Use EKS to easily set up a Kubernetes cluster on AWS


There is no doubt that Kubernetes is the most advanced and widely used container orchestration platform, powering millions of applications in production environments. For most new Linux and Kubernetes users, the biggest challenge is setting up a cluster. Although we have many guides on Kubernetes cluster installation and configuration, this is our first guide on setting up a Kubernetes cluster in an AWS cloud environment using Amazon EKS.

For users who are not familiar with Amazon EKS: it is a managed service that makes it easy to run Kubernetes on AWS without having to install, operate, and maintain your own Kubernetes control plane or nodes. It runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Because Amazon EKS is fully compatible with the community version of Kubernetes, you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification.

Amazon EKS eliminates the operational burden of high availability by automatically detecting and replacing unhealthy control plane instances. It also makes automated version upgrades easy. Amazon EKS integrates with many AWS services to provide scalability and security for your applications, including:

  • Amazon ECR for container images
  • Elastic Load Balancing for load distribution
  • IAM for authentication
  • Amazon VPC for isolation

How to deploy a Kubernetes cluster on AWS using EKS

The next section walks through in more detail how to install a Kubernetes cluster on AWS using the Amazon EKS managed service. The setup diagram is shown in the figure below.

Step 1: Install and configure AWS CLI tools

Since our installation is command-line based, we need to set up the AWS CLI tool. This is done on the local workstation. The installation covers both Linux and macOS.

--- Install AWS CLI on macOS ---
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

--- Install AWS CLI on Linux ---
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Then, you can use the following command to check which version of the AWS CLI is installed.

$ aws --version
aws-cli/2.0.38 Python/3.7.3 Linux/4.18.0-193.6.3.el8_2.x86_64 exe/x86_64

Configure AWS CLI credentials

After installation, we need to configure our AWS CLI credentials. We will use the aws configure command for a standard AWS CLI setup.

$ aws configure
AWS Access Key ID [None]: 
AWS Secret Access Key [None]: 
Default region name [None]: 
Default output format [None]: json

Your AWS CLI details will be saved in the ~/.aws directory:

$ ls ~/.aws
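The aws configure command stores its settings in two INI-style files under ~/.aws. A typical layout looks like the following (the access key values and region shown here are placeholders, not real credentials):

```ini
# ~/.aws/credentials (placeholder values)
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = eu-west-1
output = json
```

You can confirm that the configured credentials resolve correctly by running aws sts get-caller-identity, which returns the account and IAM identity the CLI is acting as.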

Step 2: Install eksctl on Linux | macOS

eksctl is a simple CLI tool for creating an EKS cluster on AWS. The tool is written in Go and uses CloudFormation. With this tool, you can have a running cluster in minutes.

At the time of writing, it has the following features:

  • Create, get, list and delete clusters
  • Create, drain and delete node groups
  • Scale node groups
  • Update clusters
  • Use custom AMIs
  • Configure VPC networking
  • Configure access to API endpoints
  • Support for GPU node groups
  • Spot instances and mixed instances
  • IAM management and add-on policies
  • List cluster CloudFormation stacks
  • Install coredns
  • Write the kubeconfig file for the cluster

Use the following command to install the eksctl tool on a Linux or macOS machine.

--- Linux ---
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

--- macOS ---
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
brew upgrade eksctl && brew link --overwrite eksctl # When upgrading

Use the following command to test whether the installation is successful.

$ eksctl version

Enable shell completion:

--- Bash ---
echo ". <(eksctl completion bash)" >> ~/.bashrc

--- Zsh ---
mkdir -p ~/.zsh/completion/
eksctl completion zsh > ~/.zsh/completion/_eksctl
# and put the following in ~/.zshrc:
fpath=($fpath ~/.zsh/completion)

# Note if you're not running a distribution like oh-my-zsh you may first have to enable autocompletion:
autoload -U compinit

Step 3: Install and configure kubectl on Linux | macOS

The kubectl command line tool is used to control Kubernetes clusters from the command line interface. Install the tool by running the following commands in the terminal.

--- Linux ---
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin

--- macOS ---
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin


After installing kubectl, you can verify its version with the following command:

$ kubectl version --short --client
Client Version: v1.17.7-eks-bffbac

By default, the kubectl configuration lives in the $HOME/.kube directory. You can also specify other kubeconfig files by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.
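For example, to make an eksctl-generated kubeconfig the default for the current shell session (the path below matches the one used later in this guide, relocated under $HOME; adjust it for your own user):

```shell
# Point kubectl at a specific kubeconfig for this session instead of ~/.kube/config
export KUBECONFIG="$HOME/.kube/eksctl/clusters/prod-eks-cluster"

# The one-off equivalent would be:
#   kubectl --kubeconfig="$HOME/.kube/eksctl/clusters/prod-eks-cluster" get nodes
echo "kubectl will now read: $KUBECONFIG"
```

With KUBECONFIG exported, a plain kubectl get nodes works without repeating the --kubeconfig flag each time.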

Step 4: Create an Amazon EKS cluster and compute capacity

With all the dependencies set up, we can now create an Amazon EKS cluster with a compute option to run our microservice applications. We will install the latest Kubernetes version available in Amazon EKS so that we can take advantage of the latest EKS features.

You can create a cluster with one compute option and add any other options after the cluster is created. There are two standard compute options:

  • AWS Fargate : create a cluster that runs Linux applications only on AWS Fargate. AWS Fargate with Amazon EKS is only available in some regions
  • Managed nodes : if you want to run Linux applications on Amazon EC2 instances
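As a quick sketch of the Fargate route (not the one used in this guide), a Fargate-only cluster can be created with eksctl's --fargate flag. The cluster name here is just an example, and the command requires AWS credentials and a Fargate-supported region:

```shell
# Create an EKS cluster whose default Fargate profile schedules pods onto AWS Fargate
# (example name; requires an AWS account, so it is shown here rather than run)
eksctl create cluster --name fargate-demo-cluster --region eu-west-1 --fargate
```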

In this setup, we will install an EKS cluster running Kubernetes version 1.17 with managed EC2 compute nodes. These are my cluster details:

  • Region: Ireland ( eu-west-1 )
  • Cluster name: prod-eks-cluster
  • Version: 1.17 – view all available EKS versions
  • Node type: t3.medium – view all available AWS node types
  • Total number of nodes (for a static ASG): 2
  • Maximum number of nodes in ASG: 3
  • Minimum number of nodes in ASG: 1
  • SSH public key for the nodes (imported from a local path, or an existing EC2 key pair): ~/.ssh/
  • Make the node group network private
  • Let eksctl manage the cluster credentials under the ~/.kube/eksctl/clusters directory
eksctl create cluster \
  --version 1.17 \
  --name prod-eks-cluster \
  --region eu-west-1 \
  --nodegroup-name eks-ec2-linux-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-public-key ~/.ssh/ \
  --verbose 3

The eksctl installer will automatically create and configure a VPC, internet gateway, NAT gateway, and routing tables for you.


Please be patient, as the installation may take some time.

[ℹ]  eksctl version 0.25.0
[ℹ]  using region eu-west-1
[ℹ]  setting availability zones to [eu-west-1a eu-west-1c eu-west-1b]
[ℹ]  subnets for eu-west-1a - public: private:
[ℹ]  subnets for eu-west-1c - public: private:
[ℹ]  subnets for eu-west-1b - public: private:
[ℹ]  using SSH public key "/Users/jkmutai/.cheat/.ssh/" as "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes-52:ad:b5:4f:a6:01:10:b6:c1:6b:ba:eb:5a:fb:0c:b2"
[ℹ]  using Kubernetes version 1.17
[ℹ]  creating EKS cluster "prod-eks-cluster" in "eu-west-1" region with managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ]  CloudWatch logging will not be enabled for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ]  2 sequential tasks: { create cluster control plane "prod-eks-cluster", 2 sequential sub-tasks: { no tasks, create managed nodegroup "eks-ec2-linux-nodes" } }
[ℹ]  building cluster stack "eksctl-prod-eks-cluster-cluster"
[ℹ]  deploying stack "eksctl-prod-eks-cluster-cluster"
[ℹ]  building managed nodegroup stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  deploying stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  waiting for the control plane availability...
[✔]  saved kubeconfig as "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "prod-eks-cluster" have been created
[ℹ]  nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ]  node "" is ready
[ℹ]  node "" is ready
[ℹ]  node "" is ready
[ℹ]  node "" is ready
[ℹ]  waiting for at least 1 node(s) to become ready in "eks-ec2-linux-nodes"
[ℹ]  nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ]  node "" is ready
[ℹ]  node "" is ready
[ℹ]  node "" is ready
[ℹ]  node "" is ready
[ℹ]  kubectl command should work with "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster", try 'kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes'
[✔]  EKS cluster "prod-eks-cluster" in "eu-west-1" region is ready

To list the available clusters, use the following command:

$ eksctl get cluster
prod-eks-cluster	eu-west-1

Use the generated kubeconfig file to confirm that the installation was successful.

$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes
NAME   STATUS   ROLES    AGE   VERSION
       Ready    <none>   18m   v1.17.9-eks-4c6976
       Ready    <none>   14m   v1.17.9-eks-4c6976
       Ready    <none>   14m   v1.17.9-eks-4c6976

$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-254fk             1/1     Running   0          19m
kube-system   aws-node-nmjwd             1/1     Running   0          14m
kube-system   aws-node-z47mq             1/1     Running   0          15m
kube-system   coredns-6987776bbd-8s5ct   1/1     Running   0          14m
kube-system   coredns-6987776bbd-bn5js   1/1     Running   0          14m
kube-system   kube-proxy-79bcs           1/1     Running   0          14m
kube-system   kube-proxy-bpznt           1/1     Running   0          15m
kube-system   kube-proxy-xchxs           1/1     Running   0          19m

Get information about the node group used:

$ eksctl get nodegroup --cluster prod-eks-cluster
CLUSTER			NODEGROUP		CREATED			MIN SIZE	MAX SIZE	DESIRED CAPACITY	INSTANCE TYPE
prod-eks-cluster	eks-ec2-linux-nodes	2020-08-11T19:21:46Z	1		4		3			t3.medium
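The same node group can be resized later with eksctl's scale command (one of the "scale node groups" features listed earlier). A sketch, assuming the cluster and node group names from this guide and a hypothetical target of 3 nodes:

```shell
# Scale the managed node group to 3 nodes; requires AWS credentials,
# so this is the command to run against your own cluster
eksctl scale nodegroup --cluster=prod-eks-cluster --name=eks-ec2-linux-nodes --nodes=3
```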

Create a cluster with existing subnets:

When using existing public and private subnets, you need to pass their IDs to eksctl with the --vpc-public-subnets and --vpc-private-subnets flags. The subnet IDs below are placeholders for your own:

eksctl create cluster \
  --version 1.17 \
  --name prod-eks-cluster \
  --region eu-west-1 \
  --nodegroup-name eks-ec2-linux-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-public-key ~/.ssh/ \
  --vpc-public-subnets=<public-subnet-id-1>,<public-subnet-id-2> \
  --vpc-private-subnets=<private-subnet-id-1>,<private-subnet-id-2> \
  --verbose 3
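As the number of flags grows, it can be easier to describe the cluster declaratively in an eksctl config file and create it with eksctl create cluster -f cluster.yaml. A sketch mirroring the flags used in this guide (field names follow the eksctl ClusterConfig schema; the SSH public key path is left for you to fill in):

```yaml
# cluster.yaml - declarative equivalent of the eksctl flags above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: prod-eks-cluster
  region: eu-west-1
  version: "1.17"

managedNodeGroups:
  - name: eks-ec2-linux-nodes
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    ssh:
      allow: true
      # publicKeyPath: <path-to-your-public-key>
```

Keeping this file in version control also documents the cluster layout for the rest of the team.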

Delete EKS cluster

If you want to delete the EKS cluster, use the eksctl delete cluster command.

$ eksctl delete cluster --region=eu-west-1 --name=prod-eks-cluster

The output of the deletion process is similar to the following.

[ℹ]  eksctl version 0.25.0
[ℹ]  using region eu-west-1
[ℹ]  deleting EKS cluster "prod-eks-cluster"
[ℹ]  deleted 0 Fargate profile(s)
[ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
[ℹ]  2 sequential tasks: { delete nodegroup "eks-ec2-linux-nodes", delete cluster control plane "prod-eks-cluster" [async] }
[ℹ]  will delete stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  waiting for stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes" to get deleted
[ℹ]  will delete stack "eksctl-prod-eks-cluster-cluster"
[✔]  all cluster resources were deleted

That brings us to the end of our Kubernetes cluster setup on AWS using the EKS service. This article will be updated with other related configurations over time.

Similar guides:

Use kubeadm to install Kubernetes cluster on Ubuntu 20.04

Install Kubernetes cluster on CentOS 7 using kubeadm

Check Pod/container metrics on OpenShift and Kubernetes

Use Kompose to migrate Docker Compose applications to Kubernetes
