Use EKS to easily set up a Kubernetes cluster on AWS

There is no doubt that Kubernetes is the most advanced and widely used container orchestration platform, powering millions of applications in production environments. For most new Linux and Kubernetes users, the biggest challenge is setting up a cluster. Although we have published many guides on Kubernetes cluster installation and configuration, this is our first guide on setting up a Kubernetes cluster in the AWS cloud using Amazon EKS.

For users who are not familiar with Amazon EKS, it is a managed service that lets you run Kubernetes on AWS without having to install, operate, and maintain your own Kubernetes control plane or nodes. It runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Because Amazon EKS is fully compatible with the community version of Kubernetes, you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification.

Amazon EKS removes the burden of maintaining high availability by automatically detecting and replacing unhealthy control plane instances, and it can perform version upgrades for them automatically. Amazon EKS integrates with many AWS services to provide scalability and security for your applications, including:

  • Amazon ECR for container images
  • Elastic Load Balancing for load distribution
  • IAM for authentication
  • Amazon VPC for isolation

How to deploy a Kubernetes cluster on AWS using EKS

The next sections take a closer look at how to install a Kubernetes cluster on AWS using the Amazon EKS managed service. The setup is illustrated in the diagram below.

Step 1: Install and configure AWS CLI tools

Since our installation is driven from the command line, we first need to set up the AWS CLI tool. This is done on the local workstation, and the steps below cover both Linux and macOS.

--- Install AWS CLI on macOS ---
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /

--- Install AWS CLI on Linux ---
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Then, you can use the following command to determine the version of AWS CLI installed.

$ aws --version
aws-cli/2.0.38 Python/3.7.3 Linux/4.18.0-193.6.3.el8_2.x86_64 exe/x86_64.centos.8

Configure AWS CLI credentials

After installation, we need to configure our AWS CLI credentials. We will use the aws configure command, which is the quickest way to set up the AWS CLI for general use.

$ aws configure
AWS Access Key ID [None]: 
AWS Secret Access Key [None]: 
Default region name [None]: 
Default output format [None]: json

Your AWS CLI details are saved in the ~/.aws directory:

$ ls ~/.aws
config
credentials
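
If you want to confirm that the configured credentials actually work before moving on, a quick identity check such as the one below (a standard AWS CLI call, not strictly required for this guide) should return your AWS account ID and IAM identity.

# Optional sanity check: prints the account and IAM identity behind the configured credentials
aws sts get-caller-identity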

Step 2: Install eksctl on Linux | macOS

eksctl is a simple CLI tool for creating EKS clusters on AWS. The tool is written in Go and uses CloudFormation under the hood. With this tool, you can have a running cluster in minutes.

At the time of writing, it has the following features:

  • Create, get, list and delete clusters
  • Create, drain and delete node groups
  • Scale node groups
  • Update clusters
  • Use custom AMIs
  • Configure VPC networking
  • Configure access to API endpoints
  • Support for GPU node groups
  • Spot instances and mixed instances
  • IAM management and add-on policies
  • List cluster CloudFormation stacks
  • Install CoreDNS
  • Write kubeconfig files for clusters

Use the following command to install the eksctl tool on a Linux or macOS machine.

--- Linux ---
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

--- macOS ---
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
brew upgrade eksctl && brew link --overwrite eksctl # When upgrading

Use the following command to test whether the installation is successful.

$ eksctl version
0.25.0

Enable shell completion:

--- Bash ---
echo ". <(eksctl completion bash)" >> ~/.bashrc

--- Zsh ---
mkdir -p ~/.zsh/completion/
eksctl completion zsh > ~/.zsh/completion/_eksctl
# and put the following in ~/.zshrc:
fpath=($fpath ~/.zsh/completion)

# Note if you're not running a distribution like oh-my-zsh you may first have to enable autocompletion:
autoload -U compinit
compinit

Step 3: Install and configure kubectl on Linux | macOS

The kubectl command-line tool is used to control a Kubernetes cluster from the command line. Install it by running the following commands in your terminal.

--- Linux ---
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.7/2020-07-08/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin

--- macOS ---
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.7/2020-07-08/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin

After installing kubectl, you can verify its version with the following command:

$ kubectl version --short --client
Client Version: v1.17.7-eks-bffbac

By default, kubectl looks for its configuration in the $HOME/.kube directory. You can also point it at other kubeconfig files by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.
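
For example, either of the following would make kubectl use a non-default kubeconfig file. The path shown matches the location eksctl writes to later in this guide when the --auto-kubeconfig flag is used; adjust it to your own setup.

# Use an alternative kubeconfig for the whole shell session
export KUBECONFIG=$HOME/.kube/eksctl/clusters/prod-eks-cluster

# Or pass it per command
kubectl --kubeconfig=$HOME/.kube/eksctl/clusters/prod-eks-cluster get nodes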

Step 4: Create an Amazon EKS cluster and compute

With all the dependencies in place, we can now create an Amazon EKS cluster with a compute option to run our microservice applications. We will install the latest Kubernetes version available in Amazon EKS so that we can take advantage of the latest EKS features.

You can create the cluster with one compute option and add any of the other options after the cluster has been created. There are two standard compute options:

  • AWS Fargate: create a cluster that runs Linux applications on AWS Fargate only. Note that AWS Fargate with Amazon EKS is only available in some regions (a minimal example follows this list).
  • Managed nodes: create a cluster that runs Linux applications on managed Amazon EC2 instances.
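
We do not use Fargate in this guide, but for reference, eksctl can create a Fargate-only cluster with its --fargate flag. The sketch below is illustrative only; the cluster name is a placeholder and the region is the one used throughout this guide.

# Illustrative only – the rest of this guide uses managed EC2 nodes
eksctl create cluster \
  --name fargate-demo-cluster \
  --region eu-west-1 \
  --fargate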


In this setup, we will install an EKS cluster running Kubernetes version 1.17 with managed EC2 compute nodes. These are my cluster details:

  • Region: Ireland (eu-west-1)
  • Cluster name: prod-eks-cluster
  • Version: 1.17 – see all available EKS versions
  • Node type: t3.medium – see all available AWS node types
  • Total number of nodes (for a static ASG): 2
  • Maximum number of nodes in the ASG: 3
  • Minimum number of nodes in the ASG: 1
  • SSH public key for the nodes (imported from a local path, or an existing EC2 key pair): ~/.ssh/eks.pub
  • Make the node group network private
  • Let eksctl manage the cluster credentials under the ~/.kube/eksctl/clusters directory

Run the following command to create the cluster:

eksctl create cluster \
  --version 1.17 \
  --name prod-eks-cluster \
  --region eu-west-1 \
  --nodegroup-name eks-ec2-linux-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key ~/.ssh/eks.pub \
  --managed \
  --auto-kubeconfig \
  --node-private-networking \
  --verbose 3

The eksctl installer will automatically create and configure a VPC, internet gateway, NAT gateway, and routing tables for you, along with the public and private subnets.


Please be patient, as the installation may take some time.

[ℹ]  eksctl version 0.25.0
[ℹ]  using region eu-west-1
[ℹ]  setting availability zones to [eu-west-1a eu-west-1c eu-west-1b]
[ℹ]  subnets for eu-west-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for eu-west-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for eu-west-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  using SSH public key "/Users/jkmutai/.cheat/.ssh/eks.pub" as "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes-52:ad:b5:4f:a6:01:10:b6:c1:6b:ba:eb:5a:fb:0c:b2"
[ℹ]  using Kubernetes version 1.17
[ℹ]  creating EKS cluster "prod-eks-cluster" in "eu-west-1" region with managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ]  CloudWatch logging will not be enabled for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ]  2 sequential tasks: { create cluster control plane "prod-eks-cluster", 2 sequential sub-tasks: { no tasks, create managed nodegroup "eks-ec2-linux-nodes" } }
[ℹ]  building cluster stack "eksctl-prod-eks-cluster-cluster"
[ℹ]  deploying stack "eksctl-prod-eks-cluster-cluster"
[ℹ]  building managed nodegroup stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  deploying stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  waiting for the control plane availability...
[✔]  saved kubeconfig as "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster"
[ℹ]  no tasks
[✔]  all EKS cluster resources for "prod-eks-cluster" have been created
[ℹ]  nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ]  node "ip-192-168-21-191.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-35-129.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-49-234.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-78-146.eu-west-1.compute.internal" is ready
[ℹ]  waiting for at least 1 node(s) to become ready in "eks-ec2-linux-nodes"
[ℹ]  nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ]  node "ip-192-168-21-191.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-35-129.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-49-234.eu-west-1.compute.internal" is ready
[ℹ]  node "ip-192-168-78-146.eu-west-1.compute.internal" is ready
[ℹ]  kubectl command should work with "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster", try 'kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes'
[✔]  EKS cluster "prod-eks-cluster" in "eu-west-1" region is ready
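
If cluster creation stalls or you want to inspect exactly what was provisioned, the log above already points to the relevant command: it lists the CloudFormation stacks eksctl created for the control plane and the managed node group.

eksctl utils describe-stacks --region=eu-west-1 --cluster=prod-eks-cluster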

To list the available clusters, use the following command:

$ eksctl get cluster
NAME			REGION
prod-eks-cluster	eu-west-1

Use the generated kubeconfig file to confirm that the installation was successful.

$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-21-191.eu-west-1.compute.internal   Ready       18m   v1.17.9-eks-4c6976
ip-192-168-35-129.eu-west-1.compute.internal   Ready       14m   v1.17.9-eks-4c6976
ip-192-168-78-146.eu-west-1.compute.internal   Ready       14m   v1.17.9-eks-4c6976

$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-254fk             1/1     Running   0          19m
kube-system   aws-node-nmjwd             1/1     Running   0          14m
kube-system   aws-node-z47mq             1/1     Running   0          15m
kube-system   coredns-6987776bbd-8s5ct   1/1     Running   0          14m
kube-system   coredns-6987776bbd-bn5js   1/1     Running   0          14m
kube-system   kube-proxy-79bcs           1/1     Running   0          14m
kube-system   kube-proxy-bpznt           1/1     Running   0          15m
kube-system   kube-proxy-xchxs           1/1     Running   0          19m
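
Because we passed --auto-kubeconfig, the cluster credentials live under ~/.kube/eksctl/clusters/ and every kubectl call needs the --kubeconfig flag. If you would rather merge the cluster into your default ~/.kube/config, one option (a standard AWS CLI command, not part of the eksctl output above) is:

# Write/merge the cluster context into ~/.kube/config so plain kubectl commands work
aws eks update-kubeconfig --region eu-west-1 --name prod-eks-cluster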

Get information about the node group used:

$ eksctl get nodegroup --cluster prod-eks-cluster
CLUSTER			NODEGROUP		CREATED			MIN SIZE	MAX SIZE	DESIRED CAPACITY	INSTANCE TYPE	IMAGE ID
prod-eks-cluster	eks-ec2-linux-nodes	2020-08-11T19:21:46Z	1		4		3			t3.medium
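
eksctl can also scale a managed node group after creation. A minimal sketch for this cluster might look like the following; the target of 3 nodes is only an example and must stay within the node group's min/max bounds.

# Scale the managed node group to 3 nodes
eksctl scale nodegroup --cluster=prod-eks-cluster --name=eks-ec2-linux-nodes --nodes=3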

Create a cluster with existing private and public subnets:

When using existing public and private subnets, you need to pass their IDs to eksctl via the --vpc-public-subnets and --vpc-private-subnets flags, for example:

eksctl create cluster \
  --version 1.17 \
  --name prod-eks-cluster \
  --region eu-west-1 \
  --nodegroup-name eks-ec2-linux-nodes \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key ~/.ssh/eks.pub \
  --managed \
  --vpc-private-subnets=subnet-0597dd879c602d516,subnet-06dcc9817981d25db,subnet-0c4a73dfb9857be6a \
  --vpc-public-subnets=subnet-025b7029b62f7f922,subnet-03d1c9ee286b5e9e2,subnet-04218d8a1bf2acb11 \
  --auto-kubeconfig \
  --node-private-networking \
  --verbose 3
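
If you are not sure which subnet IDs belong to your VPC, a query like the one below could list them; the VPC ID here is a placeholder and should be replaced with your own.

# List subnet IDs, CIDR blocks and AZs for a given VPC (replace the VPC ID)
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-0123456789abcdef0" \
  --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone}' \
  --output table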

Delete EKS cluster

If you want to delete the EKS cluster, use the eksctl delete cluster command.

$ eksctl delete cluster --region=eu-west-1 --name=prod-eks-cluster

The output of the deletion process is similar to the following.

[ℹ]  eksctl version 0.25.0
[ℹ]  using region eu-west-1
[ℹ]  deleting EKS cluster "prod-eks-cluster"
[ℹ]  deleted 0 Fargate profile(s)
[ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
[ℹ]  2 sequential tasks: { delete nodegroup "eks-ec2-linux-nodes", delete cluster control plane "prod-eks-cluster" [async] }
[ℹ]  will delete stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ]  waiting for stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes" to get deleted
[ℹ]  will delete stack "eksctl-prod-eks-cluster-cluster"
[✔]  all cluster resources were deleted
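
Note that the control plane is deleted asynchronously (see the [async] marker in the output), so the cluster may take a few minutes to disappear completely. You can re-run the earlier listing command to confirm it is gone:

# Should eventually return no cluster for the region once deletion completes
eksctl get cluster --region=eu-west-1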

That brings us to the end of our guide on setting up a Kubernetes cluster on AWS using the EKS service. This article will be updated as we add further configuration steps.

Similar guides:

Use kubeadm to install Kubernetes cluster on Ubuntu 20.04

Install Kubernetes cluster on CentOS 7 using kubeadm

Check Pod/container metrics on OpenShift and Kubernetes

Use Kompose to migrate Docker Compose applications to Kubernetes
