Install Calico CNI plugin on Amazon EKS Kubernetes cluster


If you are using Amazon EKS to run a Kubernetes cluster in the AWS cloud, the default container network interface (CNI) plugin is Amazon vpc-cni-k8s. With this plugin, each Pod receives a routable IP address from the VPC network, so a Pod has the same IP address inside the cluster as on the VPC. The drawback of this CNI is the large number of VPC IP addresses required to run and manage large clusters. This is why you may choose another CNI plugin such as Calico.

Calico is a free, open source networking and network security plugin that supports many platforms, including Docker EE, OpenShift, Kubernetes, OpenStack and bare metal services. Calico provides true cloud-native scalability and high performance. With Calico, you can choose between Linux eBPF and the Linux kernel's highly optimized standard networking pipeline to deliver high-performance networking.

For a multi-tenant Kubernetes environment where tenant isolation is key, Calico's network policy implementation can be used to achieve network segmentation and tenant isolation. You can easily create ingress and egress rules to ensure the correct network controls are applied to your services.
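As an illustration, the following standard Kubernetes NetworkPolicy (which Calico enforces) restricts ingress so that only Pods from the same namespace can reach each other. The namespace `tenant-a` and the policy name are hypothetical examples, not part of this guide's cluster:

```yaml
# Hypothetical example: only Pods in the tenant-a namespace may
# send traffic to Pods in that namespace. Calico enforces standard
# Kubernetes NetworkPolicy objects like this one.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: tenant-a
spec:
  podSelector: {}           # apply to all Pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # allow traffic only from this namespace
```

Apply it with `kubectl apply -f <file>.yaml`; ingress from other namespaces is then denied for all Pods in `tenant-a`.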

Install Calico CNI plugin on Amazon EKS Kubernetes cluster

Before implementing the solution, the following points need to be noted:

  • Calico is not supported when you use Fargate with Amazon EKS.
  • If you have iptables rules outside of Calico policy, consider adding those existing rules to your Calico policy so that they are not overwritten by Calico.
  • If you are using security groups for Pods, traffic to Pods on branch network interfaces is not enforced by Calico network policy and is limited to Amazon EC2 security group enforcement.

Step 1: Set up the EKS cluster

I assume you have a newly created EKS Kubernetes cluster. Our guide can be used to deploy an EKS cluster as follows.

Use EKS to easily set up a Kubernetes cluster on AWS

After the cluster is running, confirm that it is available via eksctl:

$ eksctl get cluster -o yaml
- name: My-EKS-Cluster
  region: eu-west-1

Step 2: Delete the AWS VPC CNI Pods

Since we will use Calico for networking in the EKS cluster, the aws-node DaemonSet must be deleted to disable AWS VPC networking for Pods.

$ kubectl delete ds aws-node -n kube-system
daemonset.apps "aws-node" deleted

Confirm that all aws-node Pods have been deleted.

$ kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6987776bbd-4hj4v   1/1     Running   0          15h
coredns-6987776bbd-qrgs8   1/1     Running   0          15h
kube-proxy-mqrrk           1/1     Running   0          14h
kube-proxy-xx28m           1/1     Running   0          14h

Step 3: Install Calico CNI on the EKS Kubernetes cluster

Download the Calico Yaml list.

wget https://docs.projectcalico.org/manifests/calico-vxlan.yaml

Then use the manifest file to deploy Calico CNI on the Amazon EKS cluster.

kubectl apply -f calico-vxlan.yaml

This is the output of my deployment, showing all the objects being created.

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Check the calico-node DaemonSet deployed in the kube-system namespace.

$ kubectl get ds calico-node --namespace kube-system

NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         0       2            0           kubernetes.io/os=linux   14s

Within a short time, the calico-node DaemonSet should have the desired number of Pods in the ready state.

$ kubectl get ds calico-node --namespace kube-system

NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-node   2         2         2       2            2           kubernetes.io/os=linux   48s

You can also use kubectl to check the running Pods.

$ kubectl get pods -n kube-system | grep calico
calico-node-bmshb                                     1/1     Running   0          4m7s
calico-node-skfpt                                     1/1     Running   0          4m7s
calico-typha-69f668897f-zfh56                         1/1     Running   0          4m11s
calico-typha-horizontal-autoscaler-869dbcdddb-6sx2h   1/1     Running   0          4m7s

Step 4: Create a new node group and delete the old node group

If you already have nodes in the cluster, you need to add a new node group, then delete the old node group and the nodes in it so that Pods are rescheduled onto nodes using Calico networking.

To create an additional node group, use:

eksctl create nodegroup --cluster=<clusterName> [--name=<nodegroupName>]

List your clusters to get the cluster name:

$ eksctl get cluster

You can create node groups from the CLI or from a config file.

  • Create node group from CLI
eksctl create nodegroup --cluster <clusterName> --name <nodegroupname> --node-type <instancetype> --node-ami auto

To change the maximum number of Pods per node, add:

--max-pods-per-node <maxpodsnumber>

Example:

eksctl create nodegroup --cluster my-eks-cluster --name eks-ng-02 --node-type t3.medium --node-ami auto --max-pods-per-node 150
  • Create from a configuration file, updating the nodeGroups section. Example:
nodeGroups:
  - name: eks-ng-01
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true

  - name: eks-ng-02
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true

If you are using managed node groups, replace nodeGroups with managedNodeGroups. When finished, apply the configuration to create the node groups.
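A minimal managedNodeGroups sketch, mirroring the instance type and sizing of the unmanaged example above (the group name is a placeholder):

```yaml
# Hypothetical managed node group equivalent of the eks-ng-01
# group above; field values mirror the unmanaged example.
managedNodeGroups:
  - name: eks-managed-ng-01
    labels: { role: workers }
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 80
    minSize: 2
    maxSize: 3
    privateNetworking: true
```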

eksctl create nodegroup --config-file=my-eks-cluster.yaml

After creating the new node group, delete the old node group; eksctl will drain the old nodes and migrate all Pods before removing them.

eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>

Or from the configuration file:

eksctl delete nodegroup --config-file=my-eks-cluster.yaml --include=<nodegroupName> --approve

If you check the nodes in the cluster, you will see that scheduling is first disabled on the old nodes:

$ kubectl get nodes
NAME                                           STATUS                     ROLES    AGE     VERSION
ip-10-255-101-100.eu-west-1.compute.internal   Ready                      <none>   3m57s   v1.17.11-eks-cfdc40
ip-10-255-103-17.eu-west-1.compute.internal    Ready,SchedulingDisabled   <none>   15h     v1.17.11-eks-cfdc40
ip-10-255-96-32.eu-west-1.compute.internal     Ready                      <none>   4m5s    v1.17.11-eks-cfdc40
ip-10-255-98-25.eu-west-1.compute.internal     Ready,SchedulingDisabled   <none>   15h     v1.17.11-eks-cfdc40

After a few minutes, they will be deleted.

$ kubectl get nodes
NAME                                           STATUS   ROLES    AGE     VERSION
ip-10-255-101-100.eu-west-1.compute.internal   Ready    <none>   4m45s   v1.17.11-eks-cfdc40
ip-10-255-96-32.eu-west-1.compute.internal     Ready    <none>   4m53s   v1.17.11-eks-cfdc40

If you describe a new Pod, note that its IP address has changed; it is now assigned by Calico:

$ kubectl describe pods coredns-6987776bbd-mvchx -n kube-system
Name:                 coredns-6987776bbd-mvchx
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 ip-10-255-101-100.eu-west-1.compute.internal/10.255.101.100
Start Time:           Mon, 26 Oct 2020 15:24:16 +0300
Labels:               eks.amazonaws.com/component=coredns
                      k8s-app=kube-dns
                      pod-template-hash=6987776bbd
Annotations:          cni.projectcalico.org/podIP: 192.168.153.129/32
                      cni.projectcalico.org/podIPs: 192.168.153.129/32
                      eks.amazonaws.com/compute-type: ec2
                      kubernetes.io/psp: eks.privileged
Status:               Running
IP:                   192.168.153.129
IPs:
  IP:           192.168.153.129
Controlled By:  ReplicaSet/coredns-6987776bbd
....

Step 5: Install calicoctl command line tool

The calicoctl tool enables cluster users to read, create, update and delete Calico objects from the command line interface. Run the following commands to install calicoctl.

Linux:

curl -s https://api.github.com/repos/projectcalico/calicoctl/releases/latest | grep browser_download_url | grep linux-amd64 | grep -v wait | cut -d '"' -f 4 | wget -i -
chmod +x calicoctl-linux-amd64
sudo mv calicoctl-linux-amd64 /usr/local/bin/calicoctl

macOS:

curl -s https://api.github.com/repos/projectcalico/calicoctl/releases/latest | grep browser_download_url | grep darwin-amd64| grep -v wait | cut -d '"' -f 4 | wget -i -
chmod +x calicoctl-darwin-amd64
sudo mv calicoctl-darwin-amd64 /usr/local/bin/calicoctl

Next, read how to configure calicoctl to connect to your datastore.
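On an EKS cluster, the usual choice is the Kubernetes API datastore. Below is a minimal sketch of /etc/calico/calicoctl.cfg under that assumption; the kubeconfig path is an example and should be adjusted for your machine:

```yaml
# Sketch of /etc/calico/calicoctl.cfg using the Kubernetes API
# datastore; the kubeconfig path shown is an assumption.
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
spec:
  datastoreType: "kubernetes"
  kubeconfig: "/home/user/.kube/config"
```

Alternatively, you can export `DATASTORE_TYPE=kubernetes` and `KUBECONFIG=~/.kube/config` in your shell before running calicoctl commands.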
