Kubernetes cluster deployment on CentOS 7 / CentOS 8 using Ansible and Calico CNI

Do you want to set up a three-node Kubernetes cluster for your development project on CentOS 7 / CentOS 8 – one master with two or more worker nodes? This guide walks you through the steps to set up a Kubernetes cluster on CentOS 8 / CentOS 7 Linux machines using Ansible and Calico CNI, with a firewall configured. Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications.

Similar Kubernetes deployment guides:

  • How to deploy a lightweight Kubernetes cluster with K3s in 5 minutes
  • Deploy a production-ready Kubernetes cluster with Ansible and Kubespray

My lab is based on the following environments:

Machine Type       Hostname                              IP Address
Control node       k8smaster01.computingforgeeks.com     192.168.122.10
Worker node 1      k8snode01.computingforgeeks.com       192.168.122.11
Worker node 2      k8snode02.computingforgeeks.com       192.168.122.12

First, make sure your systems are updated and all dependencies are installed, including the container runtime and Kubernetes packages, and that the firewall is configured for Kubernetes.

Step 1: Set standard requirements

I wrote an Ansible role to prepare standard Kubernetes nodes. The role contains the following tasks:

  • Install required basic packages
  • Set standard system requirements: disable swap, modify sysctl, disable SELinux
  • Install and configure the container runtime of your choice: CRI-O, Docker, or containerd
  • Install Kubernetes packages: kubelet, kubeadm, and kubectl
  • Configure firewalld on the Kubernetes master and worker nodes

Visit my GitHub page to set it up:

https://github.com/jmutai/k8s-pre-bootstrap
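
A quick sketch of how the role can be applied is shown below. The inventory file name, group layout, and playbook name are assumptions for illustration; follow the repository README for the exact structure and variables.

# Clone the repository
git clone https://github.com/jmutai/k8s-pre-bootstrap.git
cd k8s-pre-bootstrap

# Edit the inventory to list k8smaster01, k8snode01 and k8snode02, adjust the
# role variables (container runtime, firewall ports), then run the playbook.
# The inventory and playbook file names below are illustrative.
ansible-playbook -i hosts k8s-prep.yml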

This is the output from a recent run:

TASK [kubernetes-bootstrap : Open flannel ports on the firewall] ***************************************************************************************
skipping: [k8smaster01] => (item=8285) 
skipping: [k8smaster01] => (item=8472) 
skipping: [k8snode01] => (item=8285) 
skipping: [k8snode01] => (item=8472) 
skipping: [k8snode02] => (item=8285) 
skipping: [k8snode02] => (item=8472) 

TASK [kubernetes-bootstrap : Open calico UDP ports on the firewall] ************************************************************************************
ok: [k8snode01] => (item=4789)
ok: [k8smaster01] => (item=4789)
ok: [k8snode02] => (item=4789)

TASK [kubernetes-bootstrap : Open calico TCP ports on the firewall] ************************************************************************************
ok: [k8snode02] => (item=5473)
ok: [k8snode01] => (item=5473)
ok: [k8smaster01] => (item=5473)
ok: [k8snode01] => (item=179)
ok: [k8snode02] => (item=179)
ok: [k8smaster01] => (item=179)

TASK [kubernetes-bootstrap : Reload firewalld] *********************************************************************************************************
changed: [k8smaster01]
changed: [k8snode01]
changed: [k8snode02]

PLAY RECAP *********************************************************************************************************************************************
k8smaster01                : ok=23   changed=3    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0   
k8snode01                  : ok=23   changed=3    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0   
k8snode02                  : ok=23   changed=3    unreachable=0    failed=0    skipped=11   rescued=0    ignored=0   

Step 2: Initialize the single-node control plane

This deployment uses a single control plane node with integrated etcd. If you want to run multiple control plane nodes (three for HA), check out the official guide Creating Highly Available Clusters with kubeadm.

We will use kubeadm to bootstrap a minimum viable Kubernetes cluster in accordance with best practices. The benefit of kubeadm is that it also supports other cluster lifecycle functions, such as upgrades, downgrades, and managing bootstrap tokens.

Bootstrapping a single control plane node requires:

  • Control node machine’s default IP address
  • DNS name / load balancer IP (if more control nodes are planned to be added in the future)
  • SSH access as root or sudo

Log in to the control node:

$ ssh root@k8smaster01.computingforgeeks.com

Check the parameters that can be used to initialize the Kubernetes cluster:

$ kubeadm init --help
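
Optionally, you can pull the control plane container images ahead of time so the init step does not have to wait on downloads. This is the same command the preflight output suggests further below:

sudo kubeadm config images pull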

The standard parameters we will use are:

  • --pod-network-cidr: Used to specify the IP address range of the Pod network.
  • --apiserver-advertise-address: The API server will announce the IP address it is listening on.
  • --control-plane-endpoint: Specify a stable IP address or DNS name for the control plane.
  • --upload-certs: Upload the control plane certificates to the kubeadm-certs Secret.
  • If Calico is used, the recommended Pod network is: 192.168.0.0/16
  • For flannel, it is recommended to set the Pod network to 10.244.0.0/16
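
The same options can also be captured in a kubeadm configuration file and passed with --config instead of individual flags. Below is a minimal sketch for this lab, assuming the v1beta2 kubeadm API shipped with Kubernetes v1.17; the file name is illustrative.

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.122.10
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"

You would then run sudo kubeadm init --config kubeadm-config.yaml --upload-certs instead of passing the flags on the command line.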

In my case, I will run the following command:

sudo kubeadm init \
  --apiserver-advertise-address=192.168.122.10 \
  --pod-network-cidr 192.168.0.0/16 \
  --upload-certs

To be able to upgrade a single control plane kubeadm cluster to high availability later, you should specify --control-plane-endpoint to set a shared endpoint for all control plane nodes. Such an endpoint can be the DNS name or IP address of a load balancer.

kubeadm init \
  --apiserver-advertise-address=192.168.122.227 \
  --pod-network-cidr 192.168.0.0/16 \
  --control-plane-endpoint <DNS-name-or-IP-of-load-balancer> \
  --upload-certs

This is my installation output:

W0109 20:27:51.787966   18069 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0109 20:27:51.788126   18069 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster01.computingforgeeks.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster01.computingforgeeks.com localhost] and IPs [192.168.122.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster01.computingforgeeks.com localhost] and IPs [192.168.122.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0109 20:32:51.776569   18069 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0109 20:32:51.777334   18069 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.507327 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
bce5c1ad320f4c64e42688e25526615d2ffd7efad3e749bc0c632b3a7834752d
[mark-control-plane] Marking the node k8smaster01.computingforgeeks.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster01.computingforgeeks.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nx1jjq.u42y27ip3bhmj8vj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.10:6443 --token nx1jjq.u42y27ip3bhmj8vj \
    --discovery-token-ca-cert-hash sha256:c6de85f6c862c0d58cc3d10fd199064ff25c4021b6e88475822d6163a25b4a6c

Copy the kubectl configuration file to your home directory:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
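
With the kubeconfig in place, you can confirm that kubectl can reach the new cluster. Note that the master may report NotReady until a Pod network is deployed in the next step:

kubectl cluster-info
kubectl get nodes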

Also check out: Easily manage multiple Kubernetes clusters with kubectl and kubectx

Deploy a Pod network to the cluster

I will use Calico, but you are free to use any other Pod Network plugin of your choice.

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

As shown in the output below, this will create many resources.

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

Use the following command to confirm that all Pods are running.

watch kubectl get pods --all-namespaces

Once everything is working as expected, the output will look like this.

NAMESPACE     NAME                                                        READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5c45f5bd9f-c8mwx                    1/1     Running   0          3m45s
kube-system   calico-node-m5qmb                                           1/1     Running   0          3m45s
kube-system   coredns-6955765f44-cz65r                                    1/1     Running   0          9m43s
kube-system   coredns-6955765f44-mtch2                                    1/1     Running   0          9m43s
kube-system   etcd-k8smaster01.computingforgeeks.com                      1/1     Running   0          9m59s
kube-system   kube-apiserver-k8smaster01.computingforgeeks.com            1/1     Running   0          9m59s
kube-system   kube-controller-manager-k8smaster01.computingforgeeks.com   1/1     Running   0          9m59s
kube-system   kube-proxy-bw494                                            1/1     Running   0          9m43s
kube-system   kube-scheduler-k8smaster01.computingforgeeks.com            1/1     Running   0          9m59s

Note that each Pod has a STATUS of Running.

Check the Calico documentation for more details.

Step 3: Add the worker nodes to the cluster

Now that the control plane node is ready, you can add the worker nodes where workloads (containers, Pods, etc.) will run. You need to do this on every machine that will run Pods.

  • SSH to the machine:
$ ssh root@k8snode01.computingforgeeks.com
  • Run the join command output by kubeadm init, e.g.:
sudo kubeadm join 192.168.122.10:6443 --token nx1jjq.u42y27ip3bhmj8vj \
    --discovery-token-ca-cert-hash sha256:c6de85f6c862c0d58cc3d10fd199064ff25c4021b6e88475822d6163a25b4a6c

If the token has expired, you can use the following command to generate a new token:

kubeadm token create

List existing tokens:

kubeadm token list

You can use the following command to get the value of --discovery-token-ca-cert-hash:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
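
Alternatively, kubeadm can create a fresh token and print the complete join command, including the CA certificate hash, in a single step:

kubeadm token create --print-join-command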

This is the output of the join command:


[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Run the same join command on all the other worker nodes, then confirm from the control plane that they have joined the cluster:

$ kubectl get nodes
NAME                                STATUS   ROLES    AGE     VERSION
k8smaster01.computingforgeeks.com   Ready    master   26m     v1.17.0
k8snode01.computingforgeeks.com     Ready    <none>   4m35s   v1.17.0
k8snode02.computingforgeeks.com     Ready    <none>   2m4s    v1.17.0
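
Optionally, you can label the worker nodes so that the ROLES column shows something more descriptive than <none>. The label key below is a common convention rather than something kubeadm requires:

kubectl label node k8snode01.computingforgeeks.com node-role.kubernetes.io/worker=worker
kubectl label node k8snode02.computingforgeeks.com node-role.kubernetes.io/worker=worker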

Step 4: Deploy Metrics Server to a Kubernetes cluster

Metrics Server is a cluster-wide aggregator of resource usage data. It collects metrics from the Summary API exposed by the kubelet on each node. Use the following guide for deployment:

How to deploy Metrics Server to a Kubernetes cluster
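
Once Metrics Server is running, you can query node and Pod resource usage with kubectl top, for example:

kubectl top nodes
kubectl top pods --all-namespaces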

With that, you now have a running Kubernetes cluster that you can use to develop cloud-native applications. We also provide additional Kubernetes guides, such as:

  • How to manually pull container images used by Kubernetes kubeadm
  • Installing and using Helm 3 on a Kubernetes cluster
  • Installing and using Helm 2 on a Kubernetes cluster
  • Create a Kubernetes service / user account and restrict it to one namespace using RBAC
  • How to configure Kubernetes dynamic volume provisioning with Heketi and GlusterFS
