Use Kubespray to deploy a highly available Kubernetes cluster on CentOS 7

As the title indicates, in this guide, we will focus on building a highly available Kubernetes cluster through HAProxy and Keepalived to ensure that all services can be performed as usual when any master node encounters technical difficulties. We will use the power of Kubespray to make our work as simple as possible.

For the architecture, the diagram below the installation prerequisites section makes the layout clear. We will install HAProxy and Keepalived on the three master nodes so that they coexist with etcd and the api-server. In addition, in this setup we will use containerd instead of Docker as the container runtime.

With this, you can continue to use Docker to build images, while Kubernetes uses containerd to pull and run them.

Installation prerequisites

For this deployment to begin and proceed successfully, we need an additional server or computer to act as the installation machine. It will hold the Kubespray files, connect to the servers where Kubernetes is to be installed, and drive the installation on them. The figure below summarizes the deployment architecture: three master nodes (which also host the three etcd members) and two worker nodes.

  • prod-master1 10.38.87.251
  • prod-master2 10.38.87.252
  • prod-master3 10.38.87.253
  • prod-worker1 10.38.87.254
  • prod-worker2 10.38.87.249
  • Virtual IP used for Keepalived: 10.38.87.250

Image source: https://www.programmersought.com

Make sure to generate SSH keys and copy your public key to all CentOS 7 servers where Kubernetes will be built.
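
If you have not generated a key yet, a minimal sketch of that step looks like the following, assuming the remote user is tech (the same user used for the deployment later on) and using the example IP addresses above:

ssh-keygen -t rsa -b 4096                 ## generate a key pair if you do not already have one
for ip in 10.38.87.251 10.38.87.252 10.38.87.253 10.38.87.254 10.38.87.249; do
  ssh-copy-id tech@${ip}                  ## copy the public key to each node
done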

Step 1: Prepare the server

Preparing the servers is a crucial step in ensuring that every aspect of the deployment runs smoothly to the end. In this step we will apply updates, install HAProxy and Keepalived on the master nodes, and make sure the important packages are in place. Run the following command on each server:

sudo yum -y update

On the master nodes, install HAProxy and Keepalived as follows:

sudo yum install epel-release
sudo yum install haproxy keepalived -y

Set SELinux to permissive mode on all master and worker nodes as follows:

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
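
You can verify the runtime mode afterwards with getenforce, which should now report Permissive:

$ getenforce
Permissive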

Step 2: Configure Keepalived

As its GitHub page explains, Keepalived implements a set of checkers to dynamically and adaptively maintain and manage load-balanced server pools based on their health, while the Virtual Router Redundancy Protocol (VRRP) provides high availability.

On the first master server, configure keepalived as follows:

$ sudo vim /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state MASTER
  advert_int 1
  virtual_router_id 51
  priority 101
  unicast_src_ip 10.38.87.251    ##Master 1 IP Address
  unicast_peer {
      10.38.87.252               ##Master 2 IP Address
      10.38.87.253               ##Master 3 IP Address
   }
  virtual_ipaddress {
    10.38.87.250                 ##Shared Virtual IP address
  }
  track_script {
    chk_haproxy
  }
}

On the second master server, configure keepalived as follows:

$ sudo vim /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state BACKUP
  advert_int 3
  virtual_router_id 50
  priority 100
  unicast_src_ip 10.38.87.252    ##Master 2 IP Address
  unicast_peer {
      10.38.87.253               ##Master 3 IP Address
      10.38.87.251               ##Master 1 IP Address
   }
  virtual_ipaddress {
    10.38.87.250                 ##Shared Virtual IP address
  }
  track_script {
    chk_haproxy
  }
}

On the third master server, configure keepalived as follows:

$ sudo vim /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state BACKUP
  advert_int 3
  virtual_router_id 49
  priority 99
  unicast_src_ip 10.38.87.253    ##Master 3 IP Address
  unicast_peer {
      10.38.87.251               ##Master 1 IP Address
      10.38.87.252               ##Master 2 IP Address
   }
  virtual_ipaddress {
    10.38.87.250                 ##Shared Virtual IP address
  }
  track_script {
    chk_haproxy
  }
}
  • vrrp_instance defines a single instance of the VRRP protocol running on the interface. It has been arbitrarily named VI_1.
  • state defines the initial state the instance should start in.
  • interface defines the interface on which VRRP runs.
  • virtual_router_id is the unique identifier of the VRRP instance.
  • priority is the advertised VRRP priority; the node with the highest priority wins the MASTER election. It can also be adjusted at runtime (here the chk_haproxy track_script raises it by the configured weight while HAProxy is healthy).
  • advert_int specifies the interval between advertisements (1 second on the master and 3 seconds on the backups in this configuration).
  • authentication specifies the information VRRP peers use to authenticate each other; it is not configured in this setup.
  • virtual_ipaddress defines the IP address(es) that VRRP is responsible for (there can be more than one).

Start and enable keepalived

After completing the configuration in each master node, start and enable keepalived as shown below

sudo systemctl start keepalived
sudo systemctl enable keepalived

After Keepalived is running, the node currently holding the MASTER state should have the virtual IP attached to its interface, as shown below.
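
To see which node currently holds it, inspect the interface with the ip command (replace eth0 if your interface is named differently); the output below comes from the node in the MASTER state:

$ ip addr show eth0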

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000        
    link/ether 52:54:00:f2:92:fd brd ff:ff:ff:ff:ff:ff
    inet 10.38.87.252/24 brd 10.38.87.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.38.87.250/32 scope global eth0

Step 3: Configure HAProxy

HAProxy is a free, very fast and reliable solution that provides high availability, load balancing and proxy for TCP and HTTP based applications. It is particularly suitable for websites with very high traffic and provides support for many of the most visited websites in the world. Over the years, it has become the de facto standard open source load balancer and is now available with most mainstream Linux distributions.

We will configure HAProxy in the three master nodes as follows:

$ sudo vim /etc/haproxy/haproxy.cfg
global

    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server prod-master1 10.38.87.251:6443 check
        server prod-master2 10.38.87.252:6443 check
        server prod-master3 10.38.87.253:6443 check

After completing the configuration, allow the configured port through the firewall, then restart and enable the haproxy service.

sudo firewall-cmd --permanent --add-port=8443/tcp && sudo firewall-cmd --reload 
sudo systemctl restart haproxy
sudo systemctl enable haproxy
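
You can also validate the configuration syntax and confirm that the service is running; haproxy's -c flag parses and checks the configuration file without starting the proxy:

sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl status haproxy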

Step 4: Clone the Kubespray Git repository and add configuration

In this step, we will fetch the Kubespray files onto the local (installer) machine and then perform the necessary configuration: selecting containerd as the container runtime and filling in the inventory files with the details of our servers (masters, workers and etcd nodes).

cd ~
git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...

Go to the project directory:

$ cd kubespray

This directory contains manifest files and scripts for deploying Kubernetes.
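
The master branch tracks ongoing development, so you may prefer to check out a release branch before proceeding; the branch name below is only an example, so check the repository for the latest release:

$ git checkout release-2.15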

Step 5: Prepare the local computer

On the local computer from which you want to run the deployment, you need to install the pip Python package manager.

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --user
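
Confirm that pip is available for the Python interpreter you will be using (the reported version will differ on your system):

$ python3 -m pip --version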

Step 6: Create a Kubernetes cluster manifest file and install dependencies

The inventory consists of 3 groups:

  • kube-node: the list of Kubernetes nodes where pods will run.
  • kube-master: the list of servers running the Kubernetes control-plane components (apiserver, scheduler, controller-manager).
  • etcd: the list of servers that make up the etcd cluster. You should have at least 3 servers for failover.

There are also two special groups:

  • calico-rr: used for advanced Calico networking cases.
  • bastion: configure a bastion host if the nodes cannot be reached directly.

Create the inventory from the provided sample:

cp -rfp inventory/sample inventory/mycluster

Define the inventory with your servers’ IP addresses and map each host to its intended role.

$ vim inventory/mycluster/inventory.ini

master0   ansible_host=10.38.87.251 ip=10.38.87.251
master1   ansible_host=10.38.87.252 ip=10.38.87.252
master2   ansible_host=10.38.87.253 ip=10.38.87.253
worker1   ansible_host=10.38.87.254 ip=10.38.87.254
worker2   ansible_host=10.38.87.249 ip=10.38.87.249

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
master0
master1
master2

[etcd]
master0
master1
master2

[kube-node]
worker1
worker2

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr

Add host entries for the nodes to /etc/hosts on the workstation.

$ sudo vim /etc/hosts
10.38.87.251 master0
10.38.87.252 master1
10.38.87.253 master2
10.38.87.254 worker1
10.38.87.249 worker2

If your SSH private key has a passphrase, add it to ssh-agent before starting the deployment.

$ eval `ssh-agent -s` && ssh-add
Agent pid 4516
Enter passphrase for /home/centos/.ssh/id_rsa: 
Identity added: /home/tech/.ssh/id_rsa (/home/centos/.ssh/id_rsa)

Install dependencies from requirements.txt

# Python 2.x
sudo pip install --user -r requirements.txt

# Python 3.x
sudo pip3 install -r requirements.txt

Confirm that the installation is correct.

$ ansible --version
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/tech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.5 (default, Jan 28 2021, 12:59:40) [GCC 9.3.0]

View and change the parameters under inventory/mycluster/group_vars

We will check and change the parameters under inventory/mycluster/group_vars to ensure that Kubespray uses containerd.

##Change from docker to containerd at around line 176 and add the two lines below

$ vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
container_manager: containerd
etcd_deployment_type: host
kubelet_deployment_type: host
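
Once the cluster has been deployed (Step 8), you can confirm that containerd is the runtime in use: kubectl get nodes -o wide includes a CONTAINER-RUNTIME column, which should show a containerd:// version for every node.

$ sudo kubectl get nodes -o wide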

Then make the following changes in the “inventory/mycluster/group_vars/all/all.yml” file

$ vim inventory/mycluster/group_vars/all/all.yml
##Add Load Balancer Details at around line 20
apiserver_loadbalancer_domain_name: "haproxy.computingforgeeks.com"    
loadbalancer_apiserver:
   address: 10.38.87.250
   port: 8443

## Deactivate Internal loadbalancers for apiservers at around line 26
loadbalancer_apiserver_localhost: false

Ensure that your node can resolve the load balancer domain name.
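
If the domain is not in DNS, a simple approach is to add a hosts entry on every node (and on the workstation) that points the name at the Keepalived virtual IP:

$ sudo vim /etc/hosts
10.38.87.250 haproxy.computingforgeeks.com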

Step 7: Allow necessary Kubernetes ports on the firewall

Kubernetes uses many ports to provide different services. Therefore, we need to allow them to be accessed on the firewall as follows.

On the three master nodes, the following ports are allowed

sudo firewall-cmd --permanent --add-port={6443,2379-2380,10250-10252,179}/tcp --add-port=4789/udp && sudo firewall-cmd --reload

On the worker node, allow the necessary ports as follows:

sudo firewall-cmd --permanent --add-port={10250,30000-32767,179}/tcp --add-port=4789/udp && sudo firewall-cmd --reload

Then enable IP forwarding on all nodes, as shown below:

sudo modprobe br_netfilter
sudo sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
sudo sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"

Step 8: Deploy a Kubernetes cluster using Kubespray Ansible Playbook

Now execute the playbook to deploy production-ready Kubernetes with Ansible. Please note that the target servers must have Internet access in order to pull images.

Start the deployment by running the following command:

ansible-playbook -i inventory/mycluster/inventory.ini --become \
  --user=tech --become-user=root cluster.yml

Replace “tech” with the remote user Ansible will connect as. You should not get any failed tasks during execution.

After the script is executed to the end, log in to the master node and check the cluster status.

$ sudo kubectl cluster-info
Kubernetes master is running at https://haproxy.computingforgeeks.com:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

You can also check the node

$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master0   Ready    master   33h   v1.19.5
master1   Ready    master   29h   v1.19.5
master2   Ready    master   29h   v1.19.5
worker1   Ready    <none>   29h   v1.19.5
worker2   Ready    <none>   29h   v1.19.5
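
You can also confirm that the control-plane and networking pods came up across the cluster:

$ sudo kubectl get pods -n kube-system -o wide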

Step 9: Install the Kubernetes dashboard (optional)

This is an optional step if you do not have another way of accessing the Kubernetes cluster through a friendly interface such as Lens. To install the dashboard, follow the detailed guide below:

How to install the Kubernetes dashboard using NodePort

Once it is running, you will need to create an administrator user to access your cluster. The guide below covers that:

Create an admin user to access the Kubernetes dashboard

Conclusion

Kubespray makes Kubernetes deployment a breeze. Thanks to the team that developed the playbooks behind this complex deployment, we now have a ready platform waiting for your applications to serve the world. If you plan to set up a larger cluster, simply add the servers for the various roles (etcd, master, worker and so on) to the inventory and Kubespray will handle the rest. May your year flourish, may your efforts yield fruitful results, and may your investment be rewarded. Let us face it with tenacity, laughter, hard work and grace.

Other guides you might like:

Use Weave Scope to monitor Docker containers and Kubernetes

Kubectl cheat sheet for Kubernetes administrators and CKA exam preparation

Install Grafana on Kubernetes for cluster monitoring
