Use Kubespray to deploy a Kubernetes cluster on Debian 10


2020 was a stormy year; that much is a self-evident consensus. As the new year approaches, painful memories remain engraved on the minds and hearts of many people around the world. Lessons have been learned and opportunities realized, but the most important thing is that you are safe. With the coming of 2021, we hope that the dark clouds that shrouded the past year will clear, and that pure sunlight will illuminate our hopes and dreams.

In this guide, we will focus on Kubernetes. Yes, this platform will continue to change the way we deploy and manage business applications. Whether you use CI/CD or prefer a manual approach, Kubernetes remains the best choice for deploying, managing, scaling, and orchestrating applications. For those who don’t know, Kubernetes will deprecate dockershim in future versions, which means it will no longer support Docker as a container runtime. Before you lose your temper over this, it is important to note that the change will not affect the way you build and deploy images. Docker contains many components that Kubernetes does not need; all Kubernetes requires is a simple, lightweight container runtime to start and run containers.

As you may have guessed, you can continue to use Docker to build images, and Kubernetes will use another container runtime (such as containerd or CRI-O) to pull and run them. containerd and CRI-O are very lightweight and plug cleanly into the Container Runtime Interface (CRI) specification.

Installation prerequisites

For this deployment to begin and proceed successfully, we will need an additional server or computer to act as the installation server. This machine will hold the Kubespray files, connect to the servers where Kubernetes is to be installed, and then carry out the installation on them. The figure below simplifies the deployment architecture: a master node (which also hosts etcd) and two worker nodes.

Make sure to generate the SSH key on the builder machine and copy the public key to the Debian 10 server where Kubernetes will be built.
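
For example, you could generate and distribute the key on the builder machine as shown below. This is a minimal sketch: it assumes your remote user is “tech” (the user referenced later in this guide) and uses the node hostnames from our architecture.

# Generate a key pair on the builder machine (accept the default path)
ssh-keygen -t rsa -b 4096

# Copy the public key to every node that will join the cluster
for host in master0 worker1 worker2; do
  ssh-copy-id tech@$host
done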

In summary, we will use Kubespray to install Kubernetes on the Debian 10 servers, with containerd as the container runtime. The following steps are sufficient to prepare your cluster to host your applications.

Step 1: Prepare the server

Preparing the servers is a crucial step for ensuring that every aspect of the deployment runs smoothly to the end. In this step, we will perform a simple update and make sure important packages are installed. Issue the following commands on each server to get everything started.

sudo apt update
sudo apt upgrade
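
The builder machine also needs git and Python 3 for the steps that follow (pip itself is installed in Step 3). If they are missing, on Debian 10 you can add them with:

sudo apt install -y git python3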

Step 2: Clone the Kubespray Git repository and add configuration

In this step, we will fetch the Kubespray files onto the local (installer) computer, then perform the necessary configuration by choosing containerd as the container runtime and populating the required files with the details of our servers (masters and workers).

$ cd ~
$ git clone https://github.com/kubernetes-sigs/kubespray.git
Cloning into 'kubespray'...

Go to the project directory:

$ cd kubespray

This directory contains the Ansible playbooks and scripts for deploying Kubernetes.
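
If you want a reproducible build, you can optionally pin to a release branch instead of the default branch. The branch name below is illustrative; check the repository for the releases available at the time you deploy.

$ git branch -r | grep release    # list available release branches
$ git checkout release-2.15       # example: a branch from the Kubernetes v1.19.x era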

Step 3: Prepare the local computer

On the local computer from which you want to run the deployment, you need to install the pip Python package manager.

curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --user
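
Because of the --user flag, pip lands in ~/.local/bin, which is not always on your PATH. A quick sanity check:

python3 -m pip --version
# If the pip/pip3 commands are not found, add the user install directory to PATH:
export PATH="$HOME/.local/bin:$PATH"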

Step 4: Create a Kubernetes cluster inventory file and install dependencies

The inventory is composed of 3 groups:

  • kube-node: the list of Kubernetes nodes where Pods will run.
  • kube-master: the list of servers that will run the Kubernetes master components (apiserver, scheduler, controller-manager).
  • etcd: the list of servers that make up the etcd cluster. You should have at least 3 servers for failover.

There are also two special groups:

  • calico-rr: for advanced Calico networking use cases (route reflectors)
  • bastion: configure a bastion host if your nodes are not directly reachable

Create the inventory file:

cp -rfp inventory/sample inventory/mycluster

Define the inventory with your servers’ IP addresses and map each host to its intended role.

$ vim inventory/mycluster/inventory.ini

[all]
master0   ansible_host=172.20.193.154 ip=172.20.193.154
worker1   ansible_host=172.20.198.157 ip=172.20.198.157
worker2   ansible_host=172.20.202.161 ip=172.20.202.161

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
master0

[etcd]
master0

[kube-node]
worker1
worker2

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr

Add host entries for the nodes to /etc/hosts on the workstation.

$ sudo vim /etc/hosts
172.20.193.154 master0
172.20.198.157 worker1
172.20.202.161 worker2

If your SSH private key has a passphrase, load it into ssh-agent before starting the deployment.

$ eval `ssh-agent -s` && ssh-add
Agent pid 4516
Enter passphrase for /home/centos/.ssh/id_rsa: 
Identity added: /home/centos/.ssh/id_rsa (/home/centos/.ssh/id_rsa)

Install the dependencies from requirements.txt:

# Python 2.x
sudo pip install -r requirements.txt

# Python 3.x
sudo pip3 install -r requirements.txt

Confirm that the installation is correct.

$ ansible --version
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/tech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
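
Before launching the full playbook, it can save time to confirm that Ansible can actually reach every host in the inventory. A minimal check, assuming the remote user “tech” used elsewhere in this guide:

$ ansible -i inventory/mycluster/inventory.ini all -m ping --user=tech
# Each host should reply with SUCCESS and "ping": "pong"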

View and change the parameters under inventory/mycluster/group_vars

We will review and change the parameters under inventory/mycluster/group_vars to ensure that Kubespray uses containerd as the container runtime.

## Change from docker to containerd at around line 176, and add the two lines below it

$ vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

container_manager: containerd
etcd_deployment_type: host
kubelet_deployment_type: host
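
To double-check that the edits took effect, you can grep the file for the three keys; the output should echo exactly the values set above:

$ grep -E '^(container_manager|etcd_deployment_type|kubelet_deployment_type)' \
    inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml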

Step 5: Deploy Kubernetes cluster using Kubespray Ansible Playbook

Now execute the playbook to deploy production-ready Kubernetes with Ansible. Please note that the target servers must have Internet access in order to pull images.

Start the deployment by running the following command:

ansible-playbook -i inventory/mycluster/inventory.ini --become \
--user=tech --become-user=root cluster.yml

Replace “tech” with the remote user Ansible will connect to the nodes as. You should not get any failed tasks during execution.

The installation progress should look like the screenshot below:

[Screenshot: Kubespray installation progress]

The final messages will look like the screenshot below:

[Screenshot: Kubespray final play recap]

After the playbook runs to completion, log in to the master node and check the cluster status.

$ sudo kubectl cluster-info
Kubernetes master is running at https://172.20.193.154:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
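
If you would rather run kubectl without sudo, a common convenience step (not performed by Kubespray itself) is to copy the admin kubeconfig into your home directory on the master node:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl cluster-info    # now works without sudo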

You can also check the nodes:

$ sudo kubectl get nodes

NAME      STATUS   ROLES    AGE   VERSION
master0   Ready    master   11m   v1.19.5
worker1   Ready    <none>   10m   v1.19.5
worker2   Ready    <none>   10m   v1.19.5

$ sudo kubectl get endpoints -n kube-system

NAME                      ENDPOINTS                                                     AGE
coredns                   10.233.101.1:53,10.233.103.1:53,10.233.101.1:53 + 3 more...   23m
kube-controller-manager   <none>                                                        27m
kube-scheduler            <none>                                                        27m

Step 6: Install the Kubernetes dashboard and set up access

This step is optional: it is for those who have no other way of accessing the Kubernetes cluster through a nice interface like Lens or VMware Octant. To install the dashboard, follow the detailed guide below.

How to install the Kubernetes dashboard using NodePort

Once it is running, you will need to create an administrator user to access your cluster. Use the following guide to get that sorted:

Create an admin user to access the Kubernetes dashboard

If you want, you can also authenticate users via Active Directory by following the guide on how to authenticate users on the Kubernetes dashboard using Active Directory.

Step 7: Install the Nginx-Ingress controller

In this step, we will add an ingress controller to help us access our services from outside the cluster. The easiest way to enable external access to a service is the NodePort service type. The disadvantage of NodePort is that services must use a limited range of ports (by default, 30000 to 32767), and a single port can be mapped to only one service.

Ingress resources make it possible to expose multiple services through a single external endpoint, a load balancer, or both. With this approach, teams can define hosts, path prefixes, and other rules to route traffic to the service resources they choose.

Therefore, we will install the Nginx Ingress controller and configure it so that we can access a sample application we will deploy later. To install the Nginx Ingress controller, download the following manifest and apply it to your cluster:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/baremetal/deploy.yaml

Then apply it with kubectl:

sudo kubectl apply -f deploy.yaml

After a few moments, confirm that the ingress controller Pods are running:

$ sudo kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-f7d8c        0/1     Completed   0          109m
ingress-nginx-admission-patch-4fxtb         0/1     Completed   0          109m
ingress-nginx-controller-85df779996-b9c2k   1/1     Running     0          109m

Next, we will deploy the httpbin application and then use our ingress controller to access it. Fetch httpbin as follows:

wget https://github.com/istio/istio/raw/master/samples/httpbin/httpbin.yaml

Then apply it with kubectl:

sudo kubectl apply -f httpbin.yaml

If you wish to deploy it in another namespace, simply edit the file before applying it.
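
Before wiring up the Ingress, you can verify that the httpbin Deployment and Service came up; the app=httpbin label below is the one used in the upstream manifest:

sudo kubectl get pods,svc -l app=httpbin
# Expect one httpbin Pod in Running state and a ClusterIP Service on port 8000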

Step 8: Add entry rules to access your service

So far, we have a working ingress controller and a sample deployment (httpbin) that we will use to test how everything works. Create the following Ingress resource targeting httpbin. If you read through the manifest we fetched earlier, you will notice that it creates a Service named “httpbin” exposed on port 8000. With this information, let’s create an Ingress:

$ vim httpbin-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: master.computingforgeeks.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 8000

Save the file, and then apply it to the cluster. Please note that “master.computingforgeeks.com” must resolve to the IP of the ingress, as shown below.

$ sudo kubectl apply -f httpbin-ingress.yaml
ingress.networking.k8s.io/httpbin-ingress configured

Then confirm that the Ingress has been successfully created:

$ sudo kubectl get ing
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME              CLASS    HOSTS                           ADDRESS          PORTS   AGE
approot           <none>   master1.computingforgeeks.com   172.20.202.161   80      168m
httpbin-ingress   <none>   master.computingforgeeks.com    172.20.202.161   80      108m
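
If “master.computingforgeeks.com” does not exist in your DNS, a quick way to test from a workstation is a hosts-file entry that points the name at the value in the ADDRESS column. The IP below matches the sample output above; substitute your own:

$ sudo vim /etc/hosts
172.20.202.161 master.computingforgeeks.com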

What is happening here is that any traffic hitting the root URL of “master.computingforgeeks.com” will be automatically routed to the httpbin service. Isn’t that beautiful?

How do we test that we can actually reach the service? First, let’s take a look at what the Ingress controller’s Service looks like.

$ sudo kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.233.8.37    <none>        80:30242/TCP,443:31635/TCP   123m       
ingress-nginx-controller-admission   ClusterIP   10.233.51.71   <none>        443/TCP                      123m 

You will notice that our “ingress-nginx-controller” Service is exposed via NodePort. This is very useful information, because accessing the application any other way could prove frustrating. With this in mind, open your browser and point it at the application on http://master.computingforgeeks.com:30242, making sure that port 30242 is open in the firewall of the “master.computingforgeeks.com” node. You should see something like the image below:

[Screenshot: httpbin served through the Nginx ingress]
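
You can also test from the command line with curl. httpbin’s /get endpoint simply echoes the request back; the NodePort 30242 comes from the Service output above and will differ on your cluster:

curl -i http://master.computingforgeeks.com:30242/get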

Conclusion

Kubespray makes Kubernetes deployment a breeze. Thanks to the team that developed the scripts behind this complex deployment, we now have a ready platform waiting for your applications to serve the world.

If you plan to set up a larger cluster, just map the various components (etcd, masters, workers, and so on) in the inventory, and Kubespray will handle the rest. May your year flourish, may your efforts yield fruitful results, and may your investments be rewarded. Let us face it all with tenacity, laughter, hard work, and grace.

Other guides you might like:

How to install Active Directory Domain Services in Windows Server 2019

Use Weave Scope to monitor Docker containers and Kubernetes

Kubectl cheat sheet for Kubernetes administrators and CKA exam preparation

Use Splunk forwarder to send logs to Splunk on Kubernetes

How to ship Kubernetes logs to external Elasticsearch

Use Ceph RBD to persistently store Kubernetes

