How to install MicroK8s Kubernetes cluster on CentOS 8

MicroK8s is a CNCF-certified upstream Kubernetes deployment that runs entirely on your workstation or edge device. Because it is a snap, it runs all Kubernetes services natively (i.e. with no virtual machines) and packages the entire set of libraries and binaries needed. Installation is limited only by how fast you can download a few hundred megabytes, and removing MicroK8s leaves nothing behind (source: the Ubuntu MicroK8s page).

CNCF runs a Kubernetes conformance certification program to ensure consistency, and thereby smooth interoperability, from one Kubernetes installation to the next. Software conformance ensures that every vendor's version of Kubernetes supports the required APIs, just as the open-source community version does (source: the CNCF website).

In this guide, we will complete the following tasks together:

  • Install a Kubernetes cluster using MicroK8s
  • Enable core Kubernetes add-ons such as dns and dashboard
  • Deploy pods and add a new node
  • Configure storage
  • Enable logging, Prometheus and Grafana monitoring
  • Configure the registry

All we need is a Linux distribution that supports Snap. In this guide, we will use CentOS 8. Let's start.

Step 1: Update the server and install snapd

In order to start on a clean and ready platform, we will update the server to get the latest patches and software, add the EPEL repository, and then install the snapd package from EPEL. Run the following commands to complete this task.

sudo dnf install epel-release -y
sudo dnf update
sudo dnf -y install snapd

Disable SELinux

If SELinux is in enforcing mode, turn it off or switch it to permissive mode.

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config

After the package is installed, the systemd unit that manages the main snap communication socket needs to be enabled as follows:

sudo systemctl enable --now snapd.socket

Also, to enable classic snap support, create a symbolic link from /snap to /var/lib/snapd/snap, then add the snap binary directory to the PATH variable:

sudo ln -s /var/lib/snapd/snap /snap
echo 'export PATH=$PATH:/var/lib/snapd/snap/bin' | sudo tee -a /etc/profile.d/mysnap.sh

After that, log out and log in again, or reboot the system, to make sure the snap paths are updated correctly. Snap is now installed. To test it, we can search for a package and see if it works as expected:

$ snap find microk8s

Name      Version  Publisher   Notes    Summary
microk8s  v1.19.0  canonical✓  classic  Lightweight Kubernetes for workstations and appliances
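Incidentally, you do not have to log out just to verify the PATH change: you can source the snippet in your current shell and check. A minimal self-contained sketch (it writes the same export line to a temporary file, so nothing on the system is modified):

```shell
# Write the same export line to a temporary file, source it, and
# confirm the snap binary directory ended up on PATH.
tmp=$(mktemp)
echo 'export PATH=$PATH:/var/lib/snapd/snap/bin' > "$tmp"
. "$tmp"
case ":$PATH:" in
  *:/var/lib/snapd/snap/bin:*) echo "snap bin on PATH" ;;
  *) echo "snap bin missing" ;;
esac
rm -f "$tmp"
# prints: snap bin on PATH
```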

Step 2: Install MicroK8s on CentOS 8

Now that our server is updated and Snap is installed, we are ready to install MicroK8s and start using it to test and run our applications in containers. To install MicroK8s, run the simple snap command below and we are all set. That is the beauty of Snappy.

$ sudo snap install microk8s --classic
microk8s v1.19.0 from Canonical✓ installed

If you omit the `--classic` switch, the installation will fail, so be sure to add it.

To be able to run the microk8s command as a regular user, you must add your user to the microk8s group, then log out and log in again. Add the user as follows:

sudo usermod -aG microk8s $USER
sudo chown -f -R $USER ~/.kube

Once the permissions are in place, log out and log in again.

After that, we can list the installed snaps:

$ snap list

Name      Version      Rev   Tracking       Publisher   Notes  
core      16-2.45.3.1  9804  latest/stable  canonical✓  core   
microk8s  v1.19.0      1668  latest/stable  canonical✓  classic

In order to add new nodes later, we need to open some ports on the server. This applies if you are running a firewall on the server. Add the ports as follows:

sudo firewall-cmd  --permanent --add-port={10255,12379,25000,16443,10250,10257,10259,32000}/tcp
sudo firewall-cmd --reload

Step 3: Manage MicroK8s on CentOS 8

Now that MicroK8s is installed on our server, we can get rolling. To manage MicroK8s (i.e. start, check status, stop, enable, disable, list nodes and so on), simply run the following commands:

#####Check Status#####

$ microk8s status

microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none

#####Stop MicroK8s#####

$ microk8s stop

stop of [microk8s.daemon-apiserver microk8s.daemon-apiserver-kicker microk8s.daemon-cluster-agent microk8s.daemon-containerd microk8s.daemon-contr…Stopped

#####Start MicroK8s#####

$ microk8s start

Started.

#####List MicroK8s Nodes#####

$ microk8s kubectl get nodes

NAME     STATUS   ROLES    AGE    VERSION
master   Ready    <none>   2m6s   v1.19.0-34+1a52fbf0753680

#####Disable MicroK8s#####

$ sudo snap disable microk8s

#####Enable MicroK8s#####

$ sudo snap enable microk8s

Great stuff! Our MicroK8s has been installed and can respond to our commands without any complaints. Let us move on to the next step.

Step 4: Deploy a pod and enable the dashboard

Here we will deploy a pod and enable the dashboard to make our work easier with a good visual overview. Let's deploy a sample Redis pod as follows:

$ microk8s kubectl create deployment my-redis --image=redis
deployment.apps/my-redis created

List deployed pods

$ microk8s kubectl get pods --all-namespaces

NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE  
kube-system   calico-kube-controllers-847c8c99d-f7zd2   1/1     Running   2          3h48m
kube-system   calico-node-xxhwx                         1/1     Running   2          3h48m
default       my-redis-56dcdd56-tlfpf                   1/1     Running   0          70s

And our new Redis pod is up and running!

If you wish to log in to the Redis instance, proceed as follows:

$ microk8s kubectl exec -it my-redis-56dcdd56-tlfpf -- bash
root@my-redis-56dcdd56-tlfpf:/data#

To check a pod's logs, make sure to include its namespace, since only the "default" namespace is checked if none is provided:

$ microk8s kubectl logs my-redis-56dcdd56-tlfpf -n default

1:C 14 Sep 2020 12:59:32.350 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 14 Sep 2020 12:59:32.350 # Redis version=6.0.8, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 14 Sep 2020 12:59:32.350 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 14 Sep 2020 12:59:32.352 * Running mode=standalone, port=6379.
1:M 14 Sep 2020 12:59:32.352 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower 
value of 128.
1:M 14 Sep 2020 12:59:32.352 # Server initialized
1:M 14 Sep 2020 12:59:32.352 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
1:M 14 Sep 2020 12:59:32.352 * Ready to accept connections

Next, let's enable the dashboard and dns add-ons to get a view of our workload. Enable them as follows:

$ microk8s enable dns dashboard

Enabling Kubernetes Dashboard
Enabling Metrics-Server
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created

We will need a token to log in to the dashboard. To obtain the token, issue the following two commands.

$ token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
$ microk8s kubectl -n kube-system describe secret $token

Name:         default-token-gnj26
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 40394cbe-7761-4de9-b49c-6d8df82aea32

Type:  kubernetes.io/service-account-token

Data

ca.crt:     1103 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6InBOVTc3NVd5ZDJHT1FYRmhWZFJ5ZlBVbVpMRWN5M1BEVDdwbE9zNU5XTDgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLWduajI2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MDM5NGNiZS03NzYxLTRkZTktYjQ5Yy02ZDhkZjgyYWVhMzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.vHqwKlMGX650dTuChwYWsPYZFH7zRhRjuh-BEPtgYYPfrayKU08DSC5v3CixkrZH-wXZydOPit50H5SfCZPwY1TqDNCFgbz--0GnB7GhgwSoK4laXpr42Al7EBgbMWfUEAtnWXVkObUYzF31Sfhac2AnPIBp2kFlqJt8M03uoperJuFLl5x-fDacGrcXTQqvY2m5K1oE4zE38vtaJXdzgNfBMbtUrMneihoFczzOzwPLxzJJ4eZ7vAz1svG6JHO5PDDYbV0gded0egoLQkhu4Saavf8ILUjupJdYywA2VCqB6ERrrElMBHs5tYfckfyi4f6eR59_EZkf7-neCDWTAg

Copy the token and keep it in a safe place.
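For reference, the first of the two commands works because `kubectl get secret` prints the secret name in the first column; `grep` and `cut` simply isolate it. A self-contained sketch of that parsing step, fed with sample output matching the secret shown above:

```shell
# Pipe sample 'kubectl get secret' output (matching the secret above)
# through the same grep/cut pipeline to isolate the token secret's name.
printf '%s\n' \
  'NAME                  TYPE                                  DATA   AGE' \
  'default-token-gnj26   kubernetes.io/service-account-token   3      4h' |
  grep default-token | cut -d " " -f1
# prints: default-token-gnj26
```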

Next, you need to connect to the dashboard service. While the kubernetes-dashboard service has a cluster IP reachable on the local network, you can also access the dashboard by forwarding its port to a free port on the host:

microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard --address 0.0.0.0 30560:443

Note that we added `--address 0.0.0.0` so that the dashboard can be reached from any IP rather than only locally (127.0.0.1) on the server. You will now be able to access the dashboard on port 30560. Make sure this port is open in your firewall in case you have one set up in your environment:

sudo firewall-cmd --permanent --add-port=30560/tcp
sudo firewall-cmd --reload

Now open a browser and point it to the IP or FQDN of the server, that is, https://[IP or FQDN]:30560. The login page below should be displayed. You will notice that it requires a token or a kubeconfig file. We already generated a token above; just copy and paste it on the login page.

Paste the token, and you should be presented with the dashboard.

Step 5: Add the node to your cluster

So far we have been working on a single node (server). If you want to scale out and distribute your applications across two or more nodes (servers), you can add nodes easily. To add another node to the cluster, you only need to install Snap and MicroK8s on it, as already covered in steps 1 and 2. Perform steps 1 and 2 on the new CentOS 8 server, then continue with the steps below.

If firewalld is running, allow the ports:

node-01 ~ $ export OPENSSL_CONF=/var/lib/snapd/snap/microk8s/current/etc/ssl/openssl.cnf
node-01 ~ $ sudo firewall-cmd --add-port={25000,10250,10255}/tcp --permanent
node-01 ~ $ sudo firewall-cmd --reload

On the master node (the node we installed first), run the following command to generate a token and the join command:

$ microk8s add-node

From the node you wish to join to this cluster, run the following:
microk8s join 172.26.24.237:25000/dbb2fa9e23dfbdda83c8cb2ae53eaa53

As you can see above, we now have the command to run on our worker node to join the cluster. Copy the command, log in to the worker node and execute it as shown below:

On the new node, execute the following command

node-01 ~ $ microk8s join 172.26.24.237:25000/dbb2fa9e23dfbdda83c8cb2ae53eaa53

Contacting cluster at 172.26.16.92
Waiting for this node to finish joining the cluster. ..
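For reference, the argument to `microk8s join` is simply `<master-endpoint>/<one-time-token>`. A small self-contained sketch splitting it with shell parameter expansion (values copied from the add-node output above):

```shell
# Split a microk8s join string into the master endpoint and the token.
join="172.26.24.237:25000/dbb2fa9e23dfbdda83c8cb2ae53eaa53"
endpoint=${join%%/*}   # part before the '/': master IP and port
token=${join##*/}      # part after the '/': one-time cluster token
echo "endpoint: $endpoint"   # prints: endpoint: 172.26.24.237:25000
echo "token: $token"         # prints: token: dbb2fa9e23dfbdda83c8cb2ae53eaa53
```

Note that each token is intended for a single join; run `microk8s add-node` on the master again for every additional node.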

Step 6: Configure storage

MicroK8s has built-in storage; it only needs to be enabled. To enable storage, add the /lib64 directory to the LD_LIBRARY_PATH environment variable, then enable storage on the master node as follows:

$ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/lib64"
$ microk8s enable storage

deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon

To check whether storage is enabled, we can list our pods and make sure the hostpath-provisioner pod has started.

$ microk8s kubectl get pods --all-namespaces

NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running   2          22h
kube-system   dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running   2          22h
kube-system   metrics-server-8bbfb4bdb-mgddj               1/1     Running   2          22h
kube-system   coredns-86f78bb79c-58sbs                     1/1     Running   2          22h
kube-system   calico-kube-controllers-847c8c99d-j84p5      1/1     Running   2          22h
kube-system   calico-node-zv994                            1/1     Running   2          21h
kube-system   hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running   0          71s <==

Confirm the created StorageClass by running the following command:

$ microk8s kubectl get storageclasses

NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  8m42s

As you can see, there is a storage class called "microk8s-hostpath". This is important because this name will be used when creating PersistentVolumeClaims, as shown below.

Create PersistentVolumeClaim

To create our example PersistentVolumeClaim, open your favorite editor and add the following YAML. Note the microk8s-hostpath value of storageClassName.

$ nano sample-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elk-data-1
spec:
  storageClassName: microk8s-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then create the PVC by running the create command as shown below. You should see the created message printed out.

$ microk8s kubectl create -f sample-pvc.yaml

persistentvolumeclaim/elk-data-1 created

To confirm that our PVC has been created, issue the following command. And yes, the PVC was indeed created.

$ microk8s kubectl get pvc

NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
elk-data-1   Bound    pvc-fe391d65-6909-4c76-a6c9-87b3fd5a58a1   2Gi        RWO            microk8s-hostpath   5s 

Since MicroK8s provisions persistent volumes dynamically, our PVC resulted in the creation of a PersistentVolume, as confirmed by the following command.

$ microk8s kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                STORAGECLASS       REASON   AGE
pvc-fe391d65-6909-4c76-a6c9-87b3fd5a58a1   2Gi        RWO            Delete           Bound    default/elk-data-1   microk8s-hostpath            5m39s
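To consume the claim from a workload, a pod references it by name under `volumes`. A minimal sketch of such a pod (the pod name and mount path here are made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo              # hypothetical pod name
spec:
  containers:
    - name: redis
      image: redis
      volumeMounts:
        - name: data
          mountPath: /data    # hypothetical mount path inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: elk-data-1   # the PVC created above
```

Any pod that mounts the claim this way will use the dynamically provisioned volume shown above.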

Step 7: Configure the registry

A registry is basically a storage and content-delivery system holding named Docker images, which can be available in different tagged versions as development progresses. MicroK8s has a built-in registry; you only need to enable and use it. Enabling the registry is as simple as the other services we have seen so far. The only thing to keep in mind is that if no size is specified, 20Gi is chosen as the default size of the registry. If you wish to specify the size, just add the size option as shown below; if you are happy with 20Gi, omit it.

$ microk8s enable registry:size=25Gi

The registry is enabled
The size of the persistent volume is 25Gi

Confirm that the registry pod is deployed:

$ microk8s kubectl get pods --all-namespaces

NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE  
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running   2          22h  
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running   2          22h  
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running   2          22h  
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running   2          22h  
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running   2          22h  
kube-system          calico-node-zv994                            1/1     Running   2          22h  
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running   0          52m  
container-registry   registry-9b57d9df8-6djrn                     1/1     Running   0          3m34s <==

To test the newly created registry, we will install Podman, pull an image, and push it to the local registry. All the commands are shown below:

$ sudo dnf -y install podman
$ podman pull redis

Confirm that the image has been pulled:

$ podman images

REPOSITORY                TAG      IMAGE ID       CREATED      SIZE
docker.io/library/redis   latest   84c5f6e03bf0   5 days ago   108 MB

As you can see, our image comes from the docker.io repository.

Next, edit the Podman configuration file and add the local registry under the [registries.insecure] section, since we will not be using any certificates. Make sure to add the IP or hostname of the server so that other nodes in the cluster can reach it. The registry listens on port 32000, which we already opened in the firewall in step 2.

$ sudo vim /etc/containers/registries.conf
[registries.insecure]
registries = ['172.26.16.92', '127.0.0.1']

As the podman images command above shows, our image comes from the docker.io registry. Let's tag it so that it points at our local registry, then push it:

$ podman tag 84c5f6e03bf0 172.26.16.92:32000/custom-redis:geeksregistry
$ podman push 172.26.16.92:32000/custom-redis:geeksregistry

Getting image source signatures
Copying blob ea96cbf71ac4 done
Copying blob 2e9c060aef92 done
Copying blob 7fb1fa4d4022 done
Copying blob 07cab4339852 done
Copying blob 47d8fadc6714 done
Copying blob 45b5e221b672 done
Copying config 84c5f6e03b done
Writing manifest to image destination
Storing signatures

Run the podman images command again to confirm the changes.

$ podman images

REPOSITORY                        TAG             IMAGE ID       CREATED      SIZE  
172.26.16.92:32000/custom-redis   geeksregistry   84c5f6e03bf0   5 days ago   108 MB
docker.io/library/redis           latest          84c5f6e03bf0   5 days ago   108 MB

Log in to the worker node and pull the image

Now we are ready to pull the image from the local registry we just enabled. Log in to your worker node, or any node with Podman installed, and try to pull the image from the master server. If you don't have Podman installed, just issue the following command:

node-01 ~ $ sudo dnf install -y podman

Again, on the worker node (or on any server you wish to pull the image from), edit the Podman configuration file and add the local registry under the [registries.insecure] section, since we are not using any certificates.

$ sudo vim /etc/containers/registries.conf

[registries.insecure]
registries = ['172.26.16.92', '127.0.0.1']

With everything in place, let's now try to pull the image from the MicroK8s registry.

node-01 ~ $ podman pull 172.26.16.92:32000/custom-redis:geeksregistry

Trying to pull 172.26.16.92:32000/custom-redis:geeksregistry...
Getting image source signatures
Copying blob 08c34a4060bc done
Copying blob 50fae304733d done
Copying blob 8de9fbb8976d done
Copying blob 72c3268a4367 done
Copying blob edbd7b7fe272 done
Copying blob b6c3777aabad done
Copying config 84c5f6e03b done
Writing manifest to image destination
Storing signatures
84c5f6e03bf04e139705ceb2612ae274aad94f8dcf8cc630fbf6d91975f2e1c9

Check image details

$ podman images

REPOSITORY                        TAG             IMAGE ID       CREATED      SIZE  
172.26.16.92:32000/custom-redis   geeksregistry   84c5f6e03bf0   5 days ago   108 MB

Now we have a working registry! Next, we will configure logging and monitoring on the MicroK8s cluster.

Step 8: Use FluentD, Elasticsearch and Kibana to enable logging

MicroK8s ships an add-on called fluentd that automatically deploys Fluentd, Elasticsearch and Kibana (EFK)! This makes it very easy to enable logging in the cluster using this mature, well-known stack, which makes MicroK8s even more lovable.

For EFK to start up without errors, you need at least 8GB of memory and 4 vCPUs. If you have limited memory, you can edit the Elasticsearch StatefulSet, as shown after fluentd is enabled.

Enable fluentd as follows:

$ microk8s enable fluentd

Enabling Fluentd-Elasticsearch
Labeling nodes
node/master labeled
Addon dns is already enabled.
Adding argument --allow-privileged to nodes.
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
configmap/fluentd-es-config-v0.2.0 created
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v3.0.2 created
deployment.apps/kibana-logging created
service/kibana-logging created
Fluentd-Elasticsearch is enabled

If elasticsearch-logging-0 remains Pending with its status unchanged, and kibana-logging sits in CrashLoopBackOff, log in to the dashboard and click the elasticsearch-logging-0 pod to view its events. If you see a "0/1 nodes are available: 1 Insufficient memory." error, edit the StatefulSet as follows.
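The edit itself is done with `microk8s kubectl -n kube-system edit statefulset elasticsearch-logging`. Inside the editor, lower the elasticsearch container's memory figures under `spec.template.spec.containers`; a sketch of the change (the 1Gi value is an assumption for a small host, since the pod asks for 3Gi by default):

```yaml
# Reduced resources for the elasticsearch-logging container
# (1Gi is an assumed value; size it to what your host can spare).
resources:
  limits:
    memory: 1Gi
  requests:
    memory: 1Gi
```

After saving, delete the pending elasticsearch-logging-0 pod so the StatefulSet recreates it with the new values.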

$ microk8s kubectl get pods --all-namespaces

NAMESPACE            NAME                                         READY   STATUS              RESTARTS   AGE
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running             2          24h
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running             2          24h
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running             2          24h
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running             2          24h
kube-system          calico-node-zv994                            1/1     Running             2          24h
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running             0          156m 
container-registry   registry-9b57d9df8-6djrn                     1/1     Running             0          107m 
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running             2          24h  
kube-system          elasticsearch-logging-0                      0/1     Pending             0          4m57s <==
kube-system          kibana-logging-7cf6dc4687-bvk46              0/1     ContainerCreating   0          4m57s
kube-system          fluentd-es-v3.0.2-lj7m8                      0/1     Running             1          4m57s


After editing, delete the elasticsearch-logging-0 pod so that it is recreated with the new configuration changes. Give MicroK8s time to pull and deploy the pods. After a while, everything should look as follows. Note that if you have enough memory and CPU, you are unlikely to encounter these errors, since the Elasticsearch pod requests 3GB of memory by default.

microk8s kubectl get pods --all-namespaces
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running   3          40h
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running   3          40h
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running   3          40h
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running   1          18h
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running   3          40h
container-registry   registry-9b57d9df8-6djrn                     1/1     Running   1          18h
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running   3          41h
kube-system          calico-node-zv994                            1/1     Running   3          40h
kube-system          elasticsearch-logging-0                      1/1     Running   0          20m <==
kube-system          fluentd-es-v3.0.2-j4hxt                      1/1     Running   10         25m <==
kube-system          kibana-logging-7cf6dc4687-mpsx2              1/1     Running   10         25m <==

Visit Kibana

Once the pods are running normally, we can access the Kibana interface to configure an index and start analyzing logs. For that, let's get the details of the kibana, fluentd and elasticsearch services. Issue the cluster-info command as follows:

$ microk8s kubectl cluster-info

Kubernetes master is running at https://127.0.0.1:16443
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Elasticsearch is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kibana-logging/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

While the kibana-logging service has a cluster IP reachable on the local network, you can also access Kibana by forwarding its port to a free port on the host:

$ microk8s kubectl port-forward -n kube-system service/kibana-logging --address 0.0.0.0 8080:5601

You can check the services in the namespace where EFK is deployed by issuing the following command. You will see that the kibana-logging service listens internally on port 5601, which we have forwarded to a free port on the server.

kubectl get services -n kube-system
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   42h
metrics-server              ClusterIP   10.152.183.8     <none>        443/TCP                  42h
kubernetes-dashboard        ClusterIP   10.152.183.88    <none>        443/TCP                  42h
dashboard-metrics-scraper   ClusterIP   10.152.183.239   <none>        8000/TCP                 42h
elasticsearch-logging       ClusterIP   10.152.183.64    <none>        9200/TCP                 48m
kibana-logging              ClusterIP   10.152.183.44    <none>        5601/TCP                 48m <==

Our Kibana is now listening on port 8080. If a firewall is running on your server, allow this port as shown below:

sudo firewall-cmd --add-port=8080/tcp --permanent
sudo firewall-cmd --reload

After that, open a browser and access the Kibana interface by pointing it to http://[IP or FQDN]:8080. You should see the interface shown below.


Click "Explore on my own".

Create an "index pattern". Thanks to Fluentd, a Logstash index should show up by default. Select it and create the pattern as follows.


Select @timestamp as the time filter field and click "Create index pattern".


After creating the index pattern, click the "Discover" icon, and you should see matching log entries as shown below.


Step 9: Enable Prometheus

MicroK8s comes with a built-in Prometheus add-on; you only need to enable it. The add-on ships Prometheus together with the amazing Grafana. It doesn't get better than that! Enable it as we did the others:

$ microk8s enable prometheus

Then check that the pods are being deployed:

NAMESPACE            NAME                                         READY   STATUS              RESTARTS   AGE
kube-system          metrics-server-8bbfb4bdb-mgddj               1/1     Running             4          42h
kube-system          dashboard-metrics-scraper-6c4568dc68-p7q6t   1/1     Running             4          42h
kube-system          kubernetes-dashboard-7ffd448895-ht2j2        1/1     Running             4          42h
kube-system          hostpath-provisioner-5c65fbdb4f-llsnl        1/1     Running             2          20h
container-registry   registry-9b57d9df8-6djrn                     1/1     Running             2          19h
kube-system          elasticsearch-logging-0                      1/1     Running             0          39m
kube-system          kibana-logging-7cf6dc4687-6b48m              1/1     Running             0          38m
kube-system          calico-node-zv994                            1/1     Running             4          42h
kube-system          calico-kube-controllers-847c8c99d-j84p5      1/1     Running             4          42h
kube-system          fluentd-es-v3.0.2-pkcjh                      1/1     Running             0          38m  
kube-system          coredns-86f78bb79c-58sbs                     1/1     Running             4          42h  
monitoring           kube-state-metrics-66b65b78bc-txzpm          0/3     ContainerCreating   0          2m45s <==
monitoring           node-exporter-dq4hv                          0/2     ContainerCreating   0          2m45s <==
monitoring           prometheus-adapter-557648f58c-bgtkw          0/1     ContainerCreating   0          2m44s <==
monitoring           prometheus-operator-5b7946f4d6-bdgqs         0/2     ContainerCreating   0          2m51s <==
monitoring           grafana-7c9bc466d8-g4vss                     1/1     Running             0          2m45s <==

Access the Prometheus web interface

Similar to how we accessed the previous web interfaces, we forward the internal service port to a free port on the server. As we confirmed in the previous command, Prometheus is deployed in the "monitoring" namespace. We can list all services in this namespace as follows:

$ microk8s kubectl get services -n monitoring

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
prometheus-operator     ClusterIP   None            <none>        8443/TCP                     7m41s
alertmanager-main       ClusterIP   10.152.183.34   <none>        9093/TCP                     7m36s
grafana                 ClusterIP   10.152.183.35   <none>        3000/TCP                     7m35s
kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP            7m35s
node-exporter           ClusterIP   None            <none>        9100/TCP                     7m35s
prometheus-adapter      ClusterIP   10.152.183.22   <none>        443/TCP                      7m34s
prometheus-k8s          ClusterIP   10.152.183.27   <none>        9090/TCP                     7m33s
alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   4m
prometheus-operated     ClusterIP   None            <none>        9090/TCP                     3m59s

Let's port-forward Prometheus and access it from a browser:

$ microk8s kubectl port-forward -n monitoring service/prometheus-k8s --address 0.0.0.0 9090:9090
Forwarding from 0.0.0.0:9090 -> 9090

Then point your browser to the server's IP address or FQDN on port 9090, that is, http://[IP or FQDN]:9090. As always, if a firewall is running on your CentOS 8 machine, allow the port. Since we will also forward Grafana on port 3000, add that port as well:

sudo firewall-cmd --add-port={9090,3000}/tcp --permanent
sudo firewall-cmd --reload


Access the Grafana web interface

In a similar way, as before, forward Grafana on port 3000.

$ microk8s kubectl port-forward -n monitoring service/grafana --address 0.0.0.0 3000:3000

Forwarding from 0.0.0.0:3000 -> 3000

Then point the browser to the server's IP address or FQDN on port 3000, that is, http://[IP or FQDN]:3000. You should see the beautiful Grafana login page shown below. The default username and password are "admin" and "admin". You will be prompted to change them immediately; enter a new password, submit, and you will be let in.


Enter new credentials


And you should be allowed in


For more information about MicroK8s, check the official MicroK8s website.

Conclusion

The setup journey was long and we hit a few challenges along the way, but we successfully deployed MicroK8s together with logging, monitoring, dashboards and the rest. We hope this guide was helpful to you; if you find any errors, please let us know. We are always honored by your unremitting support and highly appreciate it. Cheers to everyone who works tirelessly to create the tools used by developers and engineers around the world. For other guides similar to this one, the list below will help you.

Install Lens: the best Kubernetes dashboard and IDE

Use K3s to deploy a lightweight Kubernetes cluster in 5 minutes

Use kubeadm to install a Kubernetes cluster on Ubuntu

Use kubeadm to set up a Kubernetes cluster on CentOS 7
