How to send Kubernetes logs to external Elasticsearch

After setting up a Kubernetes cluster in the cloud or on premises, you will need an easy and flexible way to see what is happening inside it. One of the best ways is to examine the logs whenever you need to fix something or understand what happened at a specific time. While it is possible to log in to the cluster and check Pod or host logs directly, it quickly becomes troublesome to check each Pod one by one, especially when the cluster runs many Pods. To make it easier to check the status of the cluster from a single platform, we will deploy Elasticsearch and Kibana on an external server, then use Elastic's Beats (Filebeat, Metricbeat, etc.) to ship logs from the cluster to Elasticsearch. If you are already running an ELK stack, even better.

The figure below shows the architecture we will build in this guide: a 3-node Kubernetes cluster plus an Elasticsearch and Kibana server, which will receive logs from the cluster through the Filebeat and Metricbeat log collectors.

First, we will need an Elasticsearch server with Kibana installed alongside it. Logstash is optional, but you can install it if you need to filter the logs further. Please follow one of the guides below to install Elasticsearch and Kibana:

How to install ElasticSearch 7.x on CentOS 7

How to install Elasticsearch 7 on Debian

How to install Elasticsearch 7, 6, 5 on Ubuntu

On your Elasticsearch host, make sure it can be reached from outside. Edit the following settings in the configuration file (note that binding to 0.0.0.0 exposes Elasticsearch on all interfaces, so restrict access with a firewall or authentication):

$ sudo vim /etc/elasticsearch/elasticsearch.yml

# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#

Then allow the port on the firewall

sudo firewall-cmd --permanent --add-port=9200/tcp
sudo firewall-cmd --reload

Second, you must have a Kubernetes cluster, since that is where we will be collecting logs from. We have guides that can help you set one up quickly if you need to. They are shared below:

Use kubeadm to install a Kubernetes cluster on Ubuntu

Use kubeadm to install a Kubernetes cluster on CentOS 7

Use EKS to easily set up a Kubernetes cluster on AWS

Use Ansible and Kubespray to deploy a Kubernetes cluster

Use Rancher RKE to install a production Kubernetes cluster

When ready, we can continue to install Filebeat and Metricbeat pods in the cluster to start collecting logs and sending them to ELK. Make sure you can run kubectl commands in the Kubernetes cluster.

Step 1: Download the sample Filebeat and Metricbeat files

Log in to your Kubernetes master node and run the following commands to get the Filebeat and Metricbeat yaml files provided by Elastic.

cd ~
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/filebeat-kubernetes.yaml
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/metricbeat-kubernetes.yaml

Step 2: Edit the file to suit your environment

In these two files, we only need to change a few things. Under the ConfigMap, you will find the elasticsearch output shown below. Change the IP (192.168.10.123) and port (9200) to those of your Elasticsearch server.

    output.elasticsearch:
      hosts: ['${ELASTICSEARCH_HOST:192.168.10.123}:${ELASTICSEARCH_PORT:9200}']
      #username: ${ELASTICSEARCH_USERNAME}
      #password: ${ELASTICSEARCH_PASSWORD}

Under the DaemonSet in the same file, you will find the following configuration. Note that only the section to be changed is shown here. Edit the IP (192.168.10.123) and port (9200) so that they also match your Elasticsearch server. If you have configured a username and password for Elasticsearch, you can uncomment and set them in the commented lines shown.

        env:
        - name: ELASTICSEARCH_HOST
          value: "192.168.10.123"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        #- name: ELASTICSEARCH_USERNAME
        #  value: elastic
        #- name: ELASTICSEARCH_PASSWORD
        #  value: changeme
        - name: ELASTIC_CLOUD_ID
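
If you do enable authentication, hard-coding the password in the manifest is not ideal. A common alternative is to store it in a Kubernetes Secret and reference it from the env block. A minimal sketch, assuming a Secret named elasticsearch-credentials with a password key that you create yourself beforehand:

```yaml
        # Hypothetical alternative to the commented-out plaintext password:
        # read it from a Secret. Create the Secret first, e.g.:
        #   kubectl -n kube-system create secret generic elasticsearch-credentials \
        #     --from-literal=password=changeme
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elasticsearch-credentials
              key: password
```

This keeps credentials out of the YAML files you check into version control.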

Please note that if you wish to deploy the Filebeat and Metricbeat resources in another namespace, just edit every occurrence of "kube-system" in the manifests to the namespace of your choice.
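
Since the namespace appears in many places in each manifest (the ServiceAccount, ConfigMaps, DaemonSet, and RBAC bindings), a single substitution is less error-prone than editing by hand. A sketch using sed, with the target namespace "logging" as an assumption; it is demonstrated here on a tiny stand-in file, but in practice you would run the same sed line against filebeat-kubernetes.yaml and metricbeat-kubernetes.yaml:

```shell
# Stand-in manifest for illustration; substitute your real YAML files.
printf 'metadata:\n  name: filebeat\n  namespace: kube-system\n' > /tmp/demo-manifest.yaml

# Replace every kube-system namespace reference with "logging" (assumed name).
sed -i 's/namespace: kube-system/namespace: logging/g' /tmp/demo-manifest.yaml

grep 'namespace:' /tmp/demo-manifest.yaml
```

Remember to create the namespace first (kubectl create namespace logging) before applying the edited manifests.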

Under the DaemonSet spec, you can change the version of Filebeat and Metricbeat to deploy by editing the image shown in the following snippet (docker.elastic.co/beats/metricbeat:7.9.0). I will use version 7.9.0.

###For Metricbeat####
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.9.0

Do the same in the Filebeat YAML file if you also want to change its version.

###For Filebeat####
    spec:
      serviceAccountName: filebeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.9.0

Important note

If you wish to deploy the Beats on the master node as well, you will have to add a toleration. An example for Metricbeat is shown below. This is not the entire DaemonSet configuration, only the part that interests us; you can keep the other parts intact. Add the toleration under the Pod spec as shown in the configuration below. Depending on your needs, the same method can be used in the Filebeat configuration.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: kube-system
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
###PART TO EDIT###
      # This toleration is to have the daemonset runnable on master nodes
      # Remove it if your masters can't run pods
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
####END OF EDIT###
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.9.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",

Step 3: Deploy to Kubernetes

With all the edits completed and Elasticsearch reachable from your Kubernetes cluster, it is time to deploy our Beats. Log in to your master node and run the following commands:

kubectl apply -f metricbeat-kubernetes.yaml
kubectl apply -f filebeat-kubernetes.yaml

After a few moments, confirm that the Pods have been deployed and are running successfully.

$ kubectl get pods -n kube-system

NAME                                             READY   STATUS    RESTARTS   AGE
calico-kube-controllers-c9784d67d-k85hf          1/1     Running   5          11d
calico-node-brjnk                                1/1     Running   7          10d
calico-node-nx869                                1/1     Running   1          10d
calico-node-whlzf                                1/1     Running   6          11d
coredns-f9fd979d6-6vztd                          1/1     Running   5          11d
coredns-f9fd979d6-8gz4l                          1/1     Running   5          11d
etcd-kmaster.diab.mfs.co.ke                      1/1     Running   5          11d
filebeat-hlzhc                                   1/1     Running   7          7d23h <==
filebeat-mcs67                                   1/1     Running   1          7d23h <==
kube-apiserver-kmaster.diab.mfs.co.ke            1/1     Running   5          11d
kube-controller-manager-kmaster.diab.mfs.co.ke   1/1     Running   5          11d
kube-proxy-nlrbv                                 1/1     Running   5          11d
kube-proxy-zdcbg                                 1/1     Running   1          10d
kube-proxy-zvf6c                                 1/1     Running   7          10d
kube-scheduler-kmaster.diab.mfs.co.ke            1/1     Running   5          11d
metricbeat-5fw98                                 1/1     Running   7          8d  <==
metricbeat-5zw9b                                 1/1     Running   0          8d  <==
metricbeat-jbppx                                 1/1     Running   1          8d  <==

Step 4: Create an index pattern in Kibana

Once our Pods are running, they will immediately start shipping logs and metrics to Elasticsearch, which creates the corresponding indices. Log in to Kibana and click "Stack Management" > "Index Management"; you should be able to see your indices.


Click "Index management"


And there are our indices.


To create an index pattern, click "Index Patterns", then click "Create index pattern".


On the next page, type an index pattern name that matches your filebeat or metricbeat indices, and they should show as matched.


Define the pattern and click "Next step".


Select "@timestamp" in the drop-down menu, then select "Create index mode"


Step 5: Discover your data

After creating the index pattern, click "Discover",


Then select the index pattern we created.


Conclusion

With these lightweight Beats in place, we can collect logs and metrics from the Kubernetes cluster and send them to an external Elasticsearch for indexing and flexible searching.
