Use Splunk forwarder to send logs to Splunk on Kubernetes


Logging is a useful mechanism for application developers and cluster administrators; it helps them monitor and troubleshoot application problems. By default, containerized applications write to standard output, and these logs are kept in local, ephemeral storage. Once a container is removed, its logs are lost. To solve this problem, logs are often written to persistent storage, from where they can be routed to centralized logging systems such as Splunk and Elasticsearch.

In this blog, we will look at using the Splunk universal forwarder to send data to Splunk. The universal forwarder contains only the essential tools needed to forward data and is designed to run with minimal CPU and memory. It can therefore easily be deployed as a sidecar container in a Kubernetes cluster. The universal forwarder's configuration determines what data is sent and where it is sent to. Once the data is forwarded to the Splunk indexers, it can be searched.

The following figure shows the high-level architecture of how Splunk works:

Benefits of using the Splunk universal forwarder

  • It can aggregate data from different input types.
  • It supports automatic load balancing, buffering data when necessary and sending it to an available indexer, which increases resilience.
  • It can be managed remotely through a deployment server; all management activities can be done remotely.
  • The Splunk universal forwarder provides a reliable and secure data collection process.
  • The Splunk universal forwarder scales very flexibly.
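As a sketch of how automatic load balancing is configured, a tcpout group in outputs.conf can list several indexers; the forwarder then distributes data across them and buffers when one is unavailable. The host names and port below are placeholders:

```
[tcpout:splunk-uat]
server = indexer1.example.com:9997, indexer2.example.com:9997
autoLB = true
useACK = true
```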


Before we continue, the following requirements need to be met:

  1. A running Kubernetes or OpenShift container platform cluster
  2. The kubectl or oc command line tool installed on your workstation, with administrative rights
  3. A working Splunk cluster with two or more indexers

Step 1: Create a persistent volume

If a persistent volume does not yet exist, we will deploy one first. The following configuration file uses the storage class cephfs; you will need to change the configuration to match your environment. The following guides can be used to set up a Ceph cluster and deploy the storage class:

  • Install Ceph 15 (Octopus) storage cluster on Ubuntu
  • Ceph persistent storage for Kubernetes using Cephfs

Create a PersistentVolumeClaim manifest:

                        $ vim pvc_claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi


Apply the PersistentVolumeClaim:

                        $ kubectl apply -f pvc_claim.yaml

View the PersistentVolumeClaim:

                        $ kubectl get pvc cephfs-claim
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-claim     Bound    pvc-19c8b186-699b-456e-afdc-bcbaba633c98   1Gi       RWX            cephfs          3s

Step 2: Deploy the application and mount the persistent volume

Next, we will deploy our application. Note that we mount the path “/usr/share/nginx/html” on the persistent volume. This is the data we need to keep.

                        $ vim nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: cephfs-claim
  containers:
    - name: nginx-app
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage


Step 3: Create a ConfigMap

Then, we will deploy the ConfigMap that will be used by our container. The ConfigMap has two key configurations:

  • inputs.conf : Defines what data is forwarded.
  • outputs.conf : Defines where the data is forwarded to.

You will need to change the configmap configuration to suit your needs.

                        $ vim configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: configs
data:
  outputs.conf: |-
    [indexAndForward]
    index = false

    [tcpout]
    defaultGroup = splunk-uat
    forwardedindex.filter.disable = true
    indexAndForward = false

    [tcpout:splunk-uat]
    # Splunk indexer IP and Port
    server =
    useACK = true
    autoLB = true

  inputs.conf: |-
    # Where data is read from
    [monitor:///var/log/]
    disabled = false
    sourcetype = log
    # This index should already be created on the splunk environment
    index = sfc_microservices_uat
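inputs.conf can also monitor a specific file rather than a whole directory. As an illustration, the path and sourcetype below are placeholders, not values from this setup:

```
[monitor:///var/log/nginx/access.log]
disabled = false
sourcetype = nginx:access
index = sfc_microservices_uat
```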

Apply the ConfigMap:

                        $ kubectl apply -f configmap.yaml

Step 4: Deploy the Splunk universal forwarder

Finally, we will deploy an init container together with the Splunk universal forwarder container. The init container copies the ConfigMap contents into the Splunk universal forwarder container.

                        $ vim splunk_forwarder.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunkforwarder
  labels:
    app: splunkforwarder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: splunkforwarder
  template:
    metadata:
      labels:
        app: splunkforwarder
    spec:
      initContainers:
        - name: volume-permissions
          image: busybox
          imagePullPolicy: IfNotPresent
          command: ['sh', '-c', 'cp /configs/* /opt/splunkforwarder/etc/system/local/']
          volumeMounts:
            - mountPath: /configs
              name: configs
            - name: confs
              mountPath: /opt/splunkforwarder/etc/system/local
      containers:
        - name: splunk-uf
          image: splunk/universalforwarder:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: SPLUNK_START_ARGS
              value: --accept-license
            - name: SPLUNK_PASSWORD
              value: "*****"
            - name: SPLUNK_USER
              value: splunk
            - name: SPLUNK_CMD
              value: add monitor /var/log/
          volumeMounts:
            - name: container-logs
              mountPath: /var/log
            - name: confs
              mountPath: /opt/splunkforwarder/etc/system/local
      volumes:
        - name: container-logs
          persistentVolumeClaim:
            claimName: cephfs-claim
        - name: confs
          emptyDir: {}
        - name: configs
          configMap:
            name: configs
            defaultMode: 0777
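The init container's only job is to copy the ConfigMap files into the directory the forwarder reads its configuration from. That copy step can be sketched locally like this, with the /tmp paths standing in for the /configs and /opt/splunkforwarder/etc/system/local mounts:

```shell
# Stand-in directories for the configMap volume and the forwarder's config dir
mkdir -p /tmp/configs /tmp/local
# A minimal outputs.conf, as the configMap would provide it
printf '[tcpout]\nautoLB = true\n' > /tmp/configs/outputs.conf
# The same copy the init container runs
cp /tmp/configs/* /tmp/local/
cat /tmp/local/outputs.conf
```

The confs emptyDir volume is what lets the two containers share the copied files.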

Deploy the forwarder:

                        $ kubectl apply -f splunk_forwarder.yaml

Verify that the Splunk universal forwarder pods are running:

                        $ kubectl get pods | grep splunkforwarder
splunkforwarder-6877ffd464-l5bvh                  1/1     Running   0       30s
splunkforwarder-6877ffd464-ltbdr                  1/1     Running   0       31s

Step 5: Check whether logs are written to Splunk

Log in to Splunk and perform a search to verify that logs are coming in.

You should be able to see your logs.
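As a starting point, a search scoped to the index used in inputs.conf should show the forwarded events (adjust the index name to your own environment):

```
index=sfc_microservices_uat sourcetype=log
```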

Related guidelines:

How to send OpenShift logs and events to Splunk

How to stream logs in AWS from CloudWatch to ElasticSearch

How to ship Kubernetes logs to external Elasticsearch

