Use Splunk Universal Forwarder to send logs to Splunk on Kubernetes


Logging is a useful mechanism for both application developers and cluster administrators. It helps them monitor and troubleshoot application problems. By default, containerized applications write to standard output, and these logs are kept in local, ephemeral storage; once the containers are destroyed, the logs are lost. To solve this problem, logs are often written to persistent storage, from where they can be routed to central logging systems such as Splunk and Elasticsearch.

In this blog, we will look at using the Splunk Universal Forwarder to send data to Splunk. The Universal Forwarder contains only the essential tools needed to forward data, and it is designed to run with minimal CPU and memory. It can therefore easily be deployed as a sidecar container in a Kubernetes cluster. The Universal Forwarder's configuration determines what data to send and where to send it. After the data is forwarded to the Splunk indexer, it can be searched.

The following figure shows the high-level architecture of how Splunk works:

Benefits of using Splunk Universal Forwarder

  • It can aggregate data from different input types.
  • It supports automatic load balancing. By buffering data when necessary and sending it to an available indexer, resilience is increased (see the sketch after this list).
  • It can be managed remotely through a deployment server, so all management activities can be performed from a central location.
  • The Splunk Universal Forwarder provides a reliable and secure data collection process.
  • The Splunk Universal Forwarder scales flexibly.
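
As an illustration of the load-balancing behavior, here is a minimal sketch of an outputs.conf stanza that spreads events across two indexers. The second indexer address (172.29.127.3) is a placeholder for this example:

[tcpout:splunk-uat]
# The forwarder automatically load-balances across all servers in this list
server = 172.29.127.2:9997,172.29.127.3:9997
autoLB = true
useACK = true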

Prerequisites

Before we continue, the following requirements need to be met:

  1. A running Kubernetes or OpenShift Container Platform cluster
  2. The kubectl or oc command-line tool installed on your workstation, with administrative rights (you can verify this as shown after this list)
  3. A working Splunk cluster with two or more indexers
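
You can quickly confirm cluster access and permissions before proceeding (use oc in place of kubectl on OpenShift):

$ kubectl cluster-info
$ kubectl auth can-i create deployments
yes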

Step 1: Create a persistent volume

If the persistent volume does not yet exist, we will deploy it first. The following configuration file uses the cephfs storage class, so you will need to adjust the configuration for your environment. The following guides can be used to set up a Ceph cluster and deploy the storage class:

  • Install Ceph 15 (Octopus) storage cluster on Ubuntu
  • Ceph persistent storage for Kubernetes using Cephfs

Create a PersistentVolumeClaim:

$ vim pvc_claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:
    - ReadWriteMany  # RWX, so the volume can be mounted by the nginx pod and both forwarder replicas
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi

Apply the configuration to create the PersistentVolumeClaim:

$ kubectl apply -f pvc_claim.yaml

View PersistentVolumeClaim:

$ kubectl get pvc cephfs-claim
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-claim     Bound    pvc-19c8b186-699b-456e-afdc-bcbaba633c98   1Gi       RWX            cephfs          3s

Step 2: Deploy the application and mount the persistent volume

Next, we will deploy our application. Note that we mount the persistent volume at the path “/usr/share/nginx/html”. This is the data we need to persist.

$ vim nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: cephfs-claim
  containers:
    - name: nginx-app
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
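
Deploy the pod:

$ kubectl apply -f nginx.yaml

Optionally, write a test page into the mounted volume to confirm that the persistent volume works (the file content here is just an example):

$ kubectl exec nginx -- sh -c 'echo "hello from nginx" > /usr/share/nginx/html/index.html'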

Step 3: Create a ConfigMap

Then, we will deploy the ConfigMap that will be used by our container. The ConfigMap holds two key configuration files:

  • inputs.conf: defines what data the forwarder collects and where it reads it from.
  • outputs.conf: defines where the forwarder sends the data.

You will need to change the configmap configuration to suit your needs.

$ vim configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: configs
data:
  outputs.conf: |-
    [indexAndForward]
    index = false

    [tcpout]
    defaultGroup = splunk-uat
    forwardedindex.filter.disable = true
    indexAndForward = false

    [tcpout:splunk-uat]
    # Splunk indexer IP and port
    server = 172.29.127.2:9997
    useACK = true
    autoLB = true

  inputs.conf: |-
    [monitor:///var/log/*.log]
    # Where data is read from
    disabled = false
    sourcetype = log
    # This index should already be created in the Splunk environment
    index = sfc_microservices_uat

Apply the ConfigMap:

$ kubectl apply -f configmap.yaml
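
You can confirm that both configuration files are present in the ConfigMap before mounting it:

$ kubectl describe configmap configs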

Step 4: Deploy the Splunk Universal Forwarder

Finally, we will deploy an init container together with the Splunk Universal Forwarder container. The init container copies the configuration content from the ConfigMap into the Splunk Universal Forwarder container.

$ vim splunk_forwarder.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunkforwarder
  labels:
    app: splunkforwarder
spec:
  replicas: 2
  selector:
    matchLabels:
      app: splunkforwarder
  template:
    metadata:
      labels:
        app: splunkforwarder
    spec:
      initContainers:
       - name: volume-permissions
         image: busybox
         imagePullPolicy: IfNotPresent
         command: ['sh', '-c', 'cp /configs/* /opt/splunkforwarder/etc/system/local/']
         volumeMounts:
         - mountPath: /configs
           name: configs
         - name: confs
           mountPath: /opt/splunkforwarder/etc/system/local
      containers:
       - name: splunk-uf
         image: splunk/universalforwarder:latest
         imagePullPolicy: IfNotPresent
         env:
         - name: SPLUNK_START_ARGS
           value: --accept-license
         - name: SPLUNK_PASSWORD
            value: "*****"  # replace with your actual Splunk password
         - name: SPLUNK_USER
           value: splunk
         - name: SPLUNK_CMD
           value: add monitor /var/log/
         volumeMounts:
         - name: container-logs
           mountPath: /var/log
         - name: confs
           mountPath: /opt/splunkforwarder/etc/system/local
      volumes:
       - name: container-logs
         persistentVolumeClaim:
            claimName: cephfs-claim
       - name: confs
         emptyDir: {}
       - name: configs
         configMap:
           name: configs
           defaultMode: 0777

Deploy the containers:

$ kubectl apply -f splunk_forwarder.yaml

Verify that the Splunk Universal Forwarder pods are running:

$ kubectl get pods | grep splunkforwarder
splunkforwarder-6877ffd464-l5bvh                  1/1     Running   0       30s
splunkforwarder-6877ffd464-ltbdr                  1/1     Running   0       31s
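
You can also ask the forwarder itself whether it has connected to the indexer. This is a sketch using one of the pod names above; replace the credentials with the ones set in the deployment. A healthy connection lists the indexer under “Active forwards”:

$ kubectl exec -it splunkforwarder-6877ffd464-l5bvh -- /opt/splunkforwarder/bin/splunk list forward-server -auth admin:<password>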

Step 5: Check whether logs are written to Splunk

Log in to Splunk and perform a search to verify that logs are coming in.

You should be able to see your logs.
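
For example, a basic SPL search against the index configured in inputs.conf should return the forwarded events (the result count here is arbitrary):

index=sfc_microservices_uat sourcetype=log | head 10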

Related guides:

How to send OpenShift logs and events to Splunk

How to stream logs in AWS from CloudWatch to ElasticSearch

How to ship Kubernetes logs to external Elasticsearch
