How to send OpenShift logs and events to Splunk


As a cluster administrator, you will likely want to aggregate all logs from the OpenShift Container Platform cluster, such as node system logs, application container logs, and so on. In this article, we will deploy the cluster logging Pods and the other resources required to send logs, events, and cluster metrics to Splunk.

We will use Splunk Connect for Kubernetes, which provides a way to import and search OpenShift or Kubernetes log, object, and metric data in Splunk. Splunk Connect for Kubernetes builds on and supports multiple CNCF components to get data into Splunk.

Set up requirements

For this setup, you need the following items.

  • An OpenShift cluster with the oc command-line tool configured. Administrator rights are required.
  • Splunk Enterprise 7.0 or higher
  • Helm installed on your workstation
  • At least two Splunk indexes
  • An HEC (HTTP Event Collector) token used to authenticate event data

This setup creates three workload types on OpenShift.

  1. A Deployment to collect changes to OpenShift objects.
  2. A DaemonSet on each OpenShift node to collect metrics.
  3. A DaemonSet on each OpenShift node to collect logs.

The actual implementation will be shown in the figure below.

Step 1: Create Splunk indexes

You will need at least two indexes for this deployment. One is for logs and events, and the other is for metrics.

Log in to Splunk as an administrator user.

Create the index for events and logs. Its input data type should be Events.

For the metrics index, the input data type should be Metrics.

Confirm that the indexes are available.
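The indexes can also be created from the Splunk CLI on the Splunk server. This is a sketch assuming shell access to the Splunk host; the index names os_logs and os_metrics are examples, so substitute your own. The -datatype metric flag marks the second index as a metrics index:

$ $SPLUNK_HOME/bin/splunk add index os_logs
$ $SPLUNK_HOME/bin/splunk add index os_metrics -datatype metric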

Step 2: Create Splunk HEC credentials

The HTTP Event Collector (HEC) allows you to send data and application events to Splunk deployments via HTTP and secure HTTP (HTTPS) protocols. Since HEC uses a token-based authentication model, we need to generate new tokens.
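Once a token has been created (Step 2 below), you can confirm the collector accepts it by posting a test event with curl. This is a sketch: 8088 is the default HEC port, the <splunk-ip> and <hec-token> placeholders must be replaced with your own values, and -k skips TLS verification (lab use only):

$ curl -k https://<splunk-ip>:8088/services/collector/event \
    -H "Authorization: Splunk <hec-token>" \
    -d '{"event": "HEC connectivity test"}'

A working endpoint responds with {"text":"Success","code":0}.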

This is configured under the Data Inputs settings section.

Select “HTTP Event Collector”, then fill in a name and click Next.

On the next page, allow the token to write to the two indexes we created.

Review and submit the settings.

Step 3: Install Helm

If your workstation or bastion server does not have Helm installed, check the guide linked below.

Install and use Helm 3 on a Kubernetes cluster

You can verify the installation by checking the Helm version.

$ helm version
version.BuildInfo{Version:"v3.4.0", GitCommit:"7090a89efc8a18f3d8178bf47d2462450349a004", GitTreeState:"clean", GoVersion:"go1.14.10"}

Step 4: Deploy Splunk Connect for Kubernetes

Create a namespace (project) for Splunk Connect.

$ oc new-project splunk-hec-logging

After creation, this should become your current project. You can also switch to it at any time.

$ oc project splunk-hec-logging

Create a values YAML file for the installation.

$ vim ocp-splunk-hec-values.yaml

Mine has been modified to resemble the following.

global:
  logLevel: info
  journalLogPath: /run/log/journal
  splunk:
    hec:
      host: <splunk-ip> # Set Splunk IP address
      port: <splunk-hec-port> # Set Splunk HEC port
      protocol: http
      token: <hec-token> # Hec token created
      insecureSSL: true
      indexName: <indexname> # default index if others not set
  kubernetes:
    clusterName: "<clustername>"
    openshift: true
splunk-kubernetes-metrics:
  enabled: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <metrics-indexname>
  kubernetes:
    openshift: true
splunk-kubernetes-logging:
  enabled: true
  logLevel: debug
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName: <logging-indexname>
  containers:
    logFormatType: cri
  logs:
    kube-audit:
      from:
        file:
          path: /var/log/kube-apiserver/audit.log
splunk-kubernetes-objects:
  enabled: true
  kubernetes:
    openshift: true
  splunk:
    hec:
      host: <splunk-ip>
      port: <splunk-hec-port>
      protocol: <hec-protocol>
      token: <hec-token>
      insecureSSL: true
      indexName:  <objects-indexname>

Fill in the values accordingly and start the deployment. Get the URL of the latest release before installing.

helm install splunk-kubernetes-logging -f ocp-splunk-hec-values.yaml https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/1.4.3/splunk-connect-for-kubernetes-1.4.3.tgz

Deployment output:

NAME: splunk-kubernetes-logging
LAST DEPLOYED: Thu Oct 22 22:22:51 2020
NAMESPACE: splunk-logging
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
███████╗██████╗ ██╗     ██╗   ██╗███╗   ██╗██╗  ██╗██╗
██╔════╝██╔══██╗██║     ██║   ██║████╗  ██║██║ ██╔╝╚██╗
███████╗██████╔╝██║     ██║   ██║██╔██╗ ██║█████╔╝  ╚██╗
╚════██║██╔═══╝ ██║     ██║   ██║██║╚██╗██║██╔═██╗  ██╔╝
███████║██║     ███████╗╚██████╔╝██║ ╚████║██║  ██╗██╔╝
╚══════╝╚═╝     ╚══════╝ ╚═════╝ ╚═╝  ╚═══╝╚═╝  ╚═╝╚═╝

Listen to your data.

Splunk Connect for Kubernetes is spinning up in your cluster.
After a few minutes, you should see data being indexed in your Splunk.

If you get stuck, we're here to help.
Look for answers here: http://docs.splunk.com
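If you change anything in ocp-splunk-hec-values.yaml later, you can roll the changes out with helm upgrade against the same release name and chart URL (a sketch, assuming the 1.4.3 chart used above):

$ helm upgrade splunk-kubernetes-logging -f ocp-splunk-hec-values.yaml https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/1.4.3/splunk-connect-for-kubernetes-1.4.3.tgz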

Check the running pods:

$ oc get pods
NAME                                                              READY   STATUS    RESTARTS   AGE
splunk-kubernetes-logging-splunk-kubernetes-metrics-4bvkp         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-4skrm         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-55f8t         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-7xj2n         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-8r2vj         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-agg-5bppqqn   1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-f8psk         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-fp88w         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-s45wx         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-metrics-xtq5g         1/1     Running   0          48s
splunk-kubernetes-logging-splunk-kubernetes-objects-b4f8f4m67vg   1/1     Running   0          48s

Assign the privileged SCC to the service accounts:

for sa in $(oc get sa --no-headers | grep splunk | awk '{ print $1 }'); do
  oc adm policy add-scc-to-user privileged -z $sa
done
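Pods that were already running before the SCC was granted will not pick up the new permissions until they are recreated. A sketch of restarting them by deleting all pods in the project (the Deployment and DaemonSet controllers recreate them automatically):

$ oc delete pods --all -n splunk-hec-logging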

Log in to Splunk and check whether logs, events, and metrics are being received.
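In the Splunk search UI, a quick query against each index confirms data is arriving. These are sketches; substitute the index names you created:

index=<logging-indexname> | head 10
| mcatalog values(metric_name) WHERE index=<metrics-indexname>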

This may not be the way Red Hat recommends storing OpenShift events and logs. For more detailed information about cluster logs, please refer to the OpenShift documentation.

More articles about OpenShift:

Grant users access to projects/namespaces in OpenShift

Configure Chrony NTP service on OpenShift 4.x / OKD 4.x

How to install Istio Service Mesh on OpenShift 4.x

