Use Active Directory to authenticate Kubernetes dashboard users
The Kubernetes dashboard is a web-based user interface that allows users to easily interact with a Kubernetes cluster. It lets users manage, monitor, and troubleshoot applications and clusters. We have previously covered how to deploy the dashboard. In this guide, we will explore integrating the Kubernetes dashboard with Active Directory to simplify user and password management.
Kubernetes supports two types of users:
- Service account: this is the default method supported by Kubernetes. You use a service account token to access the dashboard.
- Normal user: any other authentication method configured in the cluster.
For this, we will use a program called Dex. Dex is an OpenID Connect provider created by CoreOS. It handles the translation between Kubernetes tokens and Active Directory users.
Setup requirements:
- You need the IP of the Active Directory server on your network. In my case, the IP will be 172.16.16.16
- You will also need a working Kubernetes cluster. The nodes of the cluster must be able to reach the Active Directory IP. Take a look at how to use kubeadm or RKE to create a Kubernetes cluster if you don't already have one.
- You will also need a domain name that supports wildcard DNS entries. I will use the wildcard DNS record "*.kubernetes.mydomain.com" to route external traffic to my Kubernetes cluster (see the quick check below).
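Before proceeding, you can confirm that the wildcard record resolves. A minimal check with dig; the hostname here is arbitrary, and any name under the wildcard should return the IP of your ingress or load balancer:
$ dig +short test.kubernetes.mydomain.com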
Step 1: Deploy Dex on the Kubernetes cluster
We first need to create a namespace and a service account for Dex. We will then configure RBAC rules for the Dex service account before deploying it, to ensure the application has the appropriate permissions.
1. Create a dex-namespace.yaml file.
$ vim dex-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: auth-system
2. Create a namespace for Dex.
$ kubectl apply -f dex-namespace.yaml
3. Create a dex-rbac.yaml file.
$ vim dex-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex
  namespace: auth-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: dex
rules:
- apiGroups: ["dex.coreos.com"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dex
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dex
subjects:
- kind: ServiceAccount
  name: dex
  namespace: auth-system
4. Create Dex permissions.
$ kubectl apply -f dex-rbac.yaml
5. Create a dex-configmap.yaml file. Make sure to modify the issuer URL, redirect URIs, client secret, and Active Directory configuration accordingly.
$ vim dex-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dex
  namespace: auth-system
data:
  config.yaml: |
    issuer: https://auth.kubernetes.mydomain.com/
    web:
      http: 0.0.0.0:5556
    frontend:
      theme: custom
    telemetry:
      http: 0.0.0.0:5558
    staticClients:
    - id: oidc-auth-client
      redirectURIs:
      - https://kubectl.kubernetes.mydomain.com/callback
      - http://dashtest.kubernetes.mydomain.com/oauth2/callback
      name: oidc-auth-client
      secret: secret
    connectors:
    - type: ldap
      id: ldap
      name: LDAP
      config:
        host: 172.16.16.16:389
        insecureNoSSL: true
        insecureSkipVerify: true
        bindDN: ldapadmin
        bindPW: 'KJZOBwS9DtB'
        userSearch:
          baseDN: OU=computingforgeeks departments,DC=computingforgeeks,DC=net
          username: sAMAccountName
          idAttr: sn
          nameAttr: givenName
          emailAttr: mail
        groupSearch:
          baseDN: CN=groups,OU=computingforgeeks,DC=computingforgeeks,DC=net
          userMatchers:
          - userAttr: sAMAccountName
            groupAttr: memberOf
          nameAttr: givenName
    oauth2:
      skipApprovalScreen: true
    storage:
      type: kubernetes
      config:
        inCluster: true
6. Apply the Dex configuration.
$ kubectl apply -f dex-configmap.yaml
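Before relying on Dex to perform the LDAP binds, it can be worth validating the bind credentials and search base directly. A sketch with ldapsearch, reusing the values from the ConfigMap above; the sAMAccountName value mkemei is a placeholder for one of your own users:
$ ldapsearch -x -H ldap://172.16.16.16:389 \
    -D "ldapadmin" -w 'KJZOBwS9DtB' \
    -b "OU=computingforgeeks departments,DC=computingforgeeks,DC=net" \
    "(sAMAccountName=mkemei)" mail givenName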
7. Create the dex-deployment.yaml file.
$ vim dex-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: dex
  name: dex
  namespace: auth-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dex
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: dex
        revision: "1"
    spec:
      containers:
      - command:
        - /usr/local/bin/dex
        - serve
        - /etc/dex/cfg/config.yaml
        image: quay.io/dexidp/dex:v2.17.0
        imagePullPolicy: IfNotPresent
        name: dex
        ports:
        - containerPort: 5556
          name: http
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/dex/cfg
          name: config
        - mountPath: /web/themes/custom/
          name: theme
      dnsPolicy: ClusterFirst
      serviceAccountName: dex
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: config.yaml
            path: config.yaml
          name: dex
        name: config
      - name: theme
        emptyDir: {}
8. Deploy Dex.
$ kubectl apply -f dex-deployment.yaml
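Before creating the service, confirm that the Dex pod is up and serving. The label app=dex below matches the labels set in the deployment above:
$ kubectl -n auth-system get pods -l app=dex
$ kubectl -n auth-system logs -l app=dex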
9. Create a dex-service.yaml file.
$ vim dex-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: dex
  namespace: auth-system
spec:
  selector:
    app: dex
  ports:
  - name: dex
    port: 5556
    protocol: TCP
    targetPort: 5556
10. Create a service for the Dex deployment.
$ kubectl apply -f dex-service.yaml
11. Create a dex-ingress secret. Ensure that the certificate data for the cluster is in the specified location, or change this path to point to it. If you have cert-manager installed in your cluster, you can skip this step.
$ kubectl create secret tls dex --key /data/Certs/kubernetes.mydomain.com.key --cert /data/Certs/kubernetes.mydomain.com.crt -n auth-system
12. Create a dex-ingress.yaml file. Change the host parameters and certificate issuer name accordingly.
$ vim dex-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dex
  namespace: auth-system
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: dex
    hosts:
    - auth.kubernetes.mydomain.com
  rules:
  - host: auth.kubernetes.mydomain.com
    http:
      paths:
      - backend:
          serviceName: dex
          servicePort: 5556
13. Create an ingress for the Dex service.
$ kubectl apply -f dex-ingress.yaml
Wait a few minutes for cert-manager to generate a certificate for Dex. You can then check whether Dex has been deployed correctly by browsing to: https://auth.kubernetes.mydomain.com/.well-known/openid-configuration
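The same check can be done from the command line. For example, with curl; the issuer value in the JSON response should match the issuer set in the Dex ConfigMap:
$ curl -s https://auth.kubernetes.mydomain.com/.well-known/openid-configuration | grep issuer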
Step 2: Configure the Kubernetes API to access Dex as the OpenID Connect provider
Next, we will look at how to configure the API server for RKE and kubeadm clusters. To enable the OIDC plugin, we need to configure several flags on the API server, as shown below:
A. RKE cluster
1. SSH to your RKE node.
$ ssh [email protected]
2. Edit the Kubernetes API configuration. Add OIDC parameters and modify the issuer URL accordingly.
$ sudo vim ~/Rancher/cluster.yml
kube-api:
  service_cluster_ip_range: 10.43.0.0/16
  # Expose a different port range for NodePort services
  service_node_port_range: 30000-32767
  extra_args:
    # Enable audit log to stdout
    audit-log-path: "-"
    # Increase number of delete workers
    delete-collection-workers: 3
    # Set the level of log output to debug-level
    v: 4
    # ADD THE FOLLOWING LINES
    oidc-issuer-url: https://auth.kubernetes.mydomain.com/
    oidc-client-id: oidc-auth-client
    oidc-ca-file: /data/Certs/kubernetes.mydomain.com.crt
    oidc-username-claim: email
    oidc-groups-claim: groups
  extra_binds:
  - /data/Certs:/data/Certs # ENSURE THE WILDCARD CERTIFICATES ARE PRESENT IN THIS PATH ON ALL MASTER NODES
3. Run rke up; the Kubernetes API server will restart itself.
$ rke up
B. Kubeadm cluster
1. SSH to your node.
$ ssh [email protected]
2. Edit the Kubernetes API configuration. Add OIDC parameters and modify the issuer URL accordingly.
$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
    command:
    - /hyperkube
    - apiserver
    - --advertise-address=10.10.40.30
    # ADD THE FOLLOWING LINES:
    ...
    - --oidc-issuer-url=https://auth.kubernetes.mydomain.com/
    - --oidc-client-id=oidc-auth-client
    # ENSURE THE WILDCARD CERTIFICATES ARE PRESENT IN THIS PATH ON ALL MASTER NODES:
    - --oidc-ca-file=/etc/ssl/kubernetes/kubernetes.mydomain.com.crt
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
...
3. The Kubernetes API will restart itself.
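To confirm the new flags were picked up after the restart, you can inspect the running API server pod. A minimal check, assuming the kubeadm-managed static pod carries the usual component=kube-apiserver label:
$ kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep oidc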
Step 3: Deploy the OAuth2 proxy and configure the Kubernetes dashboard ingress
1. Generate a secret for the OAuth2 proxy.
$ python3 -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(16)).decode())'
2. Copy the generated secret and use it as the OAUTH2_PROXY_COOKIE_SECRET value in the next step.
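If Python is not available on your workstation, an equivalent 16-byte secret can be generated with OpenSSL instead:
$ openssl rand -base64 16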
3. Create an oauth2-proxy-deployment.yaml file. Modify the OIDC client key, OIDC issuer URL, and Oauth2 proxy cookie key accordingly.
$ vim oauth2-proxy-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: auth-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --cookie-secure=false
        - --provider=oidc
        - --client-id=oidc-auth-client
        - --client-secret=***********
        - --oidc-issuer-url=https://auth.kubernetes.mydomain.com/
        - --http-address=0.0.0.0:8080
        - --upstream=file:///dev/null
        - --email-domain=*
        - --set-authorization-header=true
        env:
        # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: ***********
        image: sguyennet/oauth2-proxy:header-2.2
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 8080
          protocol: TCP
4. Deploy the OAuth2 proxy.
$ kubectl apply -f oauth2-proxy-deployment.yaml
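As with Dex, it helps to confirm the proxy started cleanly before wiring up the ingress. The label k8s-app=oauth2-proxy matches the deployment above:
$ kubectl -n auth-system get pods -l k8s-app=oauth2-proxy
$ kubectl -n auth-system logs -l k8s-app=oauth2-proxy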
5. Create an oauth2-proxy-service.yaml file.
$ vim oauth2-proxy-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: auth-system
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    k8s-app: oauth2-proxy
6. Create a service for the OAuth2 proxy deployment.
$ kubectl apply -f oauth2-proxy-service.yaml
7. Create a dashboard-ingress.yaml file. Modify the dashboard URL and host parameters accordingly.
$ vim dashboard-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://dashboard.kubernetes.mydomain.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://dashboard.kubernetes.mydomain.com/oauth2/start?rd=https://$host$request_uri$is_args$args"
    nginx.ingress.kubernetes.io/secure-backends: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      auth_request_set $token $upstream_http_authorization;
      proxy_set_header Authorization $token;
spec:
  rules:
  - host: dashboard.kubernetes.mydomain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
8. Create an ingress for the dashboard service.
$ kubectl apply -f dashboard-ingress.yaml
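You can verify that the auth annotations are in effect by requesting the dashboard without a session: an unauthenticated request should be redirected to the OAuth2 sign-in URL configured above. A quick check:
$ curl -skI https://dashboard.kubernetes.mydomain.com/ | grep -i location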
9. Create a kubernetes-dashboard-external-tls secret. Ensure that the certificate data for the cluster is in the specified location, or change this path to point to it. If you are using cert-manager, skip this step.
$ kubectl create secret tls kubernetes-dashboard-external-tls --key /data/Certs/kubernetes.mydomain.com.key --cert /data/Certs/kubernetes.mydomain.com.crt -n auth-system
10. Create an oauth2-proxy-ingress.yaml file. Modify the certificate manager issuer and host parameters accordingly.
$ vim oauth2-proxy-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
  name: oauth-proxy
  namespace: auth-system
spec:
  rules:
  - host: dashboard.kubernetes.mydomain.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 8080
        path: /oauth2
  tls:
  - hosts:
    - dashboard.kubernetes.mydomain.com
    secretName: kubernetes-dashboard-external-tls
11. Create an ingress for the OAuth2 proxy service.
$ kubectl apply -f oauth2-proxy-ingress.yaml
12. Create a role binding.
$ kubectl create rolebinding <username>-rolebinding-<namespace> --clusterrole=admin --user=<username> -n <namespace>
For example:
$ kubectl create rolebinding mkemei-rolebinding-default --clusterrole=admin --user=[email protected] -n default
Note that usernames are case-sensitive; confirm the correct format before applying the role binding.
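Because the API server was configured with oidc-groups-claim: groups, you can also bind roles to an entire Active Directory group instead of individual users. A hedged example; the group DN here is hypothetical and must match a memberOf value that Dex returns for your users:
$ kubectl create rolebinding k8s-admins-rolebinding-default --clusterrole=admin --group="CN=k8s-admins,OU=computingforgeeks,DC=computingforgeeks,DC=net" -n default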
13. Wait a few minutes, and then browse to https://dashboard.kubernetes.mydomain.com.
14. Log in with your Active Directory user, for example [email protected]. You should be able to view and modify the default namespace.
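The same Dex setup can also back kubectl logins, which is what the https://kubectl.kubernetes.mydomain.com/callback redirect URI in the ConfigMap is for. A minimal sketch using kubectl's built-in OIDC auth provider; the id-token and refresh-token values are placeholders you would obtain from an OIDC login helper:
$ kubectl config set-credentials mkemei \
    --auth-provider=oidc \
    --auth-provider-arg=idp-issuer-url=https://auth.kubernetes.mydomain.com/ \
    --auth-provider-arg=client-id=oidc-auth-client \
    --auth-provider-arg=client-secret=secret \
    --auth-provider-arg=id-token=<ID_TOKEN> \
    --auth-provider-arg=refresh-token=<REFRESH_TOKEN>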
Check out more articles about Kubernetes:
Use Kubernetes Operational View to monitor Kubernetes deployment
How to send Kubernetes logs to external Elasticsearch
How to perform Git clone in Kubernetes Pod deployment