Install Kubernetes Metrics Server on Amazon EKS cluster
In our previous guide, we discussed how to install a Kubernetes cluster on AWS using the Amazon EKS service. This is the fastest and easiest way to get a Kubernetes cluster running in AWS within a few minutes. The setup process is highly automated with eksctl, which uses a CloudFormation stack in the background to bootstrap a working cluster backed by Amazon Linux worker nodes.
In this tutorial, I will guide you through the steps of installing and configuring Kubernetes Metrics Server in an EKS cluster. Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes' built-in autoscaling pipelines. It collects resource metrics from the kubelet on each node and exposes them in the Kubernetes API server through the Metrics API, for use by the Horizontal Pod Autoscaler and the Vertical Pod Autoscaler (a sketch of how these metrics feed the HPA follows the list below).
Metrics Server provides:
- A single deployment that works on most clusters
- Scales to support clusters of up to 5,000 nodes
- Resource efficiency: Metrics Server uses 0.5m of CPU and 4 MB of memory per node
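As a quick illustration of how these metrics are consumed, here is a minimal sketch of creating a Horizontal Pod Autoscaler with kubectl. The deployment name web is hypothetical, and the autoscaler only functions once Metrics Server (installed below) is serving the Metrics API.
# Scale a hypothetical "web" deployment between 2 and 5 replicas,
# targeting 70% average CPU utilization as reported by Metrics Server.
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=5

# Inspect the resulting HorizontalPodAutoscaler object.
kubectl get hpa web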
Install Kubernetes Metrics Server on Amazon EKS cluster
Before installing Kubernetes Metrics Server on your Amazon EKS cluster, confirm that you have a running EKS cluster. You can use the eksctl command to list the available EKS clusters.
$ eksctl get cluster
NAME REGION
prod-eks-cluster eu-west-1
If a kubeconfig for the cluster is available locally, use it to confirm that the Kubernetes API server is responding.
$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-138-244.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976
ip-192-168-176-247.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976
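If you do not yet have a kubeconfig for the cluster on your workstation, either eksctl or the AWS CLI can generate one. The file path below is just an example, and both commands assume the cluster name and region shown above.
# Write a dedicated kubeconfig file for the cluster with eksctl
eksctl utils write-kubeconfig --cluster=prod-eks-cluster --region=eu-west-1 --kubeconfig=$HOME/.kube/eksctl/clusters/prod-eks-cluster

# Or merge the cluster credentials into your default kubeconfig with the AWS CLI
aws eks update-kubeconfig --name prod-eks-cluster --region eu-west-1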
Metrics Server requirements
Metrics Server has specific requirements for cluster and network configuration that are not the default in every cluster distribution. Before using Metrics Server, please ensure that your cluster distribution supports them (a quick check follows the list below):
- Metrics Server must be reachable from kube-apiserver
- kube-apiserver must be correctly configured to enable the aggregation layer
- Nodes must have kubelet authorization configured to match the Metrics Server configuration
- The container runtime must implement the container metrics RPCs
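On a managed EKS cluster these requirements are met out of the box, because AWS runs kube-apiserver with the aggregation layer enabled. If you want to confirm this yourself, one simple check is to list the registered API services; after Metrics Server is installed you should also see v1beta1.metrics.k8s.io in this output.
# List API services registered with the aggregation layer
kubectl get apiservices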
How to install Kubernetes Metrics Server on Amazon EKS cluster
Save the path to your kubeconfig in the KUBECONFIG environment variable.
export KUBECONFIG=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster
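This variable only applies to the current shell session. If you want it set in new sessions as well, you can append the same export line to your shell profile (the profile file used here is just an example):
echo 'export KUBECONFIG=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster' >> ~/.bash_profile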
Confirm that you can run the kubectl command without manually passing the path to the kubeconfig file.
$ kubectl get nodes
NAME                                            STATUS   ROLES    AGE   VERSION
ip-192-168-138-244.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976
ip-192-168-176-247.eu-west-1.compute.internal   Ready    <none>   13h   v1.17.9-eks-4c6976
Metrics Server manifests are published with each Metrics Server release on GitHub, which makes them installable directly from a URL:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
The output shows the resources created:
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
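If you prefer to review or keep a pinned copy of the manifest before applying it, here is a minimal alternative sketch (the local file name is arbitrary):
# Download the release manifest, inspect it, then apply the local copy
curl -sLo metrics-server-components.yaml https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
less metrics-server-components.yaml
kubectl apply -f metrics-server-components.yaml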
Use the following command to verify that the metrics-server deployment is running the desired number of pods:
$ kubectl get deployment metrics-server -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 7m23s
$ kubectl get pods -n kube-system | grep metrics
metrics-server-7cb45bbfd5-kbrt7 1/1 Running 0 8m42s
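If the pod does not reach the Running state, the deployment events and container logs are the first places to look. A short troubleshooting sketch:
# Show events and status for the metrics-server deployment
kubectl -n kube-system describe deployment metrics-server

# Show logs from the metrics-server container
kubectl -n kube-system logs deployment/metrics-server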
Confirm that the Metrics Server API service is active:
$ kubectl get apiservice v1beta1.metrics.k8s.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiregistration.k8s.io/v1beta1","kind":"APIService","metadata":{"annotations":{},"name":"v1beta1.metrics.k8s.io"},"spec":{"group":"metrics.k8s.io","groupPriorityMinimum":100,"insecureSkipTLSVerify":true,"service":{"name":"metrics-server","namespace":"kube-system"},"version":"v1beta1","versionPriority":100}}
  creationTimestamp: "2020-08-12T11:27:13Z"
  name: v1beta1.metrics.k8s.io
  resourceVersion: "130943"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
  uid: 83c44e41-6346-4dff-8ce2-aff665199209
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2020-08-12T11:27:18Z"
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
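If you only need the availability status rather than the full object, a jsonpath query keeps it to a single line. This is just a convenience, not part of the official installation steps.
kubectl get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'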
You can also use the kubectl top command to access the Metrics API. This makes it easier to debug the auto-scaling pipeline.
$ kubectl top --help
Display Resource (CPU/Memory/Storage) usage.
The top command allows you to see the resource consumption for nodes or pods.
This command requires Metrics Server to be correctly configured and working on the server.
Available Commands:
node Display Resource (CPU/Memory/Storage) usage of nodes
pod Display Resource (CPU/Memory/Storage) usage of pods
Usage:
kubectl top [flags] [options]
Use "kubectl --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
To display the resource usage (CPU/memory/storage) of the cluster nodes, run the following command:
$ kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ip-192-168-138-244.eu-west-1.compute.internal 50m 2% 445Mi 13%
ip-192-168-176-247.eu-west-1.compute.internal 58m 3% 451Mi 13%
Similar commands can be used for pods.
$ kubectl top pods -A
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system aws-node-glfrs 4m 51Mi
kube-system aws-node-sgh8p 5m 51Mi
kube-system coredns-6987776bbd-2mgxp 2m 6Mi
kube-system coredns-6987776bbd-vdn8j 2m 6Mi
kube-system kube-proxy-5glzs 1m 7Mi
kube-system kube-proxy-hgqm5 1m 8Mi
kube-system metrics-server-7cb45bbfd5-kbrt7 1m 11Mi
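When chasing resource pressure it can help to sort this output. Recent kubectl versions support a --sort-by flag for kubectl top; treat this as optional, depending on your kubectl version.
# Show the heaviest consumers first
kubectl top pods -A --sort-by=memory
kubectl top nodes --sort-by=cpu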
You can also use kubectl get --raw to retrieve raw resource usage metrics for all nodes in the cluster.
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "ip-192-168-176-247.eu-west-1.compute.internal",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-192-168-176-247.eu-west-1.compute.internal",
        "creationTimestamp": "2020-08-12T11:44:41Z"
      },
      "timestamp": "2020-08-12T11:44:17Z",
      "window": "30s",
      "usage": {
        "cpu": "55646953n",
        "memory": "461980Ki"
      }
    },
    {
      "metadata": {
        "name": "ip-192-168-138-244.eu-west-1.compute.internal",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-192-168-138-244.eu-west-1.compute.internal",
        "creationTimestamp": "2020-08-12T11:44:41Z"
      },
      "timestamp": "2020-08-12T11:44:09Z",
      "window": "30s",
      "usage": {
        "cpu": "47815890n",
        "memory": "454944Ki"
      }
    }
  ]
}
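The same raw Metrics API also exposes per-pod metrics; for example, to dump the metrics for pods in the kube-system namespace:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" | jq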
In the next article, we will see how to configure Horizontal Pod Autoscaler (HPA) in an EKS Kubernetes cluster. In the meantime, please check out other Kubernetes-related articles on our website.
How to force delete the Kubernetes namespace
Use kubeadm to install Kubernetes cluster on Ubuntu 20.04
How to install the Kubernetes dashboard using NodePort