How to restart pods in Kubernetes

Sometimes you can get into a situation where you need to restart a Pod, for example, if it is stuck in an error state.

Depending on the restart policy, Kubernetes tries to restart the Pod and recover on its own.

But if that doesn’t work and you can’t find the source of the error, restarting the Kubernetes Pod manually is the fastest way to get your application up and running again.


Unfortunately there is no kubectl restart pod command for this purpose. Here are some ways to restart pods:

  1. Performing a rolling restart of the deployment
  2. Scaling the number of replicas

Let us show you both methods in detail.

Method 1: Rolling restart of the deployment

Starting with Kubernetes 1.15, you can perform a rolling restart of a deployment.

The controller terminates one pod at a time and relies on the ReplicaSet to scale up new pods until all of them are newer than the restart time. In our opinion, this is the best way to restart your pods, as your application will not experience downtime.

Note: The IP addresses of the individual pods will change.
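Under the hood, kubectl rollout restart simply patches the pod template with a restart timestamp annotation, which triggers an ordinary rolling update. As a sketch, an equivalent manual patch might look roughly like this (assuming a reachable cluster and the my-dep deployment used in the example below):

```shell
# Sketch: trigger a rolling restart by bumping the restartedAt annotation
# on the pod template -- roughly what "kubectl rollout restart" does.
kubectl patch deployment my-dep --patch \
  "{\"spec\": {\"template\": {\"metadata\": {\"annotations\": {\"kubectl.kubernetes.io/restartedAt\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"
```

Because only an annotation changes, the deployment's usual rolling-update strategy still governs how many pods are replaced at a time.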

Let’s take an example. You have a deployment named my-dep that has two pods (since replicas is set to two).

# kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
my-dep   2/2     2            2           13s

Let’s find out the details of the pod:

# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
my-dep-6d9f78d6c4-8j5fq   1/1     Running   0          47s   172.16.213.255   kworker-rj2   <none>           <none>
my-dep-6d9f78d6c4-rkhrz   1/1     Running   0          47s   172.16.213.35    kworker-rj1   <none>           <none>

Now, let’s perform a rolling restart of the my-dep deployment with a command of this form:

kubectl rollout restart deployment name_of_deployment

Do you remember the deployment name from the previous commands? Use this here:

# kubectl rollout restart deployment my-dep
deployment.apps/my-dep restarted

You can watch the process of shutting down old modules and creating new ones with the kubectl get pod -w command:

# kubectl get pod -w
NAME                      READY   STATUS              RESTARTS   AGE
my-dep-557548758d-kz6r7   1/1     Running             0          5s
my-dep-557548758d-svg7w   0/1     ContainerCreating   0          1s
my-dep-6d9f78d6c4-8j5fq   1/1     Running             0          69s
my-dep-6d9f78d6c4-rkhrz   1/1     Terminating         0          69s
my-dep-6d9f78d6c4-rkhrz   0/1     Terminating         0          69s
my-dep-557548758d-svg7w   0/1     ContainerCreating   0          1s
my-dep-557548758d-svg7w   1/1     Running             0          3s
my-dep-6d9f78d6c4-8j5fq   1/1     Terminating         0          71s
my-dep-6d9f78d6c4-8j5fq   0/1     Terminating         0          72s
my-dep-6d9f78d6c4-rkhrz   0/1     Terminating         0          74s
my-dep-6d9f78d6c4-rkhrz   0/1     Terminating         0          74s
my-dep-6d9f78d6c4-8j5fq   0/1     Terminating         0          76s
my-dep-6d9f78d6c4-8j5fq   0/1     Terminating         0          76s
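Instead of watching the pods interactively, you can also block until the rollout has finished, which is handy in scripts. A sketch, assuming the same my-dep deployment:

```shell
# Wait until the rolling restart has completed; exits with a non-zero
# status if the rollout fails or the optional timeout is exceeded.
kubectl rollout status deployment my-dep --timeout=120s
```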

If you check the pods now, you will see that their details have changed:

# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
my-dep-557548758d-kz6r7   1/1     Running   0          42s   172.16.213.43    kworker-rj1   <none>           <none>
my-dep-557548758d-svg7w   1/1     Running   0          38s   172.16.213.251   kworker-rj2   <none>           <none>

Method 2: Scaling the number of replicas

In a CI/CD environment, redeploying pods after an error can take a long time, since the entire build process has to run again.

A faster way is to use the kubectl scale command to set the replica count to zero; as soon as you set it back to a number greater than zero, Kubernetes creates new replicas.

Let’s try. Check your pods first:

# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-557548758d-kz6r7   1/1     Running   0          11m
my-dep-557548758d-svg7w   1/1     Running   0          11m

Get deployment information:

# kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
my-dep   2/2     2            2           12m

Now set the replica number to zero:

# kubectl scale deployment --replicas=0 my-dep
deployment.apps/my-dep scaled

And then set it back to two:

# kubectl scale deployment --replicas=2 my-dep
deployment.apps/my-dep scaled

Check the pods now:

# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-557548758d-d2pmd   1/1     Running   0          10s
my-dep-557548758d-gprnr   1/1     Running   0          10s
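Keep in mind that, unlike a rolling restart, scaling to zero terminates all pods at once, so the application is briefly unavailable until the new pods become ready. If you script this method, you can chain the two commands; a sketch, assuming the same my-dep deployment:

```shell
# Scale to zero and immediately back up; the application is down
# until the new pods pass their readiness checks.
kubectl scale deployment my-dep --replicas=0 \
  && kubectl scale deployment my-dep --replicas=2
```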

You have successfully restarted your Kubernetes pods.

Use either of the above methods to quickly and safely get your application running again.

After completing this exercise, make sure you find and fix the root cause, as restarting the pod will not fix the underlying problem.

I hope you enjoy this Kubernetes tip. Don’t forget to subscribe for more.
