Upgrade Kubernetes cluster on OpenStack Magnum

How do you upgrade a Kubernetes cluster powered by OpenStack Magnum? For managed Kubernetes services, such as those provided through the Magnum orchestration engine, rolling upgrades are an important feature that users may need. The main prerequisite for this article is a working Kubernetes cluster deployed on OpenStack using Magnum.

Please note that Kubernetes version upgrades are only supported with the Fedora Atomic and Fedora CoreOS drivers. This is because these base operating systems are designed to tolerate the disruption caused by automatic updates. My cluster uses Fedora CoreOS as the base operating system:

$ cat /etc/os-release
NAME=Fedora
VERSION="34.20210427.3.0 (CoreOS)"
ID=fedora
VERSION_ID=34
VERSION_CODENAME=""
PLATFORM_ID="platform:f34"
PRETTY_NAME="Fedora CoreOS 34.20210427.3.0"
ANSI_COLOR="0;38;2;60;110;180"
LOGO=fedora-logo-icon
CPE_NAME="cpe:/o:fedoraproject:fedora:34"
HOME_URL="https://getfedora.org/coreos/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora-coreos/"
SUPPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
BUG_REPORT_URL="https://github.com/coreos/fedora-coreos-tracker/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
....

The kube_tag label allows users to select a specific Kubernetes version by choosing the container image tag used with the Fedora CoreOS driver. If this label is not set, the default Kubernetes version of the installed Magnum release is used during cluster creation. You can consult the Magnum and Kubernetes version compatibility matrix to see which versions are supported.
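
If you want to see which kube_tag an existing template carries, you can inspect its labels with something like the following (the template name is the one used later in this article):

openstack coe cluster template show k8s-cluster-template-v1.18.2 -c labels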

Step 1: Determine the current Kubernetes version

Get the current Kubernetes version:

$ kubectl version --short
Server Version: v1.18.2
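
It also helps to record the version each node currently reports, so you have a baseline to compare against after the upgrade:

$ kubectl get nodes -o wide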

We will follow the steps below to upgrade our cluster:

  1. Create a new Magnum Kubernetes cluster template. This should be similar to the previous template, except that the kube_tag label refers to a newer version of Kubernetes.
  2. Start a rolling upgrade of the cluster.

Step 2: Use the upgraded version to create a new cluster template

Check the Magnum and Kubernetes version compatibility matrix to get a clear picture of the versions supported by your OpenStack Magnum installation. Since my setup is based on Victoria, I should be able to upgrade from version 1.18.2 to 1.18.9.

If you need more detail about a particular minor version, the GitHub releases page contains all Kubernetes release notes.
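
If you are not sure which template an existing cluster was built from, you can look it up on the cluster itself (the cluster name here is from my environment, and the exact field name may vary slightly between releases):

openstack coe cluster show k8s-cluster-02 -c cluster_template_id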

My cluster is deployed from the following template:

# Cluster Template Creation
openstack coe cluster template create k8s-cluster-template-v1.18.2 \
   --image Fedora-CoreOS-34 \
   --keypair admin \
   --external-network public \
   --fixed-network private \
   --fixed-subnet private_subnet \
   --dns-nameserver 8.8.8.8 \
   --flavor m1.medium \
   --master-flavor m1.medium \
   --volume-driver cinder \
   --docker-volume-size 5 \
   --network-driver calico \
   --docker-storage-driver overlay2 \
   --coe kubernetes \
   --labels kube_tag=v1.18.2

# Initial Cluster Creation
openstack coe cluster create k8s-cluster-02 \
    --cluster-template k8s-cluster-template-v1.18.2 \
    --master-count 1 \
    --node-count 1

If some pods that start after cluster creation fail with the error "Forbidden: PodSecurityPolicy: Pods cannot be accepted: []", consider adding the following label:

--labels admission_control_list="NodeRestriction,NamespaceLifecycle,Limi

Before proceeding with the upgrade, confirm that cluster creation has completed and the cluster is in a healthy state:

$ openstack coe cluster list  -f json
[
  {
    "uuid": "48eb36b9-7f8b-4442-8637-bebcf078ca8b",
    "name": "k8s-cluster-01",
    "keypair": "admin",
    "node_count": 2,
    "master_count": 1,
    "status": "CREATE_COMPLETE",
    "health_status": "HEALTHY"
  },
  {
    "uuid": "e5ebf8aa-38f0-4082-a665-5bdb4f4769f9",
    "name": "k8s-cluster-02",
    "keypair": "admin",
    "node_count": 1,
    "master_count": 1,
    "status": "CREATE_COMPLETE",
    "health_status": "HEALTHY"
  }
]
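
You can also pull down the kubeconfig now and confirm the nodes are Ready from the Kubernetes side before starting the upgrade (a quick sketch; the same kubeconfig download step is shown again later in this article):

mkdir -p k8s-cluster-02
openstack coe cluster config --dir ./k8s-cluster-02 k8s-cluster-02
export KUBECONFIG=./k8s-cluster-02/config
kubectl get nodes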

Create a new cluster template with an updated version. Mine is as follows:

openstack coe cluster template create k8s-cluster-template-v1.18.9 \
   --image Fedora-CoreOS-34 \
   --keypair admin \
   --external-network public \
   --fixed-network private \
   --fixed-subnet private_subnet \
   --dns-nameserver 8.8.8.8 \
   --flavor m1.medium \
   --master-flavor m1.medium \
   --volume-driver cinder \
   --docker-volume-size 5 \
   --network-driver calico \
   --docker-storage-driver overlay2 \
   --coe kubernetes \
   --labels kube_tag=v1.18.9

Confirm that the creation is successful:

$ openstack coe cluster template list -f json
[
  {
    "uuid": "b05dcb03-07a7-4b66-beee-42383ff16e9b",
    "name": "k8s-cluster-template"
  },
  {
    "uuid": "77cc9112-b7ba-4531-9be5-6923528cd0eb",
    "name": "k8s-cluster-template-v1.18.2"
  },
  {
    "uuid": "cc33f457-866a-440f-ac78-6c3be713ef73",
    "name": "k8s-cluster-template-v1.18.9"
  }
]

Important notes:

  • The highest version you can upgrade to on OpenStack Victoria and earlier is v1.18.9. This is because official hyperkube images are no longer published for Kubernetes releases newer than 1.18.x, and these OpenStack releases do not provide a label for specifying a custom prefix for the hyperkube container image source.
  • If you are running OpenStack Wallaby, you can add the hyperkube_prefix label to specify a custom prefix for the hyperkube container image source, for example:
# Example prefixes:
#   docker.io/rancher/
#   docker.io/kubesphere/
# Example label:
--labels kube_tag=v1.21.1,hyperkube_prefix=docker.io/rancher/

# Checking available tags
sudo podman image search docker.io/rancher/hyperkube --list-tags --limit 1000

You can also pull, tag, and push the image to your own registry or to docker.io:

# Examples
## Search available tags for a particular release
podman image search docker.io/rancher/hyperkube --list-tags --limit 1000 | grep 1.21

# Pull
podman pull docker.io/rancher/hyperkube:v1.21.1-rancher1

# Login to docker.io
$ podman login docker.io
Username: jmutai
Password:
Login Succeeded!

# Tag image
$ podman tag docker.io/rancher/hyperkube:v1.21.1-rancher1 docker.io/jmutai/hyperkube:v1.21.1

# Push image to registry
$ podman push docker.io/jmutai/hyperkube:v1.21.1

# I can then use the labels below in the template
--labels kube_tag=v1.21.1,hyperkube_prefix=docker.io/jmutai/
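
For reference, a Wallaby-era template targeting v1.21.1 would look much like the earlier templates, only with the updated labels. This is a sketch based on my environment; the image, flavors, networks and registry prefix are assumptions carried over from the examples above:

openstack coe cluster template create k8s-cluster-template-v1.21.1 \
   --image Fedora-CoreOS-34 \
   --keypair admin \
   --external-network public \
   --fixed-network private \
   --fixed-subnet private_subnet \
   --dns-nameserver 8.8.8.8 \
   --flavor m1.medium \
   --master-flavor m1.medium \
   --volume-driver cinder \
   --docker-volume-size 5 \
   --network-driver calico \
   --docker-storage-driver overlay2 \
   --coe kubernetes \
   --labels kube_tag=v1.21.1,hyperkube_prefix=docker.io/jmutai/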

Step 3: Upgrade your Kubernetes cluster with the new template

Run the following command to trigger a rolling upgrade of the Kubernetes version:

$ openstack coe cluster upgrade <cluster ID> <new cluster template ID>

Example:

$ openstack coe cluster upgrade k8s-cluster-02 k8s-cluster-template-v1.18.9
Request to upgrade cluster k8s-cluster-02 has been accepted.

The status should show that the update is in progress.

$ openstack coe cluster list --column name  --column status --column health_status
+----------------+--------------------+---------------+
| name           | status             | health_status |
+----------------+--------------------+---------------+
| k8s-cluster-02 | UPDATE_IN_PROGRESS | UNHEALTHY     |
+----------------+--------------------+---------------+
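
You can keep polling the cluster until the status changes from UPDATE_IN_PROGRESS to UPDATE_COMPLETE, for example (assuming the watch utility is installed):

watch -n 30 openstack coe cluster show k8s-cluster-02 -c status -c health_status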

Check the cluster status after the upgrade is complete:

$ openstack coe cluster list --column name  --column status --column health_status
+----------------+-----------------+---------------+
| name           | status          | health_status |
+----------------+-----------------+---------------+
| k8s-cluster-01 | CREATE_COMPLETE | HEALTHY       |
| k8s-cluster-02 | UPDATE_COMPLETE | HEALTHY       |
+----------------+-----------------+---------------+

Let’s download kubeconfig and confirm the status:

$ mkdir k8s-cluster-02 
$ openstack coe cluster config --dir ./k8s-cluster-02 k8s-cluster-02 --force

Check the Kubernetes version:

$ export KUBECONFIG=./k8s-cluster-02/config
$ kubectl version --short
Client Version: v1.21.1
Server Version: v1.18.9
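
To go beyond 1.18.9, for example to v1.21.1 on a Wallaby-based cloud, the same rolling upgrade command is simply repeated against a newer template, such as the v1.21.1 template sketched earlier:

$ openstack coe cluster upgrade k8s-cluster-02 k8s-cluster-template-v1.21.1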

Example of upgrading to version 1.21.1:

$ kubectl version --short
Client Version: v1.21.1
Server Version: v1.21.1

$ kubectl get nodes
NAME                                   STATUS   ROLES    AGE   VERSION
k8s-cluster-02-soooe6tdv773-master-0   Ready    master   10h   v1.21.1
k8s-cluster-02-soooe6tdv773-node-0     Ready    <none>   10h   v1.21.1
k8s-cluster-02-soooe6tdv773-node-1     Ready    <none>   10h   v1.21.1
k8s-cluster-02-soooe6tdv773-node-2     Ready    <none>   10h   v1.21.1
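
As a final sanity check, confirm that the system pods came back up cleanly after the rolling upgrade:

$ kubectl get pods -n kube-system -o wide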

This confirmed the successful upgrade of the Kubernetes cluster on OpenStack Magnum.

References:

  • Magnum rolling upgrade user guide.