Installing a production Kubernetes cluster with Rancher RKE

How do I use RKE to deploy a production-ready Kubernetes cluster? Kubernetes has gained great appeal and is now the standard orchestration layer for containerized workloads. If you want an open source system to automate the deployment, scaling, and management of containerized applications, Kubernetes is the tool for you.

There are many standard methods for deploying production-grade Kubernetes clusters, including tools such as kops and Kubespray, or manually setting up the cluster with kubeadm. We have some guides for your reference:

Deploy a production-ready Kubernetes cluster with Ansible and Kubespray

Kubernetes cluster deployment on CentOS 7 / CentOS 8 using Ansible and Calico CNI

This guide walks you through the simple steps of installing a production-grade Kubernetes cluster with RKE. We will set up a 5-node cluster using Rancher Kubernetes Engine (RKE) and install the Rancher charts using the Helm package manager.

What is RKE?

Rancher Kubernetes Engine (RKE) is an extremely simple, lightning-fast Kubernetes distribution that runs entirely inside a container. Rancher is a container management platform built for organizations that deploy containers in production. Rancher makes it easy to run Kubernetes anywhere, meet IT requirements, and empower DevOps teams.

Prepare the workstation

Several CLI tools are required on the workstation from which the deployment will be run. This can also be a virtual machine that has access to the cluster nodes.

  1. kubectl:
--- Linux ---
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client

--- macOS ---
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client

  2. rke:

--- Linux ---
curl -s https://api.github.com/repos/rancher/rke/releases/latest | grep download_url | grep amd64 | cut -d '"' -f 4 | wget -qi -
chmod +x rke_linux-amd64
sudo mv rke_linux-amd64 /usr/local/bin/rke
rke --version

--- macOS ---
curl -s https://api.github.com/repos/rancher/rke/releases/latest | grep download_url | grep darwin-amd64 | cut -d '"' -f 4 | wget -qi -
chmod +x rke_darwin-amd64
sudo mv rke_darwin-amd64 /usr/local/bin/rke
rke --version

  3. Helm:

--- Helm 3 ---
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
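
Once the script completes, you can confirm the installation:

helm version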

Install Kubernetes with RKE

I will be working with 5 nodes:

  • 3 master nodes – etcd and control plane (3 for HA)
  • 2 worker nodes – scale to meet your workload

These are the specifications of my setup.

  • Master nodes – 8 GB RAM and 4 vcpus
  • Worker nodes – 16 GB RAM and 8 vcpus

RKE supported operating systems

RKE runs on almost any Linux operating system with Docker installed. Rancher has tested and supports:

  • Red Hat Enterprise Linux
  • Oracle Enterprise Linux
  • CentOS Linux
  • Ubuntu
  • RancherOS

Step 1: Update your Linux system

The first step is to update the Linux machines that will be used to build the cluster.

--- CentOS ---
$ sudo yum -y update
$ sudo reboot

--- Ubuntu / Debian ---
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo reboot

Step 2: Create rke user

If you are using Red Hat Enterprise Linux, Oracle Enterprise Linux or CentOS, the root user cannot be used as the SSH user due to Bugzilla 1527565. We will therefore create a user named rke for deployment purposes.

Using Ansible Playbook:

---
- name: Create rke user with passwordless sudo
  hosts: rke-hosts
  remote_user: root
  tasks:
    - name: Add RKE admin user
      user:
        name: rke
        shell: /bin/bash
     
    - name: Create sudo file
      file:
        path: /etc/sudoers.d/rke
        state: touch
    
    - name: Give rke user passwordless sudo
      lineinfile:
        path: /etc/sudoers.d/rke
        state: present
        line: 'rke ALL=(ALL:ALL) NOPASSWD: ALL'
     
    - name: Set authorized key taken from file
      authorized_key:
        user: rke
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
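
Save the playbook and run it against your inventory before moving on. For example, if it is saved as create-rke-user.yml and your inventory file hosts defines the rke-hosts group (both file names here are just examples):

ansible-playbook -i hosts create-rke-user.yml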

Manually create users on all hosts

Log in to each cluster node and create an rke user.

sudo useradd rke
sudo passwd rke

Enable passwordless sudo for the user:

$ sudo vim /etc/sudoers.d/rke
rke  ALL=(ALL:ALL) NOPASSWD: ALL

Copy your ssh public key to the user’s ~/.ssh/authorized_keys file.

for i in rke-master-01 rke-master-02 rke-master-03 rke-worker-01 rke-worker-02; do
  ssh-copy-id rke@$i
done

Confirm that you can log in from the workstation:

$ ssh rke@rke-master-01
Warning: Permanently added 'rke-master-01,x.x.x.x' (ECDSA) to the list of known hosts.
[rke@rke-master-01 ~]$ sudo su - # No password prompt
Last login: Mon Jan 27 21:28:53 CET 2020 from y.y.y.y on pts/0
[root@rke-master-01 ~]# exit
[rke@rke-master-01 ~]$ exit
logout
Connection to rke-master-01 closed.

Step 3: Enable the required kernel modules

Use Ansible:

Create a playbook with the following content and run it against your RKE server manifest.

---
- name: Load RKE kernel modules
  hosts: rke-hosts
  remote_user: root
  vars:
    kernel_modules:
      - br_netfilter
      - ip6_udp_tunnel
      - ip_set
      - ip_set_hash_ip
      - ip_set_hash_net
      - iptable_filter
      - iptable_nat
      - iptable_mangle
      - iptable_raw
      - nf_conntrack_netlink
      - nf_conntrack
      - nf_conntrack_ipv4
      - nf_defrag_ipv4
      - nf_nat
      - nf_nat_ipv4
      - nf_nat_masquerade_ipv4
      - nfnetlink
      - udp_tunnel
      - veth
      - vxlan
      - x_tables
      - xt_addrtype
      - xt_conntrack
      - xt_comment
      - xt_mark
      - xt_multiport
      - xt_nat
      - xt_recent
      - xt_set
      - xt_statistic
      - xt_tcpudp

  tasks:
    - name: Load kernel modules for RKE
      modprobe:
        name: "{{ item }}"
        state: present
      with_items: "{{ kernel_modules }}"
 

Manual mode

Log in to each host and enable the kernel modules required to run Kubernetes.

for module in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp; do
  if ! lsmod | grep -q $module; then
    echo "Loading kernel module: $module"
    sudo modprobe $module
  fi
done
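
Modules loaded with modprobe do not persist across reboots. If you also want them loaded at boot, one option (a sketch that assumes a systemd-based distribution reading /etc/modules-load.d/, with rke.conf as an arbitrary file name) is:

# Write the same module list to a modules-load.d file so it is re-loaded at boot
printf '%s\n' br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp | sudo tee /etc/modules-load.d/rke.conf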

Step 4: Disable swap and modify sysctl entries

Kubernetes recommends disabling swap and setting a few sysctl values.

With Ansible:

---
- name: Disable swap and load kernel modules
  hosts: rke-hosts
  remote_user: root
  tasks:
    - name: Disable SWAP since kubernetes can't work with swap enabled (1/2)
      shell: |
        swapoff -a
     
    - name: Disable SWAP in fstab since kubernetes can't work with swap enabled (2/2)
      replace:
        path: /etc/fstab
        regexp: '^([^#].*?\sswap\s+.*)$'
        replace: '# \1'
    - name: Modify sysctl entries
      sysctl:
        name: '{{ item.key }}'
        value: '{{ item.value }}'
        sysctl_set: yes
        state: present
        reload: yes
      with_items:
        - {key: net.bridge.bridge-nf-call-ip6tables, value: 1}
        - {key: net.bridge.bridge-nf-call-iptables,  value: 1}
        - {key: net.ipv4.ip_forward,  value: 1}

Manual mode

Swap:

$ sudo vim /etc/fstab
# Add comment to swap line

$ sudo swapoff -a

Sysctl:

$ sudo tee -a /etc/sysctl.d/99-kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
$ sudo sysctl --system

Confirm that swap is disabled:

$ free -h
              total        used        free      shared  buff/cache   available
Mem:           7.6G        180M        6.8G        8.5M        633M        7.2G
Swap:            0B          0B          0B

Step 5: Install a supported version of Docker

Each Kubernetes version supports specific Docker versions. The Kubernetes release notes contain the current list of validated Docker versions.

As of this writing, the supported docker versions are:

  • 18.09.2 – curl https://releases.rancher.com/install-docker/18.09.2.sh | sh
  • 18.06.2 – curl https://releases.rancher.com/install-docker/18.06.2.sh | sh
  • 17.03.2 – curl https://releases.rancher.com/install-docker/17.03.2.sh | sh

You can follow the Docker installation documentation or use one of Rancher's installation scripts to install Docker. I will install the latest supported version:

curl https://releases.rancher.com/install-docker/18.09.2.sh | sudo bash -

Start and enable the docker service:

sudo systemctl enable --now docker

Make sure you have a supported Docker version installed on your machine:

$ sudo docker version --format '{{.Server.Version}}'
18.09.2

Add the rke user to the docker group.

$ sudo usermod -aG docker rke
$ id rke
uid=1000(rke) gid=1000(rke) groups=1000(rke),994(docker)

Step 6: Open ports on the firewall

  • For a single-node installation, you only need to open the ports required for Rancher to communicate with the downstream user clusters.
  • For high-availability installations, the same ports need to be open, plus additional ports required to set up the Kubernetes cluster on which Rancher is installed.

Check the port requirements page for the full list.

Firewall TCP port:

for i in 22 80 443 179 5473 6443 8472 2376 2379-2380 9099 10250 10251 10252 10254 30000-32767; do
    sudo firewall-cmd --add-port=${i}/tcp --permanent
done
sudo firewall-cmd --reload

Firewall UDP port:

for i in 8285 8472 4789 30000-32767; do
   sudo firewall-cmd --add-port=${i}/udp --permanent
done
sudo firewall-cmd --reload

Step 7: Allow SSH TCP forwarding

You need to enable TCP forwarding system-wide on the SSH server.

Open the ssh configuration file located at /etc/ssh/sshd_config:

$ sudo vi /etc/ssh/sshd_config
AllowTcpForwarding yes
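
If you prefer a non-interactive edit, a sed one-liner along these lines should also work (a sketch; verify it against your actual sshd_config before relying on it):

sudo sed -i 's/^#\?AllowTcpForwarding.*/AllowTcpForwarding yes/' /etc/ssh/sshd_config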

After making changes, restart the ssh service.

--- CentOS ---
$ sudo systemctl restart sshd

--- Ubuntu ---
$ sudo systemctl restart ssh

Step 8: Generate the RKE cluster configuration file

RKE uses a cluster configuration file, called cluster.yml, to determine which nodes will be in the cluster and how to deploy Kubernetes.

There are many configuration options that can be set in cluster.yml. The file can be created from a minimal example template or generated with the rke config command.

Run the rke config command to create a new cluster.yml in the current directory.

rke config --name cluster.yml

The command prompts you for all the information needed to build the cluster.

If you want to create an empty template cluster.yml file instead, pass the --empty flag.

rke config --empty --name cluster.yml

This is what my cluster configuration file looks like – don't copy and paste it, just use it as a reference to create your own configuration.

# https://rancher.com/docs/rke/latest/en/config-options/
nodes:
- address: 10.10.1.10
  internal_address:
  hostname_override: rke-master-01
  role: [controlplane, etcd]
  user: rke
- address: 10.10.1.11
  internal_address:
  hostname_override: rke-master-02
  role: [controlplane, etcd]
  user: rke
- address: 10.10.1.12
  internal_address:
  hostname_override: rke-master-03
  role: [controlplane, etcd]
  user: rke
- address: 10.10.1.13
  internal_address:
  hostname_override: rke-worker-01
  role: [worker]
  user: rke
- address: 10.10.1.114
  internal_address:
  hostname_override: rke-worker-02
  role: [worker]
  user: rke

# using a local ssh agent 
# Using SSH private key with a passphrase - eval `ssh-agent -s` && ssh-add
ssh_agent_auth: true

#  SSH key that access all hosts in your cluster
ssh_key_path: ~/.ssh/id_rsa

# By default, the name of your cluster will be local
# Set different Cluster name
cluster_name: rke

# Fail for Docker version not supported by Kubernetes
ignore_docker_version: false

# prefix_path: /opt/custom_path

# Set kubernetes version to install: https://rancher.com/docs/rke/latest/en/upgrades/#listing-supported-kubernetes-versions
# Check with -> rke config --list-version --all
kubernetes_version:
# Etcd snapshots
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
    snapshot: true
    creation: 6h
    retention: 24h

  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767
    pod_security_policy: false

  kube-controller:
    # CIDR pool used to assign IP addresses to pods in the cluster
    cluster_cidr: 10.42.0.0/16
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.43.0.0/16

  kubelet:
    # Base domain for the cluster
    cluster_domain: cluster.local
    # IP address for the DNS service endpoint
    cluster_dns_server: 10.43.0.10
    # Fail if swap is on
    fail_swap_on: false
    # Set max pods to 150 instead of default 110
    extra_args:
      max-pods: 150

# Configure  network plug-ins 
# RKE provides the following network plug-ins that are deployed as add-ons: flannel, calico, weave, and canal
# After you launch the cluster, you cannot change your network provider.
# Setting the network plug-in
network:
    plugin: canal
    options:
      canal_flannel_backend_type: vxlan

# Specify DNS provider (coredns or kube-dns)
dns:
  provider: coredns

# Currently, only authentication strategy supported is x509.
# You can optionally create additional SANs (hostnames or IPs) to
# add to the API server PKI certificate.
# This is useful if you want to use a load balancer for the
# control plane servers.
authentication:
  strategy: x509
  sans:
    - "k8s.computingforgeeks.com"

# Set Authorization mechanism
authorization:
    # Use `mode: none` to disable authorization
    mode: rbac

# Currently only nginx ingress provider is supported.
# To disable ingress controller, set `provider: none`
# `node_selector` controls ingress placement and is optional
ingress:
  provider: nginx
  options:
     use-forwarded-headers: "true"

In my configuration, the master nodes only have the etcd and controlplane roles. But you can make them schedulable for pods by also adding the worker role.

role: [controlplane, etcd, worker]

Step 9: Deploy the Kubernetes cluster with RKE

Once you have created the cluster.yml file, you can deploy the cluster with a simple command.

rke up

This command assumes the cluster.yml file is in the directory where the command is run. If you use a different file name, specify it as shown below.

$ rke up --config ./rancher_cluster.yml

If you are using an SSH private key with a passphrase, load it into the SSH agent first: eval `ssh-agent -s` && ssh-add

Make sure the output does not show any failures:

......
INFO[0181] [sync] Syncing nodes Labels and Taints       
INFO[0182] [sync] Successfully synced nodes Labels and Taints 
INFO[0182] [network] Setting up network plugin: canal   
INFO[0182] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0183] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0183] [addons] Executing deploy job rke-network-plugin 
INFO[0189] [addons] Setting up coredns                  
INFO[0189] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0189] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0189] [addons] Executing deploy job rke-coredns-addon 
INFO[0195] [addons] CoreDNS deployed successfully..     
INFO[0195] [dns] DNS provider coredns deployed successfully 
INFO[0195] [addons] Setting up Metrics Server           
INFO[0195] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0196] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0196] [addons] Executing deploy job rke-metrics-addon 
INFO[0202] [addons] Metrics Server deployed successfully 
INFO[0202] [ingress] Setting up nginx ingress controller 
INFO[0202] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0202] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0202] [addons] Executing deploy job rke-ingress-controller 
INFO[0208] [ingress] ingress controller nginx deployed successfully 
INFO[0208] [addons] Setting up user addons              
INFO[0208] [addons] no user addons defined              
INFO[0208] Finished building Kubernetes cluster successfully

Step 10: Access your Kubernetes cluster

As part of the cluster creation process, a kubeconfig file is created and written to kube_config_cluster.yml.

Set the KUBECONFIG variable to the generated file.

export KUBECONFIG=./kube_config_cluster.yml

Check the list of nodes in the cluster.

$ kubectl get nodes        
NAME             STATUS   ROLES               AGE     VERSION
rke-master-01    Ready    controlplane,etcd   16m     v1.17.0
rke-master-02    Ready    controlplane,etcd   16m     v1.17.0
rke-master-03    Ready    controlplane,etcd   16m     v1.17.0
rke-worker-01    Ready    worker              6m33s   v1.17.0
rke-worker-02    Ready    worker              16m     v1.17.0
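
You can also confirm that the core cluster components are running, for example:

$ kubectl get pods -n kube-system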

You can copy this file to $HOME/.kube/config if you don't have another Kubernetes cluster.
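
For example (note that this overwrites any existing kubeconfig at that path):

mkdir -p $HOME/.kube
cp ./kube_config_cluster.yml $HOME/.kube/config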

Step 11: Install the Kubernetes Dashboard

If you want to deploy containerized applications on Kubernetes through the dashboard, use the guide below.

How to install Kubernetes dashboard with NodePort

In the next guide, we will cover the installation of Rancher, an open source multi-cluster orchestration platform that makes it easy to manage and secure your enterprise Kubernetes clusters.
