Kubernetes cluster on RHEL 7

Dinesh Rivankar
5 min read · Jun 4, 2020

Kubernetes 101

Kubernetes is an open-source container orchestration engine originally developed by Google. It automates the deployment, scaling, and management of containerized applications. The containerization era brought a need to manage the containers that run an application, scaling them or handling failover when required.

Let’s take the failover scenario as an example. In a traditional system, failover needs human intervention: a separate process must be set up to monitor the health of the application and, when the application is unreachable, notify the dev team to take immediate action. This becomes troublesome if the monitoring process itself has an issue, or if the dev team is unavailable at that instant. Kubernetes comes in handy in this situation, because failover handling is a feature available out of the box.

Other Kubernetes features include automated rollouts and rollbacks, load balancing and service discovery, self-healing, resource allocation, and more.

A Kubernetes cluster is a collection of nodes on which applications are deployed. Nodes fall into two categories: Master nodes and Worker nodes. The Master node is responsible for managing and maintaining the cluster, while Worker nodes run the applications.

We will be using the architecture below for the Kubernetes cluster creation.

Let’s understand the terminology in more detail.

Flannel: A virtual network that gives each host a subnet for the container runtime. Its advantage is reducing the complexity of port mapping.

API Server: A REST-based service that exposes the shared state through which all components (e.g., Pods, controllers) interact.

Scheduler: The scheduler is responsible for finding the best node for a given Pod. Nodes that meet the scheduling requirements of a Pod are called feasible nodes. If no feasible node is available, the Pod remains unscheduled until the scheduler can place it.

etcd: etcd is a consistent and highly available key value store used as Kubernetes’ backing store for all cluster data.

Controller Manager: A control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.

Pods: A Pod is a group of one or more containers with shared storage and a specification for how to run the containers. A Pod’s containers are always co-located and co-scheduled on the same node (a minimal example manifest follows this list).

Containers: A unit of software packaged with all of its dependencies so that it runs reliably from one computing environment to another.
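To make the Pod concept concrete, here is a minimal Pod manifest as a sketch; the names and image are illustrative, not part of the setup below. It can be applied with kubectl apply -f once the cluster is running.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # illustrative name
spec:
  containers:
  - name: demo              # a single container in the Pod
    image: nginx            # any container image reachable from the node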

The best way to learn is to get your hands dirty!!

We will take 4 machines as described in the architecture diagram above. Make sure all the machines have internet access and are not blocked by a corporate firewall from downloading packages / Docker images.

- Kubernetes Master
- Kubernetes Worker 1
- Kubernetes Worker 2
- Kubernetes Worker 3

# 0 Prerequisites (All machines)

0.1 Disable swap and SELinux

swapoff -a     # turn off all swap devices (the kubelet requires swap to be disabled)
setenforce 0   # put SELinux into permissive mode for the current session
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux   # persist across reboots
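Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, a common approach is to also comment out the swap entry in /etc/fstab:

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   # comment out any swap mount entries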

0.2 Install Docker engine

Follow the steps on the official Docker website to install and configure Docker. The current setup was tested with Docker version 19.03.8.
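If you want a quick reference without leaving this page, the commands below are a minimal sketch of Docker’s official CentOS/RHEL 7 instructions:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io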

If the Docker installation fails with an error saying Requires: container-selinux >= 2:2.74, run the command below:

sudo yum install -y http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.107-1.el7_6.noarch.rpm

0.3 Install Kubernetes

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl

0.4 Enable and Start docker and Kubernetes services

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

0.5 Check status of docker and Kubernetes services

Make sure both services are active and running. (On a fresh install the kubelet may restart in a loop until kubeadm init or kubeadm join is run; that is expected.)

systemctl status docker
systemctl status kubelet
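To verify both at once, systemctl can report just the active state; it prints one state per line and shows active for a running service:

systemctl is-active docker kubelet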

# 1 Setup Master Node

1.1 Initialize Kubernetes

kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr option specifies the range of IP addresses for the Pod network; 10.244.0.0/16 is the range expected by Flannel’s default manifest.
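If the master has more than one network interface, kubeadm also accepts an --apiserver-advertise-address flag to pin the address the API server advertises. A sketch, with a placeholder to replace by your master’s IP:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<master-ip>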

The init command prints a join token that the worker nodes will use to join the cluster. Copy this token for later use; don’t worry if you miss it, as it can be reprinted with the command below.

sudo kubeadm token create --print-join-command

1.2 Copy the configs for your user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
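Alternatively, if you are working as the root user, you can point kubectl directly at the admin config, as suggested in kubeadm’s own output:

export KUBECONFIG=/etc/kubernetes/admin.conf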

1.3 Setup Flannel network

sudo sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

# 2 Setup Worker Nodes

2.1 Configure network bridge

sudo sysctl net.bridge.bridge-nf-call-iptables=1
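This setting does not survive a reboot. To persist it, a common approach (also used in the kubeadm documentation) is to drop it into /etc/sysctl.d and reload; the file name here is arbitrary:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system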

2.2 Join Kubernetes cluster

Paste the join command you copied earlier to join the cluster, e.g.

kubeadm join xx.xxx.xx.xxx:6443 --token up2d2l.kfk937ns76sh4d --discovery-token-ca-cert-hash sha256:judsdidy323y23nds6scc9f9ss0f9f99fv9s99c99fv9fvsnsn2223k4j454m3nj44n

# 3 Cluster testing

Log in to the master node and run the command below to get the details of the cluster.

kubectl get nodes

Output:

NAME                  STATUS   ROLES    AGE     VERSION
kubernetes-master     Ready    master   5d21h   v1.18.3
kubernetes-worker-1   Ready    <none>   5d21h   v1.18.2
kubernetes-worker-2   Ready    <none>   5d21h   v1.18.2
kubernetes-worker-3   Ready    <none>   5d21h   v1.18.2

Use the command below to get node details:

kubectl describe node <nodeName>

That’s it!! Your cluster is ready, and you can deploy applications by creating a simple deployment configuration file. A sample follows.
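As a minimal sketch (the names and image below are illustrative), an nginx Deployment with two replicas can be created straight from the master node using a heredoc, in the same style as the repo file above:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # illustrative name
spec:
  replicas: 2                   # run two Pods across the workers
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17       # any image reachable from the nodes
        ports:
        - containerPort: 80
EOF

Once applied, kubectl get pods -o wide shows which Worker node each Pod landed on.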

# 4 Delete all the workloads from the Worker nodes

Run the command below on the Master node to remove the deployed workloads. It deletes every resource (Pods, Services, Deployments, etc.) in the default namespace, including the Pods and containers created on the Worker nodes.

kubectl delete all --all

# 5 Remove Worker node from Cluster

Log in to worker node 3, which needs to be removed from the cluster.
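Before resetting, it is good practice to first drain the node from the Master so its Pods are evicted gracefully; this is the standard kubeadm removal flow, with flags per the v1.18 docs:

kubectl drain kubernetes-worker-3 --delete-local-data --force --ignore-daemonsets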

5.1 Reset node

The command below forces the worker node to leave the cluster.

kubeadm reset -f

5.2 Remove the node from the Master

Now log in to the Master node and remove the node’s entry from the cluster.

kubectl delete node kubernetes-worker-3
kubectl get nodes

Output:

NAME                  STATUS   ROLES    AGE     VERSION
kubernetes-master     Ready    master   5d21h   v1.18.3
kubernetes-worker-1   Ready    <none>   5d21h   v1.18.2
kubernetes-worker-2   Ready    <none>   5d21h   v1.18.2

Happy Learning !!!
