Configuring a Multi-Node Kubernetes Cluster Using Ansible Roles
Some Prerequisite Details
What is Kubernetes?
Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
What are pods in Kubernetes?
Pods are the smallest, most basic deployable objects in Kubernetes. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod’s resources.
What is a Kubernetes Node?
A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster.
What is a Multi-Node cluster in Kubernetes?
A multi-node cluster in Kubernetes is a setup with multiple nodes, one of which acts as the master node while the rest serve as worker nodes.
What is the use of the Master node?
A master node is the node that controls and manages a set of worker nodes (the workload runtime) and forms the control plane of the cluster in Kubernetes. It also holds the cluster's resource state to determine the proper action for a triggered event; for example, the scheduler figures out which worker node will host a newly scheduled Pod.
What are Ansible roles?
Roles provide a framework for fully independent or interdependent collections of variables, tasks, files, templates, and modules.
In Ansible, the role is the primary mechanism for breaking a playbook into multiple files. This simplifies writing complex playbooks and makes them easier to reuse.
To know more about Kubernetes and some use cases of Ansible, refer to these articles:-
Let’s start with the Task…..
Problem Statement
Task Description📄
🔅 Create an Ansible Role to Configure K8S Multi Node Cluster over AWS Cloud.
🔅 Create an Ansible Playbook to launch 3 AWS EC2 Instances.
🔅 Create an Ansible Playbook to configure Docker over those instances.
🔅 Create a Playbook to configure K8S Master, K8S Worker Nodes on the above created EC2 Instances using kubeadm.
🔅 Upload all the YAML code over your GitHub Repository.
Solution Steps
Step 1:- Set up the Ansible configuration file and the inventory. My setup is built upon a dynamic inventory.
Configuration file ansible.cfg:-
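A minimal sketch of such a configuration, assuming the ec2.py/ec2.ini files live under /etc/ansible/ and the instances run Amazon Linux with the ec2-user account; the inventory path and key file name are placeholders, not the exact values from my setup:

```ini
# ansible.cfg — a minimal sketch; the inventory path, user and key file
# are assumptions and should match your own environment.
[defaults]
inventory         = /etc/ansible/ec2.py
remote_user       = ec2-user
private_key_file  = /etc/ansible/mykey.pem
host_key_checking = False

[privilege_escalation]
become        = True
become_method = sudo
become_user   = root
```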
To set up a dynamic inventory for AWS EC2 instances, download the ec2.py and ec2.ini files from this link to our controller node using the wget command.
Install boto3, the AWS SDK for Python, which ec2.py needs to query your account.
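It can be installed through pip:

```bash
pip3 install boto3
```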
Make these 2 files executable with the following commands:
chmod +x ec2.py
chmod +x ec2.ini
Export the following variables along with their values for your particular AWS account; in my case, I have chosen ap-south-1 as the region.
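A sketch of the exports with placeholder values; these are the standard boto credential variables that ec2.py picks up, and the region is an assumption based on my setup:

```bash
# Placeholder credentials — substitute the values for your own account.
export AWS_ACCESS_KEY_ID='XXXXXXXXXXXXXXXXXXXX'
export AWS_SECRET_ACCESS_KEY='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
# The region (ap-south-1 in my case) can also be restricted in ec2.ini
# via the "regions = ap-south-1" line instead of an environment variable.
```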
After the AWS instance playbook has been run successfully (Step 3), execute ./ec2.py to check whether the instances are reachable via the dynamic inventory. The same output gives us the tag names we will use as hosts in the Master and Worker playbooks.
Step 2:- Create 3 roles using the ansible-galaxy init command (the exact commands are shown after this list), namely:
1. aws_ec2 :- To set up 3 AWS EC2 instances for the multi-node setup.
2. k8s_master :- To set up the Kubernetes master on one of the instances.
3. k8s_worker :- To set up the Kubernetes workers on the remaining instances.
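The role skeletons can be generated like this:

```bash
ansible-galaxy init aws_ec2
ansible-galaxy init k8s_master
ansible-galaxy init k8s_worker
```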
Step 3:- Create a playbook in the aws_ec2 role with the corresponding modules to launch 3 AWS EC2 instances. Run this playbook, and afterwards run the ./ec2.py command to verify the dynamic inventory setup as explained in Step 1.
Vars file of playbook:-
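A sketch of what such a vars file can look like; every ID below is a placeholder for illustration, not the actual value used in the task:

```yaml
# vars/main.yml of the aws_ec2 role (placeholder values)
region: ap-south-1
instance_type: t2.micro
ami_id: ami-0xxxxxxxxxxxxxxxx      # an Amazon Linux 2 AMI for ap-south-1
key_name: mykey
sg_name: k8s-sg
subnet_id: subnet-xxxxxxxx
instance_names:
  - kube_master
  - kube_worker1
  - kube_worker2
```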
Playbook for setup:-
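A sketch using the classic ec2 module; the variable names follow the vars sketch above and are assumptions:

```yaml
# tasks/main.yml of the aws_ec2 role — launches one instance per Name tag
- name: Launch one EC2 instance per name in the list
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    group: "{{ sg_name }}"
    vpc_subnet_id: "{{ subnet_id }}"
    assign_public_ip: yes
    wait: yes
    count: 1
    state: present
    instance_tags:
      Name: "{{ item }}"
  loop: "{{ instance_names }}"
```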
Status at Web UI before running the playbook:-
Run the playbook through the role aws_ec2:-
Status at Web UI after the successful execution of the playbook:-
Step 4:- Create a playbook to set up the master node of the Kubernetes cluster with the following code in the tasks directory of the k8s_master role.
The join token for the workers will be displayed on the screen by the debug module.
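A condensed sketch of those tasks, assuming Amazon Linux 2 hosts on t2.micro instances; the repository and flannel URLs are the standard ones from the time of writing, but treat the exact flags as assumptions, and note that extras such as switching Docker to the systemd cgroup driver are omitted for brevity:

```yaml
# tasks/main.yml of the k8s_master role — a condensed sketch
- name: Install Docker
  package:
    name: docker
    state: present

- name: Configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: Install kubeadm, kubelet and kubectl
  yum:
    name: [kubeadm, kubelet, kubectl]
    state: present

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: Copy the sysctl settings for bridged traffic
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: Reload sysctl
  command: sysctl --system

- name: Initialize the control plane (t2.micro needs the preflight overrides)
  command: >
    kubeadm init --pod-network-cidr=10.244.0.0/16
    --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

- name: Set up kubeconfig for the admin user
  shell: mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config

- name: Install the Flannel network add-on
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the workers
  command: kubeadm token create --print-join-command
  register: join_token

- name: Display the join token
  debug:
    msg: "{{ join_token.stdout }}"
```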
Content of the k8s.conf file present in the files directory of the k8s_master role.
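This is the standard sysctl snippet from the kubeadm installation docs; it makes bridged traffic visible to iptables:

```
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```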
Step 5:- Create the playbook with the following code for the worker nodes in the tasks directory of the k8s_worker role.
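A sketch of the worker tasks under the same assumptions as the master sketch; the prompted token variable comes from the collective playbook in Step 6:

```yaml
# tasks/main.yml of the k8s_worker role — a sketch; the package setup
# mirrors the master role, then the node joins with the prompted token.
- name: Install Docker
  package:
    name: docker
    state: present

- name: Configure the Kubernetes yum repository
  yum_repository:
    name: kubernetes
    description: Kubernetes repo
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: Install kubeadm, kubelet and kubectl
  yum:
    name: [kubeadm, kubelet, kubectl]
    state: present

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: Copy the sysctl settings for bridged traffic
  copy:
    src: k8s.conf
    dest: /etc/sysctl.d/k8s.conf

- name: Reload sysctl
  command: sysctl --system

- name: Join the cluster using the command generated by the master
  command: "{{ token }} --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem"
  register: join_status

- name: Show the join status
  debug:
    msg: "{{ join_status.stdout }}"
```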
The k8s.conf file here has the same content as the one in the master role's files directory.
Step 6:- Now, we will frame a collective playbook to execute all the created roles.
Here, I used the Name tags of the instances to identify the hosts of each play. Also, I have used a prompted variable “token”, so that the join command generated by the master can be entered for the workers to join the cluster.
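A sketch of such a collective playbook; the tag_Name_* group names are the ones ec2.py derives from the Name tags used in the aws_ec2 vars sketch, so adjust them to your own instance names:

```yaml
# main.yml — a sketch of the collective playbook
- hosts: localhost
  roles:
    - aws_ec2
  tasks:
    - name: Re-read the dynamic inventory so the new instances appear
      meta: refresh_inventory

- hosts: tag_Name_kube_master
  roles:
    - k8s_master

- hosts: tag_Name_kube_worker1:tag_Name_kube_worker2
  vars_prompt:
    - name: token
      prompt: "Enter the join command generated by the master"
      private: no
  roles:
    - k8s_worker
```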
Step 7:- Run the playbook and check the cluster status thereafter.
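Assuming the collective playbook is saved as main.yml (a hypothetical name):

```bash
ansible-playbook main.yml
```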
The master has been configured successfully and the join token has been generated. Copy the join token generated by the master into the prompt, so that the workers can join the cluster.
The message displayed by the debug module clearly conveys that our workers have successfully joined the cluster.
Now, let’s check the status of the cluster by logging in to our EC2 master node.
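On the master node, the usual checks are:

```bash
kubectl get nodes
kubectl get pods --all-namespaces
```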
All the 3 nodes, that is, one master and two workers, are up and running. Also, all the necessary pods for the Kubernetes multi-node cluster are running.
The Kubelet service is active and running.
Docker is also active and running.
Hence, the Kubernetes multi-node cluster has been configured successfully using Ansible roles.
The GitHub Repository for the task is:-
The links for my roles on Ansible Galaxy are:-
ansible-galaxy install dracarys0511.aws_ec2
ansible-galaxy install dracarys0511.k8s_master
ansible-galaxy install dracarys0511.k8s_worker
To get these roles on your system, run the above commands in the CLI.