Create Distributed Kubernetes Cluster Using Kubeadm on Ubuntu 20.04

Introduction
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. Originally designed by Google, Kubernetes is now maintained by the Cloud Native Computing Foundation.
In this guide, we will set up a distributed Kubernetes cluster using kubeadm on Ubuntu 20.04.
Kubernetes Cluster Architecture
We will implement a setup with one master node and three worker nodes.
Master Node
The master node runs the Kubernetes control plane processes, including the API server, scheduler, and controllers.
Key components:
- kube-apiserver: Exposes the Kubernetes API.
- kube-scheduler: Assigns pods to nodes.
- kube-controller-manager: Manages controllers like node and replication controllers.
Worker Nodes
Worker nodes contain the kubelet, kube-proxy, and a container runtime to manage and run workloads.
Key components:
- kubelet: Ensures pods are running and configured properly.
- kube-proxy: Maintains network configurations and enables service networking.
- Container runtime: Runs the containers (e.g., Docker).
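Once a node is up, you can see these components directly. The sketch below assumes the setup used in this guide (kubelet managed by systemd, Docker as the container runtime); the exact container names may vary.

```shell
# Check that the kubelet service is healthy on a node.
systemctl status kubelet --no-pager

# kube-proxy runs as a container on each node; with the Docker runtime
# it shows up in the local container list.
docker ps --filter name=kube-proxy
```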
Infrastructure Details
- Master Node: D2s_v3 (2 vCPUs, 8 GB RAM)
- Worker Node 1: D4s_v3 (4 vCPUs, 16 GB RAM)
- Worker Node 2: D4s_v3 (4 vCPUs, 16 GB RAM)
- Worker Node 3: D4s_v3 (4 vCPUs, 16 GB RAM)
All VMs are on the same private virtual network.
Step 1: Prepare Workstation
Ensure SSH access to all nodes and configure Ansible to manage them. Set up the Ansible inventory file:
sudo vim /etc/ansible/hosts
Add the following content:
[masters]
master ansible_host=master_ip ansible_user=k8sadmin
[workers]
worker1 ansible_host=worker1_ip ansible_user=k8sadmin
worker2 ansible_host=worker2_ip ansible_user=k8sadmin
worker3 ansible_host=worker3_ip ansible_user=k8sadmin
[all:vars]
ansible_python_interpreter=/usr/bin/python3
Replace master_ip and worker*_ip with your node IPs.
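Before moving on, confirm that Ansible can reach every node in the inventory:

```shell
# Each node should answer with "pong" if SSH access and Python are set up.
ansible all -m ping
```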
Step 2: Create a User with Sudo Privileges
Create an Ansible playbook to add a user with sudo privileges:
vim create-user.yml
Add the following content:
- hosts: all
  become: yes
  tasks:
    - name: Create user kubeuser
      user:
        name: kubeuser
        state: present
        createhome: yes
        shell: /bin/bash
    - name: Grant passwordless sudo
      lineinfile:
        path: /etc/sudoers
        line: 'kubeuser ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'
    - name: Set up authorized SSH keys
      authorized_key:
        user: kubeuser
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
Run the playbook:
ansible-playbook create-user.yml
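You can verify the user was created on every node with an ad-hoc command:

```shell
# Each node should report kubeuser's uid, gid, and groups.
ansible all -m command -a "id kubeuser"
```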
Step 3: Install Kubernetes on All Nodes
Create a playbook to install Kubernetes components:
vim install-k8s.yml
Add the following content:
- hosts: all
  become: yes
  tasks:
    - name: Install Docker
      apt:
        name: docker.io
        state: present
        update_cache: true
    - name: Add Kubernetes apt signing key
      apt_key:
        url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
        state: present
    - name: Add Kubernetes apt repository
      apt_repository:
        repo: deb https://apt.kubernetes.io/ kubernetes-xenial main
        state: present
    - name: Install Kubernetes packages
      apt:
        name:
          - apt-transport-https
          - kubelet=1.19.3-00
          - kubeadm=1.19.3-00
        state: present
        update_cache: true
- hosts: master
  become: yes
  tasks:
    - name: Install kubectl
      apt:
        name: kubectl=1.19.3-00
        state: present
Run the playbook:
ansible-playbook install-k8s.yml
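Since the playbook pins specific package versions, it is worth holding them so a routine apt upgrade does not move the cluster to an untested release. A quick way to do this across all nodes:

```shell
# Prevent apt from upgrading the pinned Kubernetes packages (-b runs with sudo).
ansible all -b -m command -a "apt-mark hold kubelet kubeadm"
ansible masters -b -m command -a "apt-mark hold kubectl"
```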
Step 4: Configure Master Node
Create a playbook to configure the master node:
vim config-master.yml
Add the following content:
- hosts: master
  become: yes
  tasks:
    - name: Initialize Kubernetes cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.log
      args:
        chdir: /home/kubeuser
        creates: /home/kubeuser/cluster_initialized.log
    - name: Create .kube directory
      file:
        path: /home/kubeuser/.kube
        state: directory
        owner: kubeuser
    - name: Set up kubeconfig
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/kubeuser/.kube/config
        remote_src: yes
        owner: kubeuser
    - name: Install Flannel network
      become_user: kubeuser
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Run the playbook:
ansible-playbook config-master.yml
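Before joining the workers, check that the control plane came up cleanly. On the master, as kubeuser:

```shell
# All control-plane and Flannel pods should eventually reach Running.
# coredns stays Pending until the pod network (Flannel) is ready.
kubectl get pods -n kube-system
```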
Step 5: Configure Worker Nodes
Create a playbook to join worker nodes to the cluster:
vim config-workers.yml
Add the following content:
- hosts: master
  become: yes
  tasks:
    - name: Get join command
      shell: kubeadm token create --print-join-command
      register: join_command
- hosts: workers
  become: yes
  tasks:
    - name: Join the cluster
      shell: "{{ hostvars['master'].join_command.stdout }}"
Run the playbook:
ansible-playbook config-workers.yml
Step 6: Verify the Cluster
SSH into the master node as kubeuser and verify that all nodes have joined:
kubectl get nodes
Expected output:
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1d    v1.19.3
worker1   Ready    <none>   1d    v1.19.3
worker2   Ready    <none>   1d    v1.19.3
worker3   Ready    <none>   1d    v1.19.3
Your Kubernetes cluster is now ready to deploy workloads!
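As a quick smoke test, you can deploy a simple workload and confirm it gets scheduled onto the workers. This sketch uses an nginx deployment as an arbitrary example image:

```shell
# Create a single-replica nginx deployment and expose it on a NodePort.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port 80 --type NodePort

# The pod should land on one of the worker nodes.
kubectl get pods -o wide
kubectl get service nginx
```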