How to install and configure a Kubernetes cluster with the CRI-O runtime and the Cilium network plugin using Ansible
In this article we will set up a Kubernetes cluster using Ansible playbooks. In this setup we are going to use CRI-O as the container runtime and Cilium as the networking plugin.
Test Environment
Fedora 39 server (k8master, k8node)
Here is the project structure for the Kubernetes setup.
admin@fedser:kubernetes$ tree .
.
├── inventory
│   ├── hosts
├── linux_setup_controller.yml
├── linux_setup_worker.yml
├── README.md
└── roles
    ├── linux_add_worker_node
    │   └── tasks
    │       └── main.yml
    ├── linux_configure_cilium
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── linux_configure_firewall
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── linux_configure_kernel
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── linux_export_config
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── linux_generate_token
    │   └── tasks
    │       └── main.yml
    ├── linux_initialize_kubernetes_cluster
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── linux_install_container_runtime
    │   └── tasks
    │       └── main.yml
    ├── linux_install_kubernetes_tools
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── kubernetes.repo
    └── linux_ping
        └── tasks
            └── main.yml
NOTE: The role “linux_ping” can be used to validate SSH connectivity to the managed hosts listed in the inventory file. Here are the details.
admin@fedser:kubernetes$ cat roles/linux_ping/tasks/main.yml
- name: ansible ping pong validation
  ping:
admin@fedser:kubernetes$ cat inventory/hosts
[controller]
k8master.stack.com
[worker]
k8node.stack.com
We will be setting up the Kubernetes cluster on these two nodes. “k8master.stack.com” is going to be our control plane node and “k8node.stack.com” is going to be our worker node. Ensure that the SSH service is enabled and running on these servers and that you have configured SSH key-based authentication.
On both of these servers I have a user named “admin” who is part of the “wheel” administrator group.
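In case key-based authentication is not configured yet, a minimal way to set it up from the Ansible control node (assuming the “admin” user and the hostnames from the inventory above) looks like this:
admin@fedser:~$ ssh-keygen -t ed25519
admin@fedser:~$ ssh-copy-id admin@k8master.stack.com
admin@fedser:~$ ssh-copy-id admin@k8node.stack.com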
Procedure
Step1: Configure Kernel
We need to load the kernel modules ‘br_netfilter’ and ‘overlay’ and update a few kernel parameters so that iptables can see bridged container traffic and IP forwarding is enabled. The following role “linux_configure_kernel” helps in setting up the same.
Let us set the default kernel parameters that we want to configure as shown below.
admin@fedser:kubernetes$ cat roles/linux_configure_kernel/defaults/main.yml
---
sysctl_config:
  net.bridge.bridge-nf-call-iptables: 1
  net.bridge.bridge-nf-call-ip6tables: 1
  net.ipv4.ip_forward: 1
Here we are going to load the required kernel modules and update the kernel parameters.
admin@fedser:kubernetes$ cat roles/linux_configure_kernel/tasks/main.yml
---
- name: Load bridge network filter and overlay modprobe module
  modprobe:
    name: '{{ item }}'
    state: present
    persistent: present # adds module name to /etc/modules-load.d/ and params to /etc/modprobe.d/
  with_items:
    - br_netfilter
    - overlay
- name: Update sysctl parameters
  sysctl:
    name: '{{ item.key }}'
    value: '{{ item.value }}'
    state: present
    reload: yes
  with_dict: '{{ sysctl_config }}'
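As a quick sanity check (not part of the role itself), you can confirm the modules and parameters on one of the nodes, for example:
admin@k8master:~$ lsmod | grep -E 'br_netfilter|overlay'
admin@k8master:~$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward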
Step2: Configure Firewall
As the firewall is enabled by default on a Fedora Linux server, we need to open the required ports on the control plane and worker nodes for the respective components that run on each node.
The following role “linux_configure_firewall” opens the required ports for each component and restarts the firewall service as shown below.
admin@fedser:kubernetes$ cat roles/linux_configure_firewall/defaults/main.yml
---
api_server_port: "6443"
etcd_port: "2379-2380"
kubelet_port: "10250"
scheduler_port: "10259"
controller_manager_port: "10257"
node_services_port: "30000-32767"
server_type: "master"
admin@fedser:kubernetes$ cat roles/linux_configure_firewall/tasks/main.yml
---
- name: expose api server port on kubernetes master node
  firewalld:
    port: "{{ api_server_port }}/tcp"
    permanent: true
    immediate: true
    state: enabled
  when: server_type == "master"
- name: expose etcd ports on kubernetes master node
  firewalld:
    port: "{{ etcd_port }}/tcp"
    permanent: true
    immediate: true
    state: enabled
  when: server_type == "master"
- name: expose kubelet port on kubernetes master node
  firewalld:
    port: "{{ kubelet_port }}/tcp"
    permanent: true
    immediate: true
    state: enabled
  when: server_type == "master"
- name: expose scheduler port on kubernetes master node
  firewalld:
    port: "{{ scheduler_port }}/tcp"
    permanent: true
    immediate: true
    state: enabled
  when: server_type == "master"
- name: expose controller manager port on kubernetes master node
  firewalld:
    port: "{{ controller_manager_port }}/tcp"
    permanent: true
    immediate: true
    state: enabled
  when: server_type == "master"
- name: expose kubelet port on kubernetes worker node
  firewalld:
    port: "{{ kubelet_port }}/tcp"
    permanent: true
    immediate: true
    state: enabled
  when: server_type == "node"
- name: expose nodeport services port range on kubernetes worker node
  firewalld:
    port: "{{ node_services_port }}/tcp"
    permanent: true
    immediate: true
    state: enabled
  when: server_type == "node"
- name: restart firewalld service
  service:
    name: firewalld
    state: restarted
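After the role has run, you can verify that the expected ports are open on the respective node, for example on the control plane node:
admin@k8master:~$ sudo firewall-cmd --list-ports
The output should include the master ports configured above (6443/tcp, 2379-2380/tcp, 10250/tcp, 10259/tcp and 10257/tcp).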
Step3: Install Container Runtime
In this step we are going to install the CRI-O container runtime, which will be used by the kubelet to run containers on the cluster nodes. The following role “linux_install_container_runtime” helps with the container runtime setup.
admin@fedser:kubernetes$ cat roles/linux_install_container_runtime/tasks/main.yml
---
- name: Install crio container runtime
  dnf:
    name: cri-o
    state: present # ensure that package is not updated to latest version
- name: ensure crio service running
  service:
    name: crio
    state: started
    enabled: yes
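A quick check on the node should show the runtime service active and report its version, for example:
admin@k8master:~$ systemctl is-active crio
admin@k8master:~$ crio version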
Step4: Install Kubernetes tools
In this step we are going to use the role “linux_install_kubernetes_tools” to configure the Kubernetes package repository and install the tools required for setting up the cluster with kubeadm. We are also going to disable swap and set SELinux to permissive mode.
admin@fedser:kubernetes$ cat roles/linux_install_kubernetes_tools/templates/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
admin@fedser:kubernetes$ cat roles/linux_install_kubernetes_tools/tasks/main.yml
---
- name: Copy the kubernetes repository file
  template:
    src: kubernetes.repo
    dest: /etc/yum.repos.d/kubernetes.repo
    owner: "root"
    group: "root"
    mode: 0644
- name: Set SELinux in permissive mode
  selinux:
    policy: targeted
    state: permissive
- name: Disable SWAP since kubernetes can't work with swap enabled (1/2)
  shell: |
    swapoff -a
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
- name: Disable zram specific to fedora
  dnf:
    name: zram-generator-defaults
    state: absent
- name: Install kubeadm, kubectl and kubelet
  dnf:
    name: '{{ item }}'
    state: present
    disable_excludes: kubernetes
  with_items:
    - kubeadm
    - kubectl
    - kubelet
- name: Enable and Start kubelet service
  service:
    name: kubelet
    state: started
    enabled: yes
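At this point the tooling can be sanity-checked on the nodes, for example:
admin@k8master:~$ kubeadm version -o short
admin@k8master:~$ kubectl version --client
admin@k8master:~$ swapon --show   # should print nothing if swap is fully disabled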
Step5: Initialize Kubernetes Cluster
Now we are ready to initialize the cluster using the role “linux_initialize_kubernetes_cluster” with the Cilium pod network CIDR range configured in the defaults.
admin@fedser:kubernetes$ cat roles/linux_initialize_kubernetes_cluster/defaults/main.yml
---
#pod_network_cidr: "10.244.0.0/16" # for flannel
pod_network_cidr: "10.1.1.0/24" # for cilium
admin@fedser:kubernetes$ cat roles/linux_initialize_kubernetes_cluster/tasks/main.yml
---
- name: Initialize kubernetes cluster
  shell: |
    kubeadm init --pod-network-cidr={{ pod_network_cidr }}
  register: init_output
- name: Print the initialization output
  debug: msg="{{ init_output }}"
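Note that this task is not idempotent as written; re-running the role would attempt a second kubeadm init. One possible guard (a sketch using the shell module's creates argument, keyed on the admin.conf file that kubeadm generates) would be:
- name: Initialize kubernetes cluster
  shell: |
    kubeadm init --pod-network-cidr={{ pod_network_cidr }}
  args:
    creates: /etc/kubernetes/admin.conf   # skip this task if the cluster is already initialized
  register: init_output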
Step6: Export Cluster Config
Once the Kubernetes cluster is initialized successfully, we can copy admin.conf into the admin user's home directory for authenticating to and communicating with the cluster.
admin@fedser:kubernetes$ cat roles/linux_export_config/defaults/main.yml
---
kubernetes_admin_user: admin
kubernetes_admin_group: admin
admin@fedser:kubernetes$ cat roles/linux_export_config/tasks/main.yml
---
- name: ensure .kube directory exists
  file:
    path: /home/{{ kubernetes_admin_user }}/.kube
    state: directory
    owner: "{{ kubernetes_admin_user }}"
    group: "{{ kubernetes_admin_group }}"
    mode: '0755'
- name: Copy kubectl config
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/{{ kubernetes_admin_user }}/.kube/config
    owner: "{{ kubernetes_admin_user }}"
    group: "{{ kubernetes_admin_group }}"
    mode: '0644'
    remote_src: yes
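With the config in place, the admin user on the control plane node should be able to reach the API server, for example:
admin@k8master:~$ kubectl cluster-info
admin@k8master:~$ kubectl get nodes
The node will typically report NotReady until the network plugin is installed in the next step.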
Step7: Configure Cilium Networking
Now it's time to set up Kubernetes container networking by installing the Cilium networking plugin as shown below using the role “linux_configure_cilium”.
admin@fedser:kubernetes$ cat roles/linux_configure_cilium/defaults/main.yml
---
cilium_cli_version: v0.15.18
cli_arch: amd64
cilium_repo: https://github.com/cilium/cilium-cli/releases/download/{{cilium_cli_version}}/cilium-linux-{{cli_arch}}.tar.gz
admin@fedser:kubernetes$ cat roles/linux_configure_cilium/tasks/main.yml
---
- name: download and extract cilium
  unarchive:
    src: "{{ cilium_repo }}"
    dest: /usr/local/bin
    remote_src: yes
- name: install cilium
  become: true
  become_user: admin
  command: "/usr/local/bin/cilium install"
  register: kubernetes_cilium_install
- name: print the cilium install status
  debug:
    msg: "{{ kubernetes_cilium_install.stdout_lines }}"
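Once the install task completes, the Cilium deployment can be checked with the CLI and with kubectl, for example:
admin@k8master:~$ cilium status --wait
admin@k8master:~$ kubectl -n kube-system get pods -l k8s-app=cilium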
Step8: Generate Worker Node Joining Token
Once the Kubernetes cluster is initialized and the networking setup is complete, we can generate the worker node join token using the role “linux_generate_token” as shown below. We copy the generated join command to the Ansible control node so that we can later execute it on the worker node to join it to the cluster.
admin@fedser:kubernetes$ cat roles/linux_generate_token/tasks/main.yml
---
- name: get the token for joining the worker nodes
  shell: kubeadm token create --print-join-command
  register: kubernetes_join_command
- name: print the kubernetes node join command
  debug:
    msg: "{{ kubernetes_join_command.stdout_lines }}"
- name: copy join command to local file.
  local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
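For reference, the saved join command typically looks like the following (the token and hash below are placeholders; the real values come from the kubeadm output):
kubeadm join k8master.stack.com:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>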
Step9: Add Worker Node
Now it's time to add the worker node to the cluster using the role “linux_add_worker_node” as shown below.
admin@fedser:kubernetes$ cat roles/linux_add_worker_node/tasks/main.yml
- name: copy join command from Ansible host to the worker nodes.
  copy:
    src: /tmp/kubernetes_join_command
    dest: /tmp/kubernetes_join_command
    mode: 0777
- name: join the worker nodes to the cluster.
  command: bash /tmp/kubernetes_join_command
  register: join_status
- name: Print the join status
  debug: msg="{{ join_status }}"
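Once the join command succeeds, the new worker should show up on the control plane node, for example:
admin@k8master:~$ kubectl get nodes -o wide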
Step10: Control Plane and Worker Node Playbooks
Here are the playbooks for setting up the control plane and worker node as shown below.
admin@fedser:kubernetes$ cat linux_setup_controller.yml
---
- hosts: "controller"
serial: 1
become: true
become_user: root
roles:
- { role: "linux_ping", tags: "linux_ping" }
- { role: "linux_configure_kernel", tags: "linux_configure_kernel" }
- { role: "linux_configure_firewall", tags: "linux_configure_firewall" }
- { role: "linux_install_container_runtime", tags: "linux_install_container_runtime" }
- { role: "linux_install_kubernetes_tools", tags: "linux_install_kubernetes_tools" }
- { role: "linux_initialize_kubernetes_cluster", tags: "linux_initialize_kubernetes_cluster" }
- { role: "linux_export_config", tags: "linux_export_config" } # export the cluster config to admin user home directory for management
- { role: "linux_configure_cilium", tags: "linux_configure_cilium", become: true, become_user: admin }
- { role: "linux_generate_token", tags: "linux_generate_token", become: true, become_user: admin }
admin@fedser:kubernetes$ cat linux_setup_worker.yml
---
- hosts: "worker"
serial: 1
become: true
become_user: root
roles:
- { role: "linux_ping", tags: "linux_ping" }
- { role: "linux_configure_kernel", tags: "linux_configure_kernel" }
- { role: "linux_configure_firewall", tags: "linux_configure_firewall" }
- { role: "linux_install_container_runtime", tags: "linux_install_container_runtime" }
- { role: "linux_install_kubernetes_tools", tags: "linux_install_kubernetes_tools" }
- { role: "linux_add_worker_node", tags: "linux_add_worker_node" }
Step11: README Instructions
admin@fedser:kubernetes$ cat README.md
# Instructions for controlplane execution
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_ping" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_configure_kernel" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_configure_firewall" --extra-vars "server_type=master" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_install_container_runtime" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_install_kubernetes_tools" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_initialize_kubernetes_cluster" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_export_config" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_configure_cilium" -v
ansible-playbook linux_setup_controller.yml -i inventory/hosts --tags "linux_generate_token" -v
# Instructions for worker node execution
ansible-playbook linux_setup_worker.yml -i inventory/hosts --tags "linux_ping" -v
ansible-playbook linux_setup_worker.yml -i inventory/hosts --tags "linux_configure_kernel" -v
ansible-playbook linux_setup_worker.yml -i inventory/hosts --tags "linux_configure_firewall" --extra-vars "server_type=node" -v
ansible-playbook linux_setup_worker.yml -i inventory/hosts --tags "linux_install_container_runtime" -v
ansible-playbook linux_setup_worker.yml -i inventory/hosts --tags "linux_install_kubernetes_tools" -v
ansible-playbook linux_setup_worker.yml -i inventory/hosts --tags "linux_add_worker_node" -v
Step12: Validate Kubernetes Cluster Setup
We can validate the Kubernetes cluster by checking the node and component status as shown below.
admin@k8master:~$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
k8master.stack.com   Ready    control-plane   28d   v1.28.5
k8node.stack.com     Ready    <none>          28d   v1.28.5
admin@k8master:~$ kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok
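As an additional check, all system pods, including the Cilium agent pods, should be in the Running state, for example:
admin@k8master:~$ kubectl get pods -n kube-system -o wide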
Hope you enjoyed reading this article. Thank you.