How to provision a Kubernetes cluster with kubeadm and Ansible on Fedora 35 Workstation
In this setup we are going to provision a Kubernetes cluster with one master and one worker node. To provision the cluster using Ansible, we need to make sure that the Ansible controller node is able to SSH to both the master and worker node using key-based (i.e. passwordless) authentication.
Test Environment
Fedora 35 Workstation installed on the Master and Worker nodes
Server Setup
Here are the details of the servers I am using for this provisioning. The Ansible controller is my local machine running Fedora 35, and the other two nodes, ‘fedkubemaster’ and ‘fedkubenode’, are virtual machines provisioned using KVM virtualization.
- Ansible Controller – fedser32 (user – admin)
- Kubernetes Control Plane – fedkubemaster (user – admin)
- Kubernetes Worker Node – fedkubenode (user – admin)
[admin@fedser32 ansible]$ virsh list --all
Id Name State
--------------------------------
4 fedkubemaster running
5 fedkubenode running
If you prefer to watch a video, here is the YouTube video covering the same step-by-step procedure outlined below.
Procedure
Step1: Start and Enable the sshd service on Master and Worker Node
We need to make sure that the nodes we want to manage using Ansible have the sshd service started and enabled, so that we can SSH to these managed nodes from a remote host. If the sshd service is not available at all, you may have to install the openssh-server package to turn the Linux system into an SSH server.
[admin@fedkubemaster ~]$ sudo systemctl start sshd.service
[admin@fedkubemaster ~]$ sudo systemctl enable sshd.service
[admin@fedkubenode ~]$ sudo systemctl start sshd.service
[admin@fedkubenode ~]$ sudo systemctl enable sshd.service
Fedora now enables swap on ZRAM by default. To disable it permanently, we need to remove the package which generates its configuration and restart the managed nodes.
[admin@fedkubemaster ~]$ sudo dnf remove zram-generator-defaults
Restart the ‘fedkubemaster’ and ‘fedkubenode’ nodes so that swap stays disabled permanently.
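After the reboot, a quick way to confirm that swap is fully gone on each node (not part of the original steps, just a sanity check) is to run ‘swapon --show’, which should print nothing, and ‘free -h’, which should report 0B of swap.
[admin@fedkubemaster ~]$ swapon --show
[admin@fedkubemaster ~]$ free -h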
Step2: Configure ssh key based authentication for managed nodes
As we want to provision the Kubernetes master and worker nodes (i.e. the managed nodes) using Ansible, we need to make sure that we are able to SSH to the managed nodes using SSH key pairs. For this we generate an SSH key pair and copy the public key to the remote managed nodes.
For this SSH key based setup I have already created a user ‘admin’ on the ‘fedkubemaster’ and ‘fedkubenode’ managed nodes. My Ansible controller also has an ‘admin’ user, with which I am managing the nodes. Here is a basic bash script which we can use to generate the SSH key pair and copy the public key to the managed nodes.
The nodes’ FQDNs are hard-coded in the script; update them with the server names that you want to manage.
[admin@fedser32 ansible]$ cat sshkeypairsetup.sh
#!/bin/bash
echo "Home Directory : $HOME"
user=$(echo "$HOME" | awk -F"/" '{print $NF}')
echo "$user"
nodes=("fedkubemaster" "fedkubenode") # update with the FQDNs of the servers that you want to manage
#nodes=("fedser35")

copypublickey()
{
    echo "Copy the public ssh key for user : $user to remote nodes to manage"
    for node in "${nodes[@]}"; do
        ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "$user@$node"
    done
}

if [[ -f $HOME/.ssh/id_rsa ]] && [[ -f $HOME/.ssh/id_rsa.pub ]]; then
    echo "SSH key pair available for user : $user"
    copypublickey
else
    echo "Generate SSH key pair for user : $user"
    ssh-keygen -t rsa -q -N '' -f "$HOME/.ssh/id_rsa"
    copypublickey
fi
[admin@fedser32 ansible]$ ./sshkeypairsetup.sh
Once the public keys have been copied to the managed nodes, you can verify your SSH access; it should not prompt for a password.
[admin@fedser32 ansible]$ ssh admin@fedkubemaster
Last login: Fri Jan 28 11:10:32 2022 from 192.168.122.1
[admin@fedkubemaster ~]$
[admin@fedser32 ansible]$ ssh admin@fedkubenode
Last login: Fri Jan 28 11:08:23 2022
[admin@fedkubenode ~]$
Step3: Create an ansible inventory file to manage the kubernetes nodes
Here I am preparing a custom inventory.txt file with the ‘fedkubemaster’ and ‘fedkubenode’ entries which I would like to manage.
[admin@fedser32 kubernetes_setup]$ cat inventory.txt
[kubernetes]
fedkubemaster
fedkubenode
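If the user on the managed nodes ever differs from the user on the controller, connection variables can be set for the whole group in the same inventory file. This is just an optional variant; the ‘admin’ user matches the setup above.
[kubernetes]
fedkubemaster
fedkubenode

[kubernetes:vars]
ansible_user=admin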
Verify you are able to connect to the remote managed nodes and execute an ansible module for validation.
[admin@fedser32 kubernetes_setup]$ ansible kubernetes -i ./inventory.txt -a "/bin/echo 'Hello Ansible'"
fedkubemaster | CHANGED | rc=0 >>
Hello Ansible
fedkubenode | CHANGED | rc=0 >>
Hello Ansible
Step4: Install Docker container runtime on managed nodes
On each of the managed nodes we need to install a container runtime; here we are going to install the Docker container runtime. Below is the YAML definition in which we configure the Docker repository, install the required packages, and start the docker service.
[admin@fedser32 kubernetes_setup]$ cat install_docker.yml
---
- name: Docker runtime setup on managed nodes
  hosts: kubernetes
  become: true
  become_user: root
  tasks:
    - name: Install pre-requisite packages
      dnf: name={{ item }} state=present
      with_items:
        - dnf-plugins-core
    - name: Add Docker repository
      command: "dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo"
    - name: Install Docker Engine
      dnf: name={{ item }} state=present
      with_items:
        - docker-ce
        - docker-ce-cli
        - containerd.io
    - name: Ensure group "docker" exists
      group:
        name: docker
        state: present
    - name: Add the user 'admin' to the 'docker' group
      user:
        name: admin
        group: admin
        groups: docker
        append: yes
    - name: Reload systemd daemon
      systemd:
        daemon_reload: yes
    - name: Enable and Start Docker service
      service:
        name: docker
        enabled: yes
        state: started
    - name: Validate Docker installation
      command: "docker run hello-world"
[admin@fedser32 kubernetes_setup]$ ansible-playbook -i inventory.txt install_docker.yml -K
BECOME password:
PLAY [Docker runtime setup on managed nodes] *****************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [fedkubemaster]
ok: [fedkubenode]
TASK [Install pre-requisite packages] ************************************************************************************************************************
ok: [fedkubemaster] => (item=['dnf-plugins-core'])
ok: [fedkubenode] => (item=['dnf-plugins-core'])
TASK [Add Docker repository] *********************************************************************************************************************************
[WARNING]: Consider using the dnf module rather than running 'dnf'. If you need to use command because dnf is insufficient you can add 'warn: false' to this
command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [fedkubenode]
changed: [fedkubemaster]
TASK [Install Docker Engine] *********************************************************************************************************************************
ok: [fedkubemaster] => (item=['docker-ce', 'docker-ce-cli', 'containerd.io'])
ok: [fedkubenode] => (item=['docker-ce', 'docker-ce-cli', 'containerd.io'])
TASK [Ensure group "docker" exists] **************************************************************************************************************************
ok: [fedkubemaster]
ok: [fedkubenode]
TASK [Add the user 'admin' to the 'docker' group] ************************************************************************************************************
ok: [fedkubemaster]
ok: [fedkubenode]
TASK [Reload systemd daemon] *********************************************************************************************************************************
ok: [fedkubemaster]
ok: [fedkubenode]
TASK [Enable and Start Docker service] ***********************************************************************************************************************
ok: [fedkubenode]
ok: [fedkubemaster]
TASK [Validate Docker installation] **************************************************************************************************************************
changed: [fedkubemaster]
changed: [fedkubenode]
PLAY RECAP ***************************************************************************************************************************************************
fedkubemaster : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
fedkubenode : ok=9 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Step5: Load kernel modules and update kernel parameters
We need to load the ‘br_netfilter’ and ‘overlay’ kernel modules and update a few kernel parameters so that iptables can see the traffic from the bridge network of the docker service. Here is the YAML definition file which we are going to use to load the required modules and update the kernel parameters.
[admin@fedser32 kubernetes_setup]$ cat kernel_setup.yml
---
- name: Module and Kernel Parameter setup
  hosts: kubernetes
  become: true
  become_user: root
  vars:
    sysctl_config:
      net.bridge.bridge-nf-call-iptables: 1
      net.bridge.bridge-nf-call-ip6tables: 1
      net.ipv4.ip_forward: 1
  tasks:
    - name: Load bridge network filter and overlay modprobe module
      modprobe:
        name: '{{ item }}'
        state: present
      with_items:
        - br_netfilter
        - overlay
    - name: Update sysctl parameters
      sysctl:
        name: '{{ item.key }}'
        value: '{{ item.value }}'
        state: present
        reload: yes
      with_dict: '{{ sysctl_config }}'
[admin@fedser32 kubernetes_setup]$ ansible-playbook -i inventory.txt kernel_setup.yml -K
BECOME password:
PLAY [Module and Kernel Parameter setup] *********************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [fedkubenode]
ok: [fedkubemaster]
TASK [Load bridge network filter and overlay modprobe module] ************************************************************************************************
ok: [fedkubemaster] => (item=br_netfilter)
ok: [fedkubenode] => (item=br_netfilter)
ok: [fedkubenode] => (item=overlay)
ok: [fedkubemaster] => (item=overlay)
TASK [Update sysctl parameters] ******************************************************************************************************************************
ok: [fedkubenode] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
ok: [fedkubemaster] => (item={'key': 'net.bridge.bridge-nf-call-iptables', 'value': 1})
ok: [fedkubenode] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
ok: [fedkubemaster] => (item={'key': 'net.bridge.bridge-nf-call-ip6tables', 'value': 1})
ok: [fedkubenode] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})
ok: [fedkubemaster] => (item={'key': 'net.ipv4.ip_forward', 'value': 1})
PLAY RECAP ***************************************************************************************************************************************************
fedkubemaster : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
fedkubenode : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
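One caveat with the play above: the sysctl module persists the parameters to /etc/sysctl.conf, but modprobe only loads the modules for the current boot. If you want ‘br_netfilter’ and ‘overlay’ to survive a reboot, you could add a task along these lines (a sketch; the file name k8s.conf is my choice):
    - name: Persist kernel modules across reboots
      copy:
        dest: /etc/modules-load.d/k8s.conf
        content: |
          br_netfilter
          overlay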
Step6: Installing kubeadm, kubelet and kubectl
Now we are ready to install the kubeadm, kubelet and kubectl tools. On Fedora we also need to set SELinux to permissive mode.
[admin@fedser32 kubernetes_setup]$ cat install_kubernetes.yml
---
- name: Install kubernetes tools
  hosts: kubernetes
  become: true
  become_user: root
  tasks:
    - name: Copy the kubernetes repository file
      copy:
        src: /home/admin/middleware/stack/ansible/playbooks/kubernetes_setup/kubernetes.repo
        dest: /etc/yum.repos.d/kubernetes.repo
    - name: Set SELinux in permissive mode
      selinux:
        policy: targeted
        state: permissive
    - name: Install kubeadm, kubectl and kubelet
      dnf:
        name: '{{ item }}'
        state: present
        disable_excludes: kubernetes
      with_items:
        - kubeadm
        - kubectl
        - kubelet
    - name: Enable and Start kubelet service
      service:
        name: kubelet
        state: started
        enabled: yes
    - name: Disable SWAP since kubernetes can't work with swap enabled
      shell: |
        swapoff -a
        sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    - name: Disable zram specific to fedora
      dnf:
        name: zram-generator-defaults
        state: absent
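The kubernetes.repo file referenced in the copy task is not shown in this article. At the time of writing (Kubernetes v1.23) it would have looked roughly like the snippet below, based on the Google-hosted yum repository that the official docs recommended back then; that repository has since been deprecated in favour of pkgs.k8s.io, so treat this purely as an illustration and adjust the URLs to the current instructions. The ‘exclude’ line is what makes the ‘disable_excludes: kubernetes’ option in the install task necessary.
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl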
[admin@fedser32 kubernetes_setup]$ ansible-playbook -i inventory.txt install_kubernetes.yml -K
BECOME password:
PLAY [Install kubernetes tools] ******************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [fedkubenode]
ok: [fedkubemaster]
TASK [Copy the kubernetes repository file] *******************************************************************************************************************
changed: [fedkubemaster]
changed: [fedkubenode]
TASK [Set SELinux in permissive mode] ************************************************************************************************************************
ok: [fedkubemaster]
ok: [fedkubenode]
TASK [Install kubeadm, kubectl and kubelet] ******************************************************************************************************************
changed: [fedkubemaster] => (item=['kubeadm', 'kubectl', 'kubelet'])
changed: [fedkubenode] => (item=['kubeadm', 'kubectl', 'kubelet'])
TASK [Enable and Start kubelet service] **********************************************************************************************************************
changed: [fedkubemaster]
changed: [fedkubenode]
TASK [Disable SWAP since kubernetes can't work with swap enabled (1/2)] **************************************************************************************
changed: [fedkubemaster]
changed: [fedkubenode]
TASK [Disable zram specific to fedora] ***********************************************************************************************************************
ok: [fedkubenode]
ok: [fedkubemaster]
PLAY RECAP ***************************************************************************************************************************************************
fedkubemaster : ok=7 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
fedkubenode : ok=7 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Step7: Initialise the kubernetes cluster
We are now ready to initialise our Kubernetes cluster using the kubeadm tool. Here is my YAML definition file, which initialises the cluster with the pod CIDR range provided. Once the cluster is initialised successfully, the output gives you the instructions to set up your kubectl configuration and the command to join a worker node to the cluster.
[admin@fedser32 kubernetes_setup]$ cat initialize_kubernetes_cluster.yml
---
- name: Install kubernetes tools
  hosts: fedkubemaster
  become: true
  become_user: root
  tasks:
    - name: Initialize kubernetes cluster
      shell: |
        kubeadm init --pod-network-cidr=10.244.0.0/16
      register: init_output
    - name: Print the initialization output
      debug: msg="{{ init_output }}"
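If this play is ever re-run against a control plane that is already initialised, ‘kubeadm init’ will fail on the pre-flight checks. One hedged way to make the task skip in that case is a creates guard keyed on the admin kubeconfig that a successful init writes:
    - name: Initialize kubernetes cluster
      shell: |
        kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf
      register: init_output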
[admin@fedser32 kubernetes_setup]$ ansible-playbook -i inventory.txt initialize_kubernetes_cluster.yml -K
BECOME password:
PLAY [Install kubernetes tools] ******************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [fedkubemaster]
TASK [Initialize kubernetes cluster] *************************************************************************************************************************
changed: [fedkubemaster]
TASK [Print the initialization output] ***********************************************************************************************************************
ok: [fedkubemaster] => {
"msg": {
"changed": true,
"cmd": "kubeadm init --pod-network-cidr=10.244.0.0/16\n",
"delta": "0:00:11.197485",
"end": "2022-01-28 16:43:28.034899",
"failed": false,
"rc": 0,
"start": "2022-01-28 16:43:16.837414",
"stderr": "\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly",
"stderr_lines": [
"\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly"
],
"stdout": "[init] Using Kubernetes version: v1.23.3\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/etc/kubernetes/pki\"\n[certs] Generating \"ca\" certificate and key\n[certs] Generating \"apiserver\" certificate and key\n[certs] apiserver serving cert is signed for DNS names [fedkubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.161]\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [fedkubemaster localhost] and IPs [192.168.122.161 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [fedkubemaster localhost] and IPs [192.168.122.161 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s\n[apiclient] All control plane components are healthy after 6.003001 seconds\n[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace\n[kubelet] Creating a ConfigMap \"kubelet-config-1.23\" in namespace kube-system with the configuration for the kubelets in the cluster\nNOTE: The \"kubelet-config-1.23\" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just \"kubelet-config\". Kubeadm upgrade will handle this transition transparently.\n[upload-certs] Skipping phase. 
Please see --upload-certs\n[mark-control-plane] Marking the node fedkubemaster as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]\n[mark-control-plane] Marking the node fedkubemaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]\n[bootstrap-token] Using token: 1gvrr5.9cjofpa0hvvszxlv\n[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes\n[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials\n[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token\n[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster\n[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace\n[kubelet-finalize] Updating \"/etc/kubernetes/kubelet.conf\" to point to a rotatable kubelet client certificate and key\n[addons] Applied essential addon: CoreDNS\n[addons] Applied essential addon: kube-proxy\n\nYour Kubernetes control-plane has initialized successfully!\n\nTo start using your cluster, you need to run the following as a regular user:\n\n mkdir -p $HOME/.kube\n sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config\n sudo chown $(id -u):$(id -g) $HOME/.kube/config\n\nAlternatively, if you are the root user, you can run:\n\n export KUBECONFIG=/etc/kubernetes/admin.conf\n\nYou should now deploy a pod network to the cluster.\nRun \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:\n https://kubernetes.io/docs/concepts/cluster-administration/addons/\n\nThen you can join any number of worker nodes by running the following on each as root:\n\nkubeadm join 192.168.122.161:6443 --token 1gvrr5.9cjofpa0hvvszxlv \\\n\t--discovery-token-ca-cert-hash sha256:0f438a6373cb70f5b1cbd922bf4aff69e482a5c1513dda297e24636b33b27667 ",
"stdout_lines": [
"[init] Using Kubernetes version: v1.23.3",
"[preflight] Running pre-flight checks",
"[preflight] Pulling images required for setting up a Kubernetes cluster",
"[preflight] This might take a minute or two, depending on the speed of your internet connection",
"[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'",
"[certs] Using certificateDir folder \"/etc/kubernetes/pki\"",
"[certs] Generating \"ca\" certificate and key",
"[certs] Generating \"apiserver\" certificate and key",
"[certs] apiserver serving cert is signed for DNS names [fedkubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.161]",
"[certs] Generating \"apiserver-kubelet-client\" certificate and key",
"[certs] Generating \"front-proxy-ca\" certificate and key",
"[certs] Generating \"front-proxy-client\" certificate and key",
"[certs] Generating \"etcd/ca\" certificate and key",
"[certs] Generating \"etcd/server\" certificate and key",
"[certs] etcd/server serving cert is signed for DNS names [fedkubemaster localhost] and IPs [192.168.122.161 127.0.0.1 ::1]",
"[certs] Generating \"etcd/peer\" certificate and key",
"[certs] etcd/peer serving cert is signed for DNS names [fedkubemaster localhost] and IPs [192.168.122.161 127.0.0.1 ::1]",
"[certs] Generating \"etcd/healthcheck-client\" certificate and key",
"[certs] Generating \"apiserver-etcd-client\" certificate and key",
"[certs] Generating \"sa\" key and public key",
"[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"",
"[kubeconfig] Writing \"admin.conf\" kubeconfig file",
"[kubeconfig] Writing \"kubelet.conf\" kubeconfig file",
"[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file",
"[kubeconfig] Writing \"scheduler.conf\" kubeconfig file",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Starting the kubelet",
"[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"",
"[control-plane] Creating static Pod manifest for \"kube-apiserver\"",
"[control-plane] Creating static Pod manifest for \"kube-controller-manager\"",
"[control-plane] Creating static Pod manifest for \"kube-scheduler\"",
"[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"",
"[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\". This can take up to 4m0s",
"[apiclient] All control plane components are healthy after 6.003001 seconds",
"[upload-config] Storing the configuration used in ConfigMap \"kubeadm-config\" in the \"kube-system\" Namespace",
"[kubelet] Creating a ConfigMap \"kubelet-config-1.23\" in namespace kube-system with the configuration for the kubelets in the cluster",
"NOTE: The \"kubelet-config-1.23\" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just \"kubelet-config\". Kubeadm upgrade will handle this transition transparently.",
"[upload-certs] Skipping phase. Please see --upload-certs",
"[mark-control-plane] Marking the node fedkubemaster as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]",
"[mark-control-plane] Marking the node fedkubemaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]",
"[bootstrap-token] Using token: 1gvrr5.9cjofpa0hvvszxlv",
"[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles",
"[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes",
"[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials",
"[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token",
"[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster",
"[bootstrap-token] Creating the \"cluster-info\" ConfigMap in the \"kube-public\" namespace",
"[kubelet-finalize] Updating \"/etc/kubernetes/kubelet.conf\" to point to a rotatable kubelet client certificate and key",
"[addons] Applied essential addon: CoreDNS",
"[addons] Applied essential addon: kube-proxy",
"",
"Your Kubernetes control-plane has initialized successfully!",
"",
"To start using your cluster, you need to run the following as a regular user:",
"",
" mkdir -p $HOME/.kube",
" sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
" sudo chown $(id -u):$(id -g) $HOME/.kube/config",
"",
"Alternatively, if you are the root user, you can run:",
"",
" export KUBECONFIG=/etc/kubernetes/admin.conf",
"",
"You should now deploy a pod network to the cluster.",
"Run \"kubectl apply -f [podnetwork].yaml\" with one of the options listed at:",
" https://kubernetes.io/docs/concepts/cluster-administration/addons/",
"",
"Then you can join any number of worker nodes by running the following on each as root:",
"",
"kubeadm join 192.168.122.161:6443 --token 1gvrr5.9cjofpa0hvvszxlv \\",
"\t--discovery-token-ca-cert-hash sha256:0f438a6373cb70f5b1cbd922bf4aff69e482a5c1513dda297e24636b33b27667 "
]
}
}
PLAY RECAP ***************************************************************************************************************************************************
fedkubemaster : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Step8: Configure the kubectl configuration
As per the instructions in the Kubernetes cluster initialisation output, we copy admin.conf to $HOME/.kube/config so that our kubectl client can communicate with the Kubernetes cluster.
[admin@fedser32 kubernetes_setup]$ cat configure_kubectl.yml
---
- name: Configure kubectl
  hosts: fedkubemaster
  tasks:
    - name: Create a directory if it does not exist
      file:
        path: $HOME/.kube
        state: directory
        mode: '0755'
    - name: copies admin.conf to user's kube config
      become: true
      become_user: root
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/admin/.kube/config
        remote_src: yes
        owner: admin
        group: admin
[admin@fedser32 kubernetes_setup]$ ansible-playbook -i inventory.txt configure_kubectl.yml -K
BECOME password:
PLAY [Configure kubectl] *************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [fedkubemaster]
TASK [Create a directory if it does not exist] ***************************************************************************************************************
ok: [fedkubemaster]
TASK [copies admin.conf to user's kube config] ***************************************************************************************************************
changed: [fedkubemaster]
PLAY RECAP ***************************************************************************************************************************************************
fedkubemaster : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
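As a quick sanity check (not part of the original steps), kubectl on the control-plane node should now be able to reach the API server; note that the node will most likely report NotReady until the pod network add-on is deployed in the next step.
[admin@fedkubemaster ~]$ kubectl cluster-info
[admin@fedkubemaster ~]$ kubectl get nodes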
Step9: Apply the flannel network add-on
We need to deploy a pod network add-on for our Kubernetes cluster; here I am using the flannel add-on to set up the cluster network. Here is the YAML definition file for it. I also print the join command which I will use to join the worker node; we store the join command in a file and copy it back to the controller node.
[admin@fedser32 kubernetes_setup]$ cat deploy_flannel.yml
---
- name: Deploy flannel networking policy
  hosts: fedkubemaster
  tasks:
    - name: Apply the flannel networking policy definition
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    - name: Get the token for joining the worker nodes
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command
    - name: Print the kubernetes node join command
      debug:
        msg: "{{ kubernetes_join_command.stdout }}"
    - name: Copy join command to local file.
      local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
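The local_action form works, but the same task can be written with delegate_to, which is the more common idiom in newer playbooks; an equivalent sketch:
    - name: Copy join command to local file.
      copy:
        content: "{{ kubernetes_join_command.stdout_lines[0] }}"
        dest: /tmp/kubernetes_join_command
        mode: '0777'
      delegate_to: localhost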
[admin@fedser32 kubernetes_setup]$ ansible-playbook -i inventory.txt deploy_flannel.yml
PLAY [Deploy flannel networking policy] **********************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [fedkubemaster]
TASK [Apply the flannel networking policy definition] ********************************************************************************************************
changed: [fedkubemaster]
TASK [Get the token for joining the worker nodes] ************************************************************************************************************
changed: [fedkubemaster]
TASK [Print the kubernetes node join command] ****************************************************************************************************************
ok: [fedkubemaster] => {
"msg": "kubeadm join 192.168.122.161:6443 --token x5kbfk.fx4tc0idrqumldgt --discovery-token-ca-cert-hash sha256:0f438a6373cb70f5b1cbd922bf4aff69e482a5c1513dda297e24636b33b27667 "
}
TASK [Copy join command to local file.] **********************************************************************************************************************
changed: [fedkubemaster -> localhost]
PLAY RECAP ***************************************************************************************************************************************************
fedkubemaster : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Step10: Join the worker node
Now, let’s join the worker node by copying the join command that we saved on the Ansible controller over to the worker node and executing it.
[admin@fedser32 kubernetes_setup]$ cat join_worker_node.yml
---
- name: Join the worker node
  hosts: fedkubenode
  become: yes
  become_user: root
  tasks:
    - name: Copy join command from Ansible host to the worker nodes.
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777
    - name: Join the Worker nodes to the cluster.
      command: bash /tmp/kubernetes_join_command
      register: join_status
    - name: Print the join status
      debug: msg="{{ join_status }}"
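Like the init play, the join task is not idempotent; if you expect to re-run it, a creates guard keyed on the kubelet.conf that a successful join writes is one hedged option:
    - name: Join the Worker nodes to the cluster.
      command: bash /tmp/kubernetes_join_command
      args:
        creates: /etc/kubernetes/kubelet.conf
      register: join_status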
[admin@fedser32 kubernetes_setup]$ ansible-playbook -i inventory.txt join_worker_node.yml -K
BECOME password:
PLAY [Join the worker node] **********************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************
ok: [fedkubenode]
TASK [Copy join command from Ansible host to the worker nodes.] **********************************************************************************************
ok: [fedkubenode]
TASK [Join the Worker nodes to the cluster.] *****************************************************************************************************************
changed: [fedkubenode]
TASK [Print the join status] *********************************************************************************************************************************
ok: [fedkubenode] => {
"msg": {
"changed": true,
"cmd": [
"bash",
"/tmp/kubernetes_join_command"
],
"delta": "0:00:07.031333",
"end": "2022-01-28 17:18:47.804148",
"failed": false,
"rc": 0,
"start": "2022-01-28 17:18:40.772815",
"stderr": "W0128 17:18:41.959614 5915 utils.go:69] The recommended value for \"resolvConf\" in \"KubeletConfiguration\" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf",
"stderr_lines": [
"W0128 17:18:41.959614 5915 utils.go:69] The recommended value for \"resolvConf\" in \"KubeletConfiguration\" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf"
],
"stdout": "[preflight] Running pre-flight checks\n[preflight] Reading configuration from the cluster...\n[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Starting the kubelet\n[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...\n\nThis node has joined the cluster:\n* Certificate signing request was sent to apiserver and a response was received.\n* The Kubelet was informed of the new secure connection details.\n\nRun 'kubectl get nodes' on the control-plane to see this node join the cluster.",
"stdout_lines": [
"[preflight] Running pre-flight checks",
"[preflight] Reading configuration from the cluster...",
"[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'",
"[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"",
"[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"",
"[kubelet-start] Starting the kubelet",
"[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...",
"",
"This node has joined the cluster:",
"* Certificate signing request was sent to apiserver and a response was received.",
"* The Kubelet was informed of the new secure connection details.",
"",
"Run 'kubectl get nodes' on the control-plane to see this node join the cluster."
]
}
}
PLAY RECAP ***************************************************************************************************************************************************
fedkubenode : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
If everything goes fine, the worker node should have joined and both nodes should be listed in the Ready state with the command below.
[admin@fedkubemaster ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
fedkubemaster Ready control-plane,master 14h v1.23.3
fedkubenode Ready <none> 15m v1.23.3
If your nodes show NotReady even though you have applied the flannel network add-on, check the kubelet service logs for initialisation errors. In my case I was hitting the issue below, where the kubelet was unable to validate the CNI config list because it could not find the portmap plugin. Here is the error.
Error
[admin@fedkubemaster ~]$ journalctl --unit=kubelet | less
...
Jan 28 17:06:18 fedkubemaster kubelet[5027]: I0128 17:06:18.438384 5027 cni.go:205] "Error validating CNI config list" configList="{\n \"name\": \"cbr0\",\n \"cniVersion\": \"0.3.1\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDefaultGateway\": true\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\n \"portMappings\": true\n }\n }\n ]\n}\n" err="[failed to find plugin \"portmap\" in path [/opt/cni/bin]]"
Jan 28 17:06:18 fedkubemaster kubelet[5027]: I0128 17:06:18.438434 5027 cni.go:240] "Unable to update cni config" err="no valid networks found in /etc/cni/net.d"
Jan 28 17:06:20 fedkubemaster kubelet[5027]: E0128 17:06:20.862561 5027 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
...
After a bit of searching, I found that the flannel CNI plugin has been moved to its own repository (see the flannel release notes). So while applying the flannel network add-on, dependent plugins such as portmap were not installed, which is why it was failing. I therefore downloaded the CNI plugins release as shown below, extracted all the plugins it provides, and copied them to /opt/cni/bin, which is required for the flannel network to work. Here are the details.
Resolution
We need to download the CNI plugins package, extract it, and copy all the plugins to /opt/cni/bin on each of the managed nodes (i.e. fedkubemaster and fedkubenode).
[admin@fedkubemaster ~]$ wget https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz
[admin@fedkubemaster ~]$ sudo tar -xvf cni-plugins-linux-amd64-v1.0.1.tgz -C /opt/cni/bin/
[admin@fedkubenode ~]$ wget https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz
[admin@fedkubenode ~]$ sudo tar -xvf cni-plugins-linux-amd64-v1.0.1.tgz -C /opt/cni/bin/
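Since the fix has to be applied on every node, you could also fold it into a small play instead of running the commands by hand. This is only a sketch (same plugin version as above; unarchive downloads the tarball on the remote host when remote_src is set and src is a URL):
---
- name: Install CNI plugins on all nodes
  hosts: kubernetes
  become: true
  tasks:
    - name: Download and extract the CNI plugins into /opt/cni/bin
      unarchive:
        src: https://github.com/containernetworking/plugins/releases/download/v1.0.1/cni-plugins-linux-amd64-v1.0.1.tgz
        dest: /opt/cni/bin
        remote_src: yes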
Now you should see the coredns pods move from Pending to Running, and the nodes should show as Ready. Here is the output of all the pods in the cluster.
[admin@fedkubemaster ~]$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-8skq2 1/1 Running 0 16h
kube-system coredns-64897985d-qh9bq 1/1 Running 0 16h
kube-system etcd-fedkubemaster 1/1 Running 4 (133m ago) 16h
kube-system kube-apiserver-fedkubemaster 1/1 Running 4 (133m ago) 16h
kube-system kube-controller-manager-fedkubemaster 1/1 Running 4 (133m ago) 16h
kube-system kube-flannel-ds-slrsn 1/1 Running 0 122m
kube-system kube-flannel-ds-vstff 1/1 Running 1 (133m ago) 16h
kube-system kube-proxy-dht69 1/1 Running 0 122m
kube-system kube-proxy-vpc7k 1/1 Running 1 (133m ago) 16h
kube-system kube-scheduler-fedkubemaster 1/1 Running 4 (133m ago) 16h
NOTE: kubectl cli is configured on the fedkubemaster node but you can configure it locally on your machine too.
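If you do want to run kubectl from another machine, one way (assuming kubectl is already installed there) is to copy the kubeconfig over and point KUBECONFIG at it, for example:
[admin@fedser32 ~]$ scp admin@fedkubemaster:.kube/config $HOME/.kube/fedkube-config
[admin@fedser32 ~]$ export KUBECONFIG=$HOME/.kube/fedkube-config
[admin@fedser32 ~]$ kubectl get nodes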
Hope you enjoyed reading this article. Thank you.