How to set up a Kubernetes cluster using kind

In this article we will set up a local Kubernetes development cluster using kind.
Test Environment
- Fedora 41 server
- go1.23.10
- Docker version 27.5.1
What is kind
kind, short for Kubernetes in Docker, is a tool for running a local Kubernetes cluster for development purposes. The cluster's nodes run as Docker containers, which keeps startup and cluster creation extremely fast.
Advantages
- Speed: Starts up extremely quickly as it runs Kubernetes nodes as Docker containers.
- Lightweight: Uses fewer resources (CPU, memory) because it avoids the overhead of virtual machines.
- Multi-Node: Easily supports multiple control-plane and worker nodes by starting additional Docker containers.
- CI/CD Focus: Its speed and lightweight nature make it perfect for continuous integration and continuous deployment pipelines.
Disadvantages
- Docker-Centric: Relies entirely on Docker, limiting testing of other container technologies or VM-based environments.
- Docker in Docker: The cluster itself runs inside Docker, which can add a layer of complexity to network management.
If you prefer watching a video, here is the YouTube video covering the same step-by-step procedure outlined below.
Procedure
Step 1: Ensure Go and Docker are installed
As a first step, ensure that Go and Docker are installed on your machine, and that the docker service is up and running.
admin@linuxser:~$ go version
go version go1.23.10 linux/amd64
admin@linuxser:~$ docker --version
Docker version 27.5.1, build 9f9e405
admin@linuxser:~$ sudo systemctl start docker.service
Step 2: Install kind
Here let's install kind using go install as shown below.
admin@linuxser:~$ go install sigs.k8s.io/kind@v0.30.0
go: downloading sigs.k8s.io/kind v0.30.0
go: downloading al.essio.dev/pkg/shellescape v1.5.1
go: downloading github.com/spf13/pflag v1.0.5
go: downloading github.com/spf13/cobra v1.8.0
go: downloading github.com/pkg/errors v0.9.1
go: downloading golang.org/x/sys v0.6.0
go: downloading github.com/pelletier/go-toml v1.9.5
go: downloading github.com/BurntSushi/toml v1.4.0
go: downloading github.com/evanphx/json-patch/v5 v5.6.0
go: downloading go.yaml.in/yaml/v3 v3.0.4
go: downloading sigs.k8s.io/yaml v1.4.0
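go install places the binary in $GOPATH/bin (by default ~/go/bin), which may not be on your PATH yet. A quick way to make the kind command reachable in the current shell, a sketch assuming the default GOPATH layout:

```shell
# go install puts binaries in $GOPATH/bin (defaults to ~/go/bin);
# append that directory to PATH so the shell can find kind.
GOBIN_DIR="${GOPATH:-$HOME/go}/bin"
export PATH="$PATH:$GOBIN_DIR"
# Confirm the binary is now reachable (prints the version on success).
command -v kind >/dev/null && kind version || echo "kind not found on PATH"
```

A more precise value is $(go env GOPATH)/bin, which respects any GOPATH override recorded in Go's own configuration.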
Step 3: Create a three-node Kubernetes cluster
Let's now create a cluster configuration file that we will use to set up a three-node cluster with one control-plane node and two worker nodes, as shown below.
admin@linuxser:~$ cat config.yaml
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
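As a side note, kind's node config also supports an extraPortMappings section, which maps a port on a node container to a port on the host; this lets you reach a NodePort service without looking up container IPs. A sketch of such a variant of config.yaml (the port numbers here are illustrative assumptions, not values from this article's cluster):

```yaml
# three node cluster, with a host port mapped onto the first worker
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 31084   # must match a fixed nodePort in the Service spec
    hostPort: 8080         # arbitrary free port on the host
- role: worker
```

Note that extraPortMappings only takes effect at cluster creation time, and lining it up with a real service requires pinning the Service's nodePort rather than letting Kubernetes assign one.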
Now let's create the Kubernetes cluster using kind with the configuration file that was created.
admin@linuxser:~$ kind create cluster --name k8dev --config config.yaml
Creating cluster "k8dev" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-k8dev"
You can now use your cluster with:
kubectl cluster-info --context kind-k8dev
Have a nice day! 👋
Step 4: Verify the k8dev cluster context
Once the Kubernetes cluster installation is complete, you can verify the cluster context that was created as shown below.
admin@linuxser:~$ kubectl cluster-info --context kind-k8dev
Kubernetes control plane is running at https://127.0.0.1:38383
CoreDNS is running at https://127.0.0.1:38383/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Step 5: Verify the cluster
Let's now verify the status of all the nodes and pods that were created as part of cluster creation.
admin@linuxser:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8dev-control-plane Ready control-plane 58s v1.34.0
k8dev-worker Ready <none> 47s v1.34.0
k8dev-worker2 Ready <none> 47s v1.34.0
admin@linuxser:~$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bc5c9577-bhj4s 1/1 Running 0 46s
kube-system coredns-66bc5c9577-nq5sk 1/1 Running 0 46s
kube-system etcd-k8dev-control-plane 1/1 Running 0 51s
kube-system kindnet-qlfk7 1/1 Running 0 46s
kube-system kindnet-spf8m 1/1 Running 0 43s
kube-system kindnet-xsrts 1/1 Running 0 43s
kube-system kube-apiserver-k8dev-control-plane 1/1 Running 0 51s
kube-system kube-controller-manager-k8dev-control-plane 1/1 Running 0 51s
kube-system kube-proxy-2w5x5 1/1 Running 0 43s
kube-system kube-proxy-lqk6h 1/1 Running 0 43s
kube-system kube-proxy-nc847 1/1 Running 0 46s
kube-system kube-scheduler-k8dev-control-plane 1/1 Running 0 51s
local-path-storage local-path-provisioner-7b8c8ddbd6-2bfdk 1/1 Running 0 46s
Step 6: Create a deployment
Here let's create a deployment using the "kubernetes-bootcamp" sample image.
admin@linuxser:~$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
deployment.apps/kubernetes-bootcamp created
Step 7: Verify the deployment
Verify that the deployment has created its pod.
admin@linuxscratch:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kubernetes-bootcamp-658f6cbd58-rpzsw 1/1 Running 1 (75s ago) 35h
Step 8: Expose the deployment as a service
Now let's expose this deployment as a service of type "NodePort" as shown below and capture the node port on which it is exposed.
In my case it was "31084".
admin@linuxscratch:~$ kubectl expose deployment kubernetes-bootcamp --type=NodePort --port=8080
admin@linuxscratch:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 36h
kubernetes-bootcamp NodePort 10.96.8.41 <none> 8080:31084/TCP 35h
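Rather than reading the node port off the table, it can be extracted programmatically. On a live cluster the jsonpath query in the comment does this directly; the snippet below performs the same extraction on the captured table row (copied from the output above), so the parsing itself runs without a cluster:

```shell
# On a live cluster (assumes the service created above):
#   kubectl get svc kubernetes-bootcamp -o jsonpath='{.spec.ports[0].nodePort}'
# The same value parsed out of the captured "kubectl get svc" row:
svc_line='kubernetes-bootcamp NodePort 10.96.8.41 <none> 8080:31084/TCP 35h'
node_port=$(echo "$svc_line" | awk '{split($5, p, /[:\/]/); print p[2]}')
echo "$node_port"   # prints 31084
```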
Step 9: Validate the service
As we have exposed the service on a "NodePort", we also need the IP address of a worker node.
admin@linuxscratch:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8dev-control-plane Ready control-plane 36h v1.34.0 172.18.0.4 <none> Debian GNU/Linux 12 (bookworm) 6.16.7-100.fc41.x86_64 containerd://2.1.3
k8dev-worker Ready <none> 36h v1.34.0 172.18.0.3 <none> Debian GNU/Linux 12 (bookworm) 6.16.7-100.fc41.x86_64 containerd://2.1.3
k8dev-worker2 Ready <none> 36h v1.34.0 172.18.0.2 <none> Debian GNU/Linux 12 (bookworm) 6.16.7-100.fc41.x86_64 containerd://2.1.3
Let's now access our service using the worker node's IP address and node port as shown below.
admin@linuxscratch:~$ curl http://172.18.0.2:31084/
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-658f6cbd58-rpzsw | v=1
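The worker's internal IP can likewise be pulled programmatically instead of read off the wide listing. On a live cluster the jsonpath query in the comment does this; below, the same value is parsed from the captured "kubectl get nodes -o wide" row so the extraction runs without a cluster:

```shell
# On a live cluster (node name from the cluster created above):
#   kubectl get node k8dev-worker2 \
#     -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}'
# The same value parsed from the captured row (first seven columns):
node_line='k8dev-worker2 Ready <none> 36h v1.34.0 172.18.0.2 <none>'
node_ip=$(echo "$node_line" | awk '{print $6}')
echo "http://$node_ip:31084/"   # the URL curled above
```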
We can also check the logs of the pod to confirm that our request was received, as shown below.
admin@linuxscratch:~$ kubectl logs kubernetes-bootcamp-658f6cbd58-rpzsw
Kubernetes Bootcamp App Started At: 2025-09-24T02:07:59.301Z | Running On: kubernetes-bootcamp-658f6cbd58-rpzsw
Running On: kubernetes-bootcamp-658f6cbd58-rpzsw | Total Requests: 1 | App Uptime: 249.133 seconds | Log Time: 2025-09-24T02:12:08.434Z
Hope you enjoyed reading this article. Thank you.