How to use Network Policies in a Kubernetes Cluster
In this article we will see how to implement Network Policies to regulate or filter traffic for pod-to-pod communication in a Kubernetes cluster. We will simulate this using two pods in different namespaces, applying network policies in those namespaces and observing how the pod-to-pod communication behaves.
Test Environment
Fedora 36 server
Kubernetes Cluster v1.25.2 (1 master + 1 worker node) with Cilium Networking Solution
What are Network Policies
Network Policies filter traffic at OSI layers 3 and 4. They are namespaced resources that are applied to pods running in the Kubernetes cluster, and they can be used to regulate the traffic between pods in the cluster.
In order to use Network Policies, the Kubernetes cluster must have a networking solution installed that supports them (e.g. Cilium).
There are two types of rules we can define in a Network Policy: Ingress and Egress. Egress rules define the outbound connection rules, and ingress rules define the inbound connection rules. By default, a pod is non-isolated for egress (all outbound connections are allowed) and non-isolated for ingress (all inbound connections are allowed).
For a connection to be established between a source pod and a destination pod, both the egress policy on the source pod and the ingress policy on the destination pod need to allow the connection. If either side does not allow the connection, it will not happen.
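As a quick orientation, below is a minimal sketch of what a Network Policy looks like. This is a hypothetical example (the labels and port are illustrative, not part of this tutorial) that allows ingress to pods labeled role: db only from pods labeled role: api on TCP port 5432.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-allow-api
  namespace: default
spec:
  podSelector:            # pods this policy applies to
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:        # peers allowed to connect
        matchLabels:
          role: api
    ports:
    - protocol: TCP
      port: 5432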
If you prefer watching a video instead, there is a YouTube video covering the same step-by-step procedure outlined below.
Procedure
Step1: Create two namespaces for frontend and backend pods
Network Policies are namespaced resources. Let’s create two namespaces to accommodate our dummy frontend application pod and backend database pod.
[admin@kubemaster networkpolicies]$ kubectl create ns frontend
namespace/frontend created
[admin@kubemaster networkpolicies]$ kubectl create ns backend
namespace/backend created
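Later we will select these namespaces using a namespaceSelector. Kubernetes automatically adds the kubernetes.io/metadata.name label to every namespace (since v1.21), which we will rely on; you can confirm the label is present as shown below.

kubectl get ns frontend backend --show-labels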
Step2: Create Pods in frontend and backend namespace and Expose as Service
Now, let’s create our dummy frontend application pod in the frontend namespace and our backend database pod in the backend namespace. We will also expose these pods as services on port 80 as shown below.
[admin@kubemaster networkpolicies]$ kubectl -n frontend run app --image=nginx
[admin@kubemaster networkpolicies]$ kubectl -n backend run db --image=nginx
[admin@kubemaster networkpolicies]$ kubectl -n frontend expose pod app --port=80
[admin@kubemaster networkpolicies]$ kubectl -n backend expose pod db --port=80
Ensure that your pods and services are available as shown below.
[admin@kubemaster networkpolicies]$ kubectl -n frontend get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/app 1/1 Running 0 101s 10.0.1.175 kubenode <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/app ClusterIP 10.103.11.246 <none> 80/TCP 40s run=app
[admin@kubemaster networkpolicies]$ kubectl -n backend get pods,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/db 1/1 Running 0 2m27s 10.0.1.145 kubenode <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/db ClusterIP 10.110.168.92 <none> 80/TCP 90s run=db
Step3: Validate Pod to Pod Communication without Network Policies
In this step we will try to connect to the backend service from the frontend pod, and to the frontend service from the backend pod, as shown below. As you can see, without any Network Policies in place, all pods are free to communicate with all other pods without restriction.
[admin@kubemaster networkpolicies]$ kubectl -n frontend exec app -- curl -I db.backend.svc.cluster.local
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 615 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
...
[admin@kubemaster networkpolicies]$ kubectl -n backend exec db -- curl -I app.frontend.svc.cluster.local
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 615 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
...
Step4: Implement Default Deny Ingress and Egress Traffic Policy on both namespaces
Here in this step we are going to apply a default deny policy on both the frontend and backend namespaces. These policies will restrict all inbound and outbound connections for the pods within those namespaces.
Default Deny Policy for Frontend
[admin@kubemaster networkpolicies]$ cat frontend_defaultdeny.yml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontenddefaultdeny
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
[admin@kubemaster networkpolicies]$ kubectl create -f frontend_defaultdeny.yml
networkpolicy.networking.k8s.io/frontenddefaultdeny created
Default Deny Policy for Backend
[admin@kubemaster networkpolicies]$ cat backend_defaultdeny.yml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backenddefaultdeny
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
[admin@kubemaster networkpolicies]$ kubectl create -f backend_defaultdeny.yml
networkpolicy.networking.k8s.io/backenddefaultdeny created
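To confirm both policies are in place, you can list the Network Policies across all namespaces.

kubectl get networkpolicy -A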
Step5: Validate Pod to Pod Communication with Default Deny Network Policy
After we have applied the default deny policy in both namespaces, all inbound and outbound traffic is blocked, and we can no longer connect from the frontend pod to the backend pod or vice versa, as shown below. Note that even name resolution fails: DNS queries are outbound connections too, so they are also blocked by the egress rules.
[admin@kubemaster networkpolicies]$ kubectl -n frontend exec app -- curl -I db.backend.svc.cluster.local
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0
curl: (6) Could not resolve host: db.backend.svc.cluster.local
command terminated with exit code 6
[admin@kubemaster networkpolicies]$ kubectl -n backend exec db -- curl -I app.frontend.svc.cluster.local
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0
curl: (6) Could not resolve host: app.frontend.svc.cluster.local
command terminated with exit code 6
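To verify that the failure is not just a DNS issue, you can also try the backend service’s ClusterIP directly (10.110.168.92 from the earlier output). With the default deny egress policy in place, this request should simply time out as well.

kubectl -n frontend exec app -- curl -I --max-time 5 10.110.168.92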
Step6: Allow Egress traffic from frontend to backend pod
Here in this step we are going to implement a network policy in the frontend namespace to select our frontend pod and allow it to send egress traffic to the backend pods.
As we created our frontend and backend pods using the “kubectl run” command, each pod is assigned a label with key run and the pod name as its value. So here we are going to select the pods to which we want to apply our network policy using that label (i.e. run: app).
Once the necessary pods are selected, we define our egress rules, using a namespaceSelector to match the backend namespace so that egress traffic to all pods within it is allowed, as shown in the YAML definition file below.
[admin@kubemaster networkpolicies]$ cat frontend_egress.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontendegress
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      run: app
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: backend
[admin@kubemaster networkpolicies]$ kubectl create -f frontend_egress.yml
networkpolicy.networking.k8s.io/frontendegress created
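You can inspect the rules the API server recorded for this policy as shown below.

kubectl -n frontend describe networkpolicy frontendegress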
Step7: Allow Ingress traffic from frontend to backend pod
In the previous step we allowed egress traffic from the frontend. But for the communication to succeed we also need to apply a network policy on our backend pods so that they allow ingress traffic from those frontend pods.
For this, we are going to create a network policy for our backend pods, identifying them by their label (i.e. run: db), and define our ingress rules in such a way that they allow inbound connections (i.e. ingress traffic) from pods in the frontend namespace using a namespaceSelector, as shown below.
[admin@kubemaster networkpolicies]$ cat backend_ingress.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backendingress
  namespace: backend
spec:
  podSelector:
    matchLabels:
      run: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: frontend
[admin@kubemaster networkpolicies]$ kubectl create -f backend_ingress.yml
networkpolicy.networking.k8s.io/backendingress created
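To confirm that the ingress rule really is scoped to the frontend namespace, you could run the same request from a pod in some other namespace, for example a throwaway tester pod in the default namespace (a hypothetical pod, not part of the setup above). Since only the frontend namespace is allowed, this request should time out.

kubectl run tester --image=nginx --restart=Never
kubectl exec tester -- curl -I --max-time 5 10.110.168.92
kubectl delete pod tester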
Step8: Validate frontend to backend pod communication
Now, let’s try to connect from the frontend pod to the backend service using the FQDN of the service. As you can see, we are unable to reach the backend service because the FQDN cannot be resolved: DNS lookups are themselves egress traffic (to the cluster DNS service, typically CoreDNS in kube-system), and our egress policy only allows traffic to the backend namespace.
[admin@kubemaster networkpolicies]$ kubectl -n frontend exec app -- curl -I db.backend.svc.cluster.local
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0
curl: (6) Could not resolve host: db.backend.svc.cluster.local
Let’s try to connect from the frontend pod to the backend service using the ClusterIP of the backend service instead; this time we are able to connect to the backend pod as shown below.
[admin@kubemaster networkpolicies]$ kubectl -n frontend exec app -- curl -I 10.110.168.92
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 615 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
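If you no longer have the earlier output handy, the ClusterIP of the backend service can be looked up directly as shown below.

kubectl -n backend get svc db -o jsonpath='{.spec.clusterIP}'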
Step9: Amend Default Deny Policy to allow DNS traffic on port 53
Here we are going to update the default deny policy that we applied in the frontend namespace so that it allows FQDN resolution via DNS queries on port 53. For this to work, egress traffic on port 53 (both TCP and UDP) needs to be allowed.
First, let’s delete the frontenddefaultdeny policy in the frontend namespace as shown below.
[admin@kubemaster networkpolicies]$ kubectl -n frontend delete netpol frontenddefaultdeny
networkpolicy.networking.k8s.io "frontenddefaultdeny" deleted
Now, update the frontend_defaultdeny.yml definition as shown below and re-create the policy.
[admin@kubemaster networkpolicies]$ cat frontend_defaultdeny.yml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontenddefaultdeny
  namespace: frontend
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  egress:
  - to:
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
[admin@kubemaster networkpolicies]$ kubectl create -f frontend_defaultdeny.yml
networkpolicy.networking.k8s.io/frontenddefaultdeny created
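Note that the rule above allows DNS egress to any destination on port 53. A tighter alternative (a sketch, assuming your cluster DNS is CoreDNS running in the kube-system namespace with the conventional k8s-app: kube-dns label) would scope the egress rule to the cluster DNS pods only:

  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53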
Step10: Validate frontend to backend pod communication using FQDN
Now we can validate that our frontend pod is able to communicate with the backend service using the FQDN as shown below.
[admin@kubemaster networkpolicies]$ kubectl -n frontend exec app -- curl -I db.backend.svc.cluster.local
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 615 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
HTTP/1.1 200 OK
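Once you are done experimenting, deleting the two namespaces removes the pods, services, and network policies created in them.

kubectl delete ns frontend backend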
Hope you enjoyed reading this article. Thank you.