How to use OPA Gatekeeper to enforce Policies in Kubernetes
In this article we will see how to set up a Kubernetes cluster to use OPA Gatekeeper as a policy engine that enforces constraints on Kubernetes resources. We will install OPA Gatekeeper in the cluster and then create ConstraintTemplate and Constraint resources to enforce policy restrictions using OPA Gatekeeper.
Test Environment
Fedora 36 server
kubernetes cluster v1.22.5 (1 master + 1 worker node)
What are Policies
Policies are a way for an organization to enforce governance, legal requirements, or best practices. Manually managing these policies and staying compliant would be a complex task. Policy engines help automate this process: policies are written declaratively in policy files, and the engine provides the decision-making capability to enforce them based on the input it receives.
What is OPA
Open Policy Agent (OPA) is an open source, general-purpose policy engine that unifies policy enforcement across the stack. Policies are defined in a high-level declarative language called Rego. We can use OPA to enforce policies in microservices, Kubernetes, CI/CD pipelines, API gateways, and more. It is a CNCF graduated project. OPA decouples policy decision-making from policy enforcement: it generates policy decisions by evaluating the query input against policies and data.
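As a quick illustration only (this snippet is not used later in the article), a minimal Rego policy could allow a request only when the input identifies the user as an admin; OPA evaluates the incoming input document against this rule and returns the resulting decision.

package example.authz

# Deny by default; allow only when the request input identifies an admin user.
default allow = false

allow {
    input.user.role == "admin"
}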
What is OPA Gatekeeper
OPA Gatekeeper is a layer on top of the Kubernetes cluster that provides this decoupling and makes policy decisions for the cluster. Gatekeeper is registered with the Kubernetes admission controller as a validating webhook, so requests to the cluster are validated against the policies Gatekeeper holds. Gatekeeper also provides audit functionality, which allows administrators to see which resources are currently violating any given policy.
Procedure
Step 1: Install Open Policy Agent Gatekeeper
Ensure the user with which we are applying the OPA Gatekeeper manifest has cluster administration privileges. Apply the following yaml definition file to install OPA Gatekeeper.
[admin@kubemaster ~]$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
namespace/gatekeeper-system created
resourcequota/gatekeeper-critical-pods created
customresourcedefinition.apiextensions.k8s.io/assign.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/assignmetadata.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/configs.config.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constraintpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplatepodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/constrainttemplates.templates.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/expansiontemplate.expansion.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/modifyset.mutations.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/mutatorpodstatuses.status.gatekeeper.sh created
customresourcedefinition.apiextensions.k8s.io/providers.externaldata.gatekeeper.sh created
serviceaccount/gatekeeper-admin created
role.rbac.authorization.k8s.io/gatekeeper-manager-role created
clusterrole.rbac.authorization.k8s.io/gatekeeper-manager-role created
rolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/gatekeeper-manager-rolebinding created
secret/gatekeeper-webhook-server-cert created
service/gatekeeper-webhook-service created
deployment.apps/gatekeeper-audit created
deployment.apps/gatekeeper-controller-manager created
poddisruptionbudget.policy/gatekeeper-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/gatekeeper-validating-webhook-configuration created
This is going to create a set of resources in a new namespace – gatekeeper-system.
[admin@kubemaster ~]$ kubectl get all -n gatekeeper-system
NAME READY STATUS RESTARTS AGE
pod/gatekeeper-audit-59688df57d-c9v97 1/1 Running 0 101s
pod/gatekeeper-controller-manager-6df4f6957c-4cg9d 1/1 Running 0 101s
pod/gatekeeper-controller-manager-6df4f6957c-j5qkr 1/1 Running 0 101s
pod/gatekeeper-controller-manager-6df4f6957c-r5dxc 1/1 Running 0 101s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gatekeeper-webhook-service ClusterIP 10.105.156.134 <none> 443/TCP 101s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/gatekeeper-audit 1/1 1 1 101s
deployment.apps/gatekeeper-controller-manager 3/3 3 3 101s
NAME DESIRED CURRENT READY AGE
replicaset.apps/gatekeeper-audit-59688df57d 1 1 1 101s
replicaset.apps/gatekeeper-controller-manager-6df4f6957c 3 3 3 101s
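Optionally, we can also verify that the Gatekeeper CRDs and the validating webhook configuration listed in the install output above were registered. The commands below are only a quick sanity check (output omitted here).

[admin@kubemaster ~]$ kubectl get crd | grep gatekeeper.sh
[admin@kubemaster ~]$ kubectl get validatingwebhookconfiguration gatekeeper-validating-webhook-configuration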
To uninstall all the Gatekeeper components, we can execute the command below.
[admin@kubemaster ~]$ kubectl delete -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
Step 2: Create a Constraint template
Gatekeeper uses the OPA Constraint Framework to describe and enforce policy. A ConstraintTemplate defines both the Rego policy that enforces the constraint and the schema of the constraint. The schema allows an admin to fine-tune the behavior of a constraint, much like arguments to a function, and it acts as the input to the Rego policy defined in the targets.
[admin@kubemaster opa_gatekeeper]$ cat requiredlabelscontstrainttemplate.yml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
[admin@kubemaster opa_gatekeeper]$ kubectl apply -f requiredlabelscontstrainttemplate.yml
constrainttemplate.templates.gatekeeper.sh/k8srequiredlabels created
[admin@kubemaster opa_gatekeeper]$ kubectl get constrainttemplate
NAME AGE
k8srequiredlabels 8s
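If the Rego in a template has an error, Gatekeeper reports the problem under the status field of the ConstraintTemplate rather than rejecting the apply, so it can be useful to inspect the object when a constraint does not behave as expected (command shown without output):

[admin@kubemaster opa_gatekeeper]$ kubectl describe constrainttemplate k8srequiredlabels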
Step 3: Create a Constraint
A Constraint is an instantiation of a ConstraintTemplate: it informs Gatekeeper that the admin wants a particular ConstraintTemplate to be enforced, and how. The constraint below uses the K8sRequiredLabels template defined above to make sure the gatekeeper label is defined on all namespaces. The parameters field describes the intent of the constraint; it can be referenced as input.parameters by the ConstraintTemplate’s Rego source code, since Gatekeeper populates input.parameters with the values passed into the parameters field of the Constraint.
[admin@kubemaster opa_gatekeeper]$ cat requiredlabelsconstraint.yml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]
[admin@kubemaster opa_gatekeeper]$ kubectl apply -f requiredlabelsconstraint.yml
k8srequiredlabels.constraints.gatekeeper.sh/ns-must-have-gk created
[admin@kubemaster opa_gatekeeper]$ kubectl get constraints
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
ns-must-have-gk
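By default a constraint denies violating requests. If we only want to audit violations without blocking requests, Gatekeeper supports setting spec.enforcementAction to dryrun on the constraint. A sketch of the same constraint in dry-run mode (not applied in this walkthrough) would look like this:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-gk
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["gatekeeper"]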
Step 4: Create a namespace without a label
In this step, let’s try to create a namespace without any label attached to it. This should fail, as it violates the constraint we defined, which requires that a label named "gatekeeper" be present on every namespace object we create.
[admin@kubemaster opa_gatekeeper]$ kubectl create ns test
Error from server (Forbidden): admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-gk] you must provide labels: {"gatekeeper"}
Step 5: Create a namespace with gatekeeper label
Now, let’s create a namespace with the “gatekeeper” label defined as shown in the below yaml definition file.
[admin@kubemaster opa_gatekeeper]$ cat namespacewithlabel.yml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: test
  labels:
    gatekeeper: test
spec: {}
status: {}
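As a side note, the creationTimestamp: null, spec: {} and status: {} fields suggest this skeleton was generated with kubectl’s client-side dry run; assuming that approach, a similar starting point can be produced with the command below and the labels section then added by hand.

[admin@kubemaster opa_gatekeeper]$ kubectl create namespace test --dry-run=client -o yaml > namespacewithlabel.yml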
Let’s apply this yaml definition file to create a namespace with the "gatekeeper" label; this time the namespace should be allowed to be created, as shown below.
[admin@kubemaster opa_gatekeeper]$ kubectl apply -f namespacewithlabel.yml
namespace/test created
[admin@kubemaster opa_gatekeeper]$ kubectl get ns test --show-labels
NAME STATUS AGE LABELS
test Active 36s gatekeeper=test,kubernetes.io/metadata.name=test
Step 6: Check for constraint violations
We can also list all the violations for the constraint we defined by describing the constraint and checking the violations under the status field of the object, as shown below.
List the Constraints
[admin@kubemaster opa_gatekeeper]$ kubectl get constraints
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
ns-must-have-gk 6
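Since constraints are ordinary custom resources, the same information is also available directly from the status subresource. For example, the command below should print just the violation count (field names as seen in the describe output further down):

[admin@kubemaster opa_gatekeeper]$ kubectl get k8srequiredlabels ns-must-have-gk -o jsonpath='{.status.totalViolations}'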
Describe the Constraint
[admin@kubemaster opa_gatekeeper]$ kubectl describe constraint ns-must-have-gk
Name: ns-must-have-gk
Namespace:
Labels: <none>
Annotations: <none>
API Version: constraints.gatekeeper.sh/v1beta1
Kind: K8sRequiredLabels
Metadata:
Creation Timestamp: 2022-10-15T10:43:27Z
Generation: 1
Managed Fields:
API Version: constraints.gatekeeper.sh/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:match:
.:
f:kinds:
f:parameters:
.:
f:labels:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-10-15T10:43:27Z
API Version: constraints.gatekeeper.sh/v1beta1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:auditTimestamp:
f:byPod:
f:totalViolations:
f:violations:
Manager: gatekeeper
Operation: Update
Subresource: status
Time: 2022-10-15T10:51:52Z
Resource Version: 436201
UID: ad5f8299-40de-42d8-9ce6-863580958fcf
Spec:
Match:
Kinds:
API Groups:
Kinds:
Namespace
Parameters:
Labels:
gatekeeper
Status:
Audit Timestamp: 2022-10-15T10:51:50Z
By Pod:
Constraint UID: ad5f8299-40de-42d8-9ce6-863580958fcf
Enforced: true
Id: gatekeeper-audit-59688df57d-c9v97
Observed Generation: 1
Operations:
audit
mutation-status
status
Constraint UID: ad5f8299-40de-42d8-9ce6-863580958fcf
Enforced: true
Id: gatekeeper-controller-manager-6df4f6957c-4cg9d
Observed Generation: 1
Operations:
mutation-webhook
webhook
Constraint UID: ad5f8299-40de-42d8-9ce6-863580958fcf
Enforced: true
Id: gatekeeper-controller-manager-6df4f6957c-j5qkr
Observed Generation: 1
Operations:
mutation-webhook
webhook
Constraint UID: ad5f8299-40de-42d8-9ce6-863580958fcf
Enforced: true
Id: gatekeeper-controller-manager-6df4f6957c-r5dxc
Observed Generation: 1
Operations:
mutation-webhook
webhook
Total Violations: 6
Violations:
Enforcement Action: deny
Group:
Kind: Namespace
Message: you must provide labels: {"gatekeeper"}
Name: default
Version: v1
Enforcement Action: deny
Group:
Kind: Namespace
Message: you must provide labels: {"gatekeeper"}
Name: gatekeeper-system
Version: v1
Enforcement Action: deny
Group:
Kind: Namespace
Message: you must provide labels: {"gatekeeper"}
Name: kube-node-lease
Version: v1
Enforcement Action: deny
Group:
Kind: Namespace
Message: you must provide labels: {"gatekeeper"}
Name: kube-public
Version: v1
Enforcement Action: deny
Group:
Kind: Namespace
Message: you must provide labels: {"gatekeeper"}
Name: kube-system
Version: v1
Enforcement Action: deny
Group:
Kind: Namespace
Message: you must provide labels: {"gatekeeper"}
Name: workloads
Version: v1
Events: <none>
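Based on the audit results above, the existing namespaces could be brought into compliance by labelling them; for example, for the workloads namespace from the violation list (the label value is arbitrary):

[admin@kubemaster opa_gatekeeper]$ kubectl label namespace workloads gatekeeper=workloads

Gatekeeper audits the cluster periodically, so the total violation count should drop on a subsequent audit cycle once the namespaces are labelled.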
Hope you enjoyed reading this article. Thank you.