How to implement different Kubernetes Authentication Strategies

In this article we will look at the various authentication modules, or strategies, that can be used in a Kubernetes cluster for authenticating normal users and service accounts.

Test Environment

Fedora 36 server
Kubernetes Cluster v1.22.5 (1 Master + 1 Worker Node)

There are two types of clients that access a Kubernetes cluster.

  • Normal Users
  • Service Accounts

Normal Users

Kubernetes does not have objects that represent normal user accounts, and normal users cannot be added to a cluster through an API call. These users are managed by an external, cluster-independent mechanism such as client certificates, a Google Accounts store, or a flat file with credentials.

Service Accounts

Service accounts are users managed by the Kubernetes API. They are bound to specific namespaces and are created either automatically by the API server or manually through API calls. Service accounts are tied to a set of credentials stored as Secrets, which are mounted into pods, allowing in-cluster processes to talk to the Kubernetes API.

Any user that can present a valid certificate signed by the cluster's certificate authority (CA) is considered authenticated. Kubernetes determines the username from the common name in the subject field of the client certificate (e.g. "/CN=alice").
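
If you want to check which username Kubernetes would derive from a given certificate, the subject can be printed with openssl. A quick sketch, where client.crt is a placeholder for the certificate file being inspected:

# Print the certificate subject; the CN value is what Kubernetes uses as the username
openssl x509 -in client.crt -noout -subject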

An API request is treated as coming from either a normal user or a service account; any other request is treated as an anonymous request.

Authentication Strategies

Kubernetes uses client certificates, bearer tokens, or an authenticating proxy to authenticate API requests through authentication plugins. Multiple authentication modules can be enabled at once, and you should usually enable at least the following two (see the sketch after this list):

  • service account tokens for service accounts
  • at least one other method for user authentication
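
A minimal sketch of what this typically looks like in a kubeadm-provisioned kube-apiserver manifest (the paths shown are the kubeadm defaults and are given purely as an illustration):

    - --client-ca-file=/etc/kubernetes/pki/ca.crt           # X509 client certificates for normal users
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub # public key used to verify service account tokens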

If you are interested in watching a video instead, there is also a YouTube video covering the same step-by-step procedure outlined below.

X509 Client Certs

Step 1: Generate a Certificate Key Pair for the Normal User

Here is a basic script that generates a client certificate for a normal user named "myuser" and gets it signed by the cluster via a CertificateSigningRequest.

#!/bin/bash

user="myuser"
organization="Stack"
organizationunit="Stack"
locality="Mumbai"
state="MH"
country="IN"

#Generate Certificate key pair for Normal User
openssl req -new -newkey rsa:2048 -nodes -out $user.csr -keyout $user.key -subj "/C=$country/ST=$state/L=$locality/O=$organization/OU=$organizationunit/CN=$user"

#Base64 encode CSR
encodedCSR=$(base64 < "$user.csr" | tr -d "\n")
echo "$encodedCSR"

# Create CSR request manifest (use > so reruns overwrite rather than append)
cat << EOF > $user.yml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: $user
spec:
  request: $encodedCSR
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400  # one day
  usages:
  - client auth
EOF

kubectl apply -f $user.yml

# List CSR request
kubectl get csr

# Approve CSR request
kubectl certificate approve $user

# List CSR request
kubectl get csr

# Extract User Certificate
kubectl get csr $user -o jsonpath='{.status.certificate}'| base64 -d > $user.crt

# Check cert modulus
openssl x509 -noout -modulus -in "$user.crt" | openssl md5

# Check key modulus (should match the certificate modulus above)
openssl rsa -noout -modulus -in "$user.key" | openssl md5

Step 2: Retrieve the Kubernetes CA Certificate

We need to retrieve the CA certificate used by the Kubernetes cluster. For this we can look at the kube-apiserver.yaml manifest to identify the path of the CA certificate and then copy it into the directory where the client certificate and key generated above are located.

[root@kubemaster manifests]# grep -iR "client-ca-file" kube-apiserver.yaml 
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
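
The CA certificate can then be copied next to the generated client certificate and key. A sketch, assuming the working directory used in this demo (adjust paths and ownership for your setup):

[admin@kubemaster clientcertsdemo]$ sudo cp /etc/kubernetes/pki/ca.crt .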

Step 3: Connect to the API Server with the Client Certificate

Let's now try to connect to the API server using the myuser private key, the signed client certificate, and the CA certificate of the Kubernetes cluster, as shown below.

[admin@kubemaster clientcertsdemo]$ curl -X GET https://192.168.122.45:6443/ --cert ./myuser.crt --key ./myuser.key --cacert ./ca.crt
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"myuser\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
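
Instead of calling the API with curl, the same certificate and key can also be added to a kubeconfig so that kubectl authenticates as myuser. A minimal sketch, where the cluster name "kubernetes" and the context name are only examples:

kubectl config set-credentials myuser --client-certificate=myuser.crt --client-key=myuser.key --embed-certs=true
kubectl config set-context myuser-context --cluster=kubernetes --user=myuser
kubectl config use-context myuser-context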

Static Token File

Step 1: Create a Static Token File

In this authentication strategy we need to create a static token file, a CSV file in which each line has the format token,user,uid,"group1,group2,...", as shown below.

[admin@kubemaster statictokendemo]$ cat statictokens.csv 
test1234,statictokenuser,5000,"stack,devops,admin"

Now let's copy this CSV file to the /etc/kubernetes directory.

[admin@kubemaster statictokendemo]$ sudo cp statictokens.csv /etc/kubernetes/
[admin@kubemaster statictokendemo]$ ls -ltr /etc/kubernetes/statictokens.csv 
-rw-r--r--. 1 root root 51 Oct 19 10:50 /etc/kubernetes/statictokens.csv

Step 2: Update the kube-apiserver Manifest

We need to update our kube-apiserver manifest by mounting the path where the CSV file was copied as a volume mount and setting the "--token-auth-file" flag to the location of the CSV file, as shown below.

[root@kubemaster manifests]# cat kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.122.45:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.122.45
...
    - --token-auth-file=/etc/kubernetes/statictokens.csv
...
    volumeMounts:
    - mountPath: /etc/kubernetes/statictokens.csv
      name: statictokens
      readOnly: true
...
  volumes:
  - hostPath:
      path: /etc/kubernetes/statictokens.csv
      type: File
    name: statictokens
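
Since kube-apiserver runs as a static pod, the kubelet restarts it automatically once the manifest is saved. A quick way to confirm it has come back up (the exact pod name will differ on your cluster):

[admin@kubemaster statictokendemo]$ kubectl get pods -n kube-system | grep kube-apiserver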

Step 3: Connect to the API Server with a Static Bearer Token

Now, once the API server has restarted and the cluster is back in a ready state, we can access the Kubernetes API server using the "Authorization: Bearer" header with the token value, as shown below.

[admin@kubemaster statictokendemo]$ curl -X GET -H "Authorization: Bearer test1234" https://192.168.122.45:6443/ --cacert ./ca.crt
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"statictokenuser\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
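
The static token can also be wired into kubectl credentials instead of being passed to curl by hand. A minimal sketch using the same test token (the context and cluster names are examples):

kubectl config set-credentials statictokenuser --token=test1234
kubectl config set-context statictokenuser-context --cluster=kubernetes --user=statictokenuser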

Bootstrap Tokens

Bootstrap tokens are a special kind of bearer token that is dynamically created and used while provisioning a Kubernetes cluster with the kubeadm tool. These tokens are stored as Secrets in the kube-system namespace, where they are managed by the controller manager. There are two components that we need to enable in the Kubernetes manifest files.

When a bootstrap token is presented, the authenticator authenticates the request as system:bootstrap:<token-id> and includes it in the system:bootstrappers group.

The Bootstrap Token Authenticator needs to be enabled in kube-apiserver.yaml (i.e. --enable-bootstrap-token-auth=true). This ensures that the Kubernetes cluster is able to use the dynamically generated tokens as bearer tokens for authentication.

The TokenCleaner controller needs to be enabled in kube-controller-manager.yaml (i.e. --controllers=*,bootstrapsigner,tokencleaner). This ensures that the Kubernetes controller manager is able to clean up expired bootstrap tokens.

By default, kubeadm enables these two components for us if we use it to bootstrap a cluster, as shown below.

[root@kubemaster manifests]# pwd
/etc/kubernetes/manifests
[root@kubemaster manifests]# grep -iR "bootstrap" *
kube-apiserver.yaml:    - --enable-bootstrap-token-auth=true
[root@kubemaster manifests]# grep -iR "cleaner" *
kube-controller-manager.yaml:    - --controllers=*,bootstrapsigner,tokencleaner
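
The bootstrap tokens themselves can be listed or created with kubeadm, and they are backed by bootstrap-token-* Secrets in the kube-system namespace. A quick sketch (outputs are omitted and will vary per cluster):

[root@kubemaster manifests]# kubeadm token list
[root@kubemaster manifests]# kubeadm token create --ttl 1h --description "demo bootstrap token"
[root@kubemaster manifests]# kubectl -n kube-system get secrets | grep bootstrap-token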

For more detailed information, please follow Authenticating with Bootstrap Tokens.

Service Account Tokens

Service Account Tokens are also a special kind of bearer token. They are created automatically by the API server and associated with pods running in the cluster through the ServiceAccount admission controller. These tokens are mounted into pods at a well-known path and allow in-cluster processes to talk to the API server.

Service accounts can also be created manually and associated with pods using the serviceAccountName field of a PodSpec, as sketched below.
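
Here is a minimal sketch of a manually created service account and a pod that references it; the names demo-sa and nginx-demo-sa are only illustrative:

# Create a dedicated service account in the current namespace
kubectl create serviceaccount demo-sa

# Run a pod that uses it via spec.serviceAccountName
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo-sa
spec:
  serviceAccountName: demo-sa
  containers:
  - name: nginx
    image: nginx
EOF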

Let's now create a new pod to explore the service account token, as shown below, and try to access the API server from within the pod using the default service account token.

[admin@kubemaster serviceaccounttokens]$ kubectl run nginx --image=nginx
pod/nginx created

Now, once the pod is running, we can inspect its full definition as shown below.

[admin@kubemaster serviceaccounttokens]$ kubectl get pod nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-10-19T06:16:35Z"
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "522710"
  uid: 61352599-c70d-42b5-8dd3-98f3922663e6
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nginx
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-6hfb8
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: kubenode
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-6hfb8
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-19T06:16:35Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-10-19T06:16:54Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-10-19T06:16:54Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-10-19T06:16:35Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://3b8163d75658e3cc34c64059876e262a6af811cbb0ecbe29aa0b225482a65c07
    image: docker.io/library/nginx:latest
    imageID: docker.io/library/nginx@sha256:2f770d2fe27bc85f68fd7fe6a63900ef7076bc703022fe81b980377fe3d27b70
    lastState: {}
    name: nginx
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-10-19T06:16:53Z"
  hostIP: 192.168.122.49
  phase: Running
  podIP: 10.0.1.82
  podIPs:
  - ip: 10.0.1.82
  qosClass: BestEffort
  startTime: "2022-10-19T06:16:35Z"

Now let’s exec into the pod to capture the default service account token value as shown below.

[admin@kubemaster serviceaccounttokens]$ kubectl exec -it nginx -- bash
root@nginx:/var/run/secrets/kubernetes.io/serviceaccount# ls -ltr
total 0
lrwxrwxrwx. 1 root root 12 Oct 19 06:16 token -> ..data/token
lrwxrwxrwx. 1 root root 16 Oct 19 06:16 namespace -> ..data/namespace
lrwxrwxrwx. 1 root root 13 Oct 19 06:16 ca.crt -> ..data/ca.crt

root@nginx:/var/run/secrets/kubernetes.io/serviceaccount# cat token 
eyJhbGciOiJSUzI1NiIsImtpZCI6IjFOckw3anZrY1FLM1BOWGdpV2dvVzF0dzNPWjZmSS1WUmlYUnFoWkhHSTAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk3Njk2MTk2LCJpYXQiOjE2NjYxNjAxOTYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJuZ2lueCIsInVpZCI6IjYxMzUyNTk5LWM3MGQtNDJiNS04ZGQzLTk4ZjM5MjI2NjNlNiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjU4MmQzNzIwLTY1ZDAtNGIwOC04ZjE2LWIyMTJhODg4MDY0ZiJ9LCJ3YXJuYWZ0ZXIiOjE2NjYxNjM4MDN9LCJuYmYiOjE2NjYxNjAxOTYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.pfuGKgjOQz6WeBkYsA4tvo1AvD_Sw1iN0AkhlrgVpFwL69dXo6NQpbICMssrYWz1ztPwkr_ZlzaPqgD4ZlTTNKH1kcY4rPgDWFflApAbMz7pIfZX5A7E32nKp0TPEGcbX_8m_i1qy-6yL90Du_vfPRQg91k_Eb68BEGHDw5wVficmCphT50Y-VleQqLWRXkrbNol-oEeVwCYPMeN5o-uDlELlFqvTtXF37Gbsu4b-LlvoluX2sDtwxwf88qAYydvJLC_jV41C8QTS76yTb6sjhtO37adSeDQXwyxczq3FE6cTyt8w9K78z9S5XLgvc0Kcc55pVcNYDs1cEaSxhzS_w

root@nginx:/var/run/secrets/kubernetes.io/serviceaccount# satoken=eyJhbGciOiJSUzI1NiIsImtpZCI6IjFOckw3anZrY1FLM1BOWGdpV2dvVzF0dzNPWjZmSS1WUmlYUnFoWkhHSTAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk3Njk2MTk2LCJpYXQiOjE2NjYxNjAxOTYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJuZ2lueCIsInVpZCI6IjYxMzUyNTk5LWM3MGQtNDJiNS04ZGQzLTk4ZjM5MjI2NjNlNiJ9LCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVmYXVsdCIsInVpZCI6IjU4MmQzNzIwLTY1ZDAtNGIwOC04ZjE2LWIyMTJhODg4MDY0ZiJ9LCJ3YXJuYWZ0ZXIiOjE2NjYxNjM4MDN9LCJuYmYiOjE2NjYxNjAxOTYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmRlZmF1bHQifQ.pfuGKgjOQz6WeBkYsA4tvo1AvD_Sw1iN0AkhlrgVpFwL69dXo6NQpbICMssrYWz1ztPwkr_ZlzaPqgD4ZlTTNKH1kcY4rPgDWFflApAbMz7pIfZX5A7E32nKp0TPEGcbX_8m_i1qy-6yL90Du_vfPRQg91k_Eb68BEGHDw5wVficmCphT50Y-VleQqLWRXkrbNol-oEeVwCYPMeN5o-uDlELlFqvTtXF37Gbsu4b-LlvoluX2sDtwxwf88qAYydvJLC_jV41C8QTS76yTb6sjhtO37adSeDQXwyxczq3FE6cTyt8w9K78z9S5XLgvc0Kcc55pVcNYDs1cEaSxhzS_w

Once the token details are captured, we can try to access the Kubernetes cluster API server from within the pod as shown below.

root@nginx:/# APISERVER=https://kubernetes.default.svc
root@nginx:/# SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
root@nginx:/# NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
root@nginx:/# TOKEN=$(cat ${SERVICEACCOUNT}/token)
root@nginx:/# CACERT=${SERVICEACCOUNT}/ca.crt
root@nginx:/# curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.122.45:6443"
    }
  ]
}
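
By contrast, a request for an actual resource will normally be rejected, because the default service account has no RBAC permissions bound to it yet. A quick check from the same pod (on a default RBAC-enabled cluster this typically returns a 403 Forbidden response similar to the earlier examples):

root@nginx:/# curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/${NAMESPACE}/pods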

OpenID Connect Tokens

For this authentication strategy, please follow my blog How to authenticate user with Keycloak OIDC Provider in Kubernetes, in which we see how the Keycloak OIDC provider can be used as the authentication system for Kubernetes cluster users.
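
At a high level, OIDC authentication is wired into the cluster through kube-apiserver flags such as the following. This is only a rough sketch; the issuer URL, client ID, and claim names depend entirely on your OIDC provider setup and are covered in the linked blog:

    - --oidc-issuer-url=https://keycloak.example.com/realms/kubernetes
    - --oidc-client-id=kubernetes
    - --oidc-username-claim=preferred_username
    - --oidc-groups-claim=groups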

Webhook Token Authentication

For information on this, please follow the Kubernetes documentation. I will try to include an example for this in the future.

NOTE: We haven't yet granted any permissions to the users used in this article, so the Forbidden error is expected. Once the users are assigned appropriate roles, we will be able to access the API details; a sketch of such a binding follows.
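
For example, a read-only binding for the certificate user created earlier could be granted with something like the following (a sketch; choose roles appropriate for your use case):

kubectl create clusterrolebinding myuser-view --clusterrole=view --user=myuser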

Hope you enjoyed reading this article. Thank you.