Security
Exercise 16.1: Working with TLS
Overview
We have learned that the flow of access to a cluster begins with TLS connectivity, then authentication, followed by authorization; finally, an admission control plug-in allows advanced features prior to the request being fulfilled. The use of Initializers allows the flexibility of a shell script to dynamically modify a request. As security is an important, ongoing concern, there may be multiple configurations in use depending on the needs of the cluster.
Every process making API requests to the cluster must authenticate or be treated as an anonymous user.
Working with TLS
While one can have multiple cluster root Certificate Authorities (CAs), by default each cluster uses its own, intended for intra-cluster communication. The CA certificate bundle is distributed to each node and as a secret to default service accounts. The kubelet is a local agent which ensures local containers are running and healthy.
- View the kubelet on both the master and secondary nodes. The kube-apiserver also shows security information such as certificates and authorization mode. As kubelet is a systemd service, we will start by looking at its status output.

student@lfs458-node-1a0a:~$ systemctl status kubelet.service
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: en
  Drop-In: /etc/systemd/system/kubelet.service.d
           10-kubeadm.conf
<output_omitted>

- If we look at the status output and follow the cgroup information, which is a long line from which configuration settings are drawn, we see where the configuration file can be found.
CGroup: /system.slice/kubelet.service
        19523 /usr/bin/kubelet .... --config=/var/lib/kubelet/config.yaml ..

- Take a look at the settings in the /var/lib/kubelet/config.yaml file. Among other information we can see that the /etc/kubernetes/pki/ directory is used for accessing the kube-apiserver. Near the end of the output it also sets the directory to find other Pod spec files.

student@lfs458-node-1a0a:~$ sudo less /var/lib/kubelet/config.yaml
address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt

- Other agents on the master node interact with the kube-apiserver. View the configuration files where these settings are made. This directory was set in the previous YAML file. Look at one of the files for cert information.
student@lfs458-node-1a0a:~$ sudo ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

student@lfs458-node-1a0a:~$ sudo less \
/etc/kubernetes/manifests/kube-controller-manager.yaml
<output_omitted>

- The use of tokens has become central to authorizing component communication. The tokens are kept as secrets. Take a look at the current secrets in the kube-system namespace.
student@lfs458-node-1a0a:~$ kubectl -n kube-system get secrets
NAME                                  TYPE                                 DATA  AGE
attachdetach-controller-token-xqr8n   kubernetes.io/service-account-token  3     5d
bootstrap-signer-token-xbp6s          kubernetes.io/service-account-token  3     5d
bootstrap-token-i3r13t                bootstrap.kubernetes.io/token        7     5d
<output_omitted>

- Take a closer look at one of the secrets and the token within. The certificate-controller-token could be one to look at. The use of the Tab key can help with long names. Long lines have been truncated in the output below.
student@lfs458-node-1a0a:~$ kubectl -n kube-system get secrets \
certificate<Tab> -o yaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTi.....
  namespace: a3ViZS1zeXN0ZW0=
  token: ZXlKaGJHY2lPaUpTVXpJM....
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: certificate-controller
    kubernetes.io/service-account.uid: 7dfa2aa0-9376-11e8-8cfb-42010a800002
  creationTimestamp: 2018-07-29T21:29:36Z
  name: certificate-controller-token-wnrwh
  namespace: kube-system
  resourceVersion: "196"
  selfLink: /api/v1/namespaces/kube-system/secrets/certificate-controller-token-wnrwh
  uid: 7dfbb237-9376-11e8-8cfb-42010a800002
type: kubernetes.io/service-account-token

- The kubectl config command can also be used to view and update parameters. When making updates this could avoid a typo removing access to the cluster. View the current configuration settings. The keys and certs are redacted from the output automatically.
student@lfs458-node-1a0a:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
<output_omitted>

- View the options, such as setting a password for the admin instead of a key. Read through the examples and options.

student@lfs458-node-1a0a:~$ kubectl config set-credentials -h
Sets a user entry in kubeconfig
<output_omitted>
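The data fields of the Secret viewed a few steps back are base64-encoded, not encrypted. As a quick sketch, any of those values can be recovered with base64 -d; here we decode the namespace field from the example output:

```shell
# Secret data is base64-encoded, not encrypted.
# Decode the namespace field from the Secret shown earlier:
echo 'a3ViZS1zeXN0ZW0=' | base64 -d
# -> kube-system
```

The token field can be decoded the same way, yielding the bearer token the service account presents to the kube-apiserver, which is why access to Secrets must itself be controlled.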
- Make a copy of your access configuration file. Later steps will update this file and we can view the differences.

student@lfs458-node-1a0a:~$ cp ~/.kube/config ~/cluster-api-config
- Explore working with cluster and security configurations, both using kubectl and kubeadm. Among other values, find the name of your cluster. You will need to become root to work with kubeadm.
student@lfs458-node-1a0a:~$ kubectl config <Tab><Tab>
current-context   get-contexts     set-context       view
delete-cluster    rename-context   set-credentials
delete-context    set              unset
get-clusters      set-cluster      use-context

student@lfs458-node-1a0a:~$ sudo -i

root@lfs458-node-1a0a:~# kubeadm token -h
<output_omitted>

root@lfs458-node-1a0a:~# kubeadm config -h
<output_omitted>

- Review the cluster default configuration settings. At over 150 lines there may be some interesting tidbits about the security and infrastructure of the cluster.
student@lfs458-node-1a0a:~$ kubeadm config print-default
api:
  advertiseAddress: 10.128.0.2
  bindPort: 6443
  controlPlaneEndpoint: ""
apiVersion: kubeadm.k8s.io/v1alpha2
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
<output_omitted>
Exercise 16.2: Authentication and Authorization
Kubernetes clusters have two types of users, service accounts and normal users, but normal users are assumed to be managed by an outside service. There are no objects to represent them and they cannot be added via an API call, but service accounts can be added.
We will use RBAC to configure access to actions within a namespace for a new contractor, Developer Dan who will be working on a new project.
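Because service accounts, unlike normal users, can be created through the API, a minimal manifest is enough to add one. A sketch (the name build-bot is hypothetical and not used in this lab):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot          # hypothetical account name
  namespace: development
```

Creating such an object also generates a token Secret like those examined in the previous exercise.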
- Create two namespaces, one for production and the other for development.

student@lfs458-node-1a0a:~$ kubectl create ns development
namespace "development" created

student@lfs458-node-1a0a:~$ kubectl create ns production
namespace "production" created

- View the current clusters and contexts available. A context allows you to configure the cluster, namespace, and user for kubectl commands in an easy and consistent manner.
student@lfs458-node-1a0a:~$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin

- Create a new user DevDan and assign a password of lfs458.
student@lfs458-node-1a0a:~$ sudo useradd -s /bin/bash DevDan

student@lfs458-node-1a0a:~$ sudo passwd DevDan
Enter new UNIX password: lfs458
Retype new UNIX password: lfs458
passwd: password updated successfully

- Generate a private key, then a Certificate Signing Request (CSR), for DevDan.
student@lfs458-node-1a0a:~$ openssl genrsa -out DevDan.key 2048
Generating RSA private key, 2048 bit long modulus
......+++
.........+++
e is 65537 (0x10001)

student@lfs458-node-1a0a:~$ openssl req -new -key DevDan.key \
  -out DevDan.csr -subj "/CN=DevDan/O=development"

- Using the newly created request, generate a signed certificate with the openssl x509 command. Use the CA keys of the Kubernetes cluster and set a 45-day expiration. You'll need to use sudo to access the CA files.
student@lfs458-node-1a0a:~$ sudo openssl x509 -req -in DevDan.csr \
  -CA /etc/kubernetes/pki/ca.crt \
  -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial \
  -out DevDan.crt -days 45
Signature ok
subject=/CN=DevDan/O=development
Getting CA Private Key
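The same signing flow can be rehearsed without touching the cluster's CA. The sketch below creates a throwaway CA in a temporary directory (the demo-ca name is an assumption, standing in for /etc/kubernetes/pki/ca.crt and ca.key) and signs an identical CSR:

```shell
# Work in a scratch directory; nothing here touches the real cluster CA.
cd "$(mktemp -d)"

# Throwaway CA standing in for /etc/kubernetes/pki/ca.crt and ca.key
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -new -x509 -key ca.key -out ca.crt -days 1 -subj "/CN=demo-ca"

# Same key, CSR, and signing steps as used for DevDan above
openssl genrsa -out DevDan.key 2048 2>/dev/null
openssl req -new -key DevDan.key -out DevDan.csr -subj "/CN=DevDan/O=development"
openssl x509 -req -in DevDan.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out DevDan.crt -days 45 2>/dev/null

# The CN becomes the Kubernetes user name, and O the group
openssl x509 -in DevDan.crt -noout -subject
```

Kubernetes will authenticate such a certificate as user DevDan, which is why the RoleBinding created later in this exercise references that name.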
- Update the access config file to reference the new key and certificate. Normally we would move them to a safe directory instead of a non-root user's home.

student@lfs458-node-1a0a:~$ kubectl config set-credentials DevDan \
  --client-certificate=/home/student/DevDan.crt \
  --client-key=/home/student/DevDan.key
User "DevDan" set.
- View the update to your credentials file. Use diff to compare against the copy we made earlier.

student@lfs458-node-1a0a:~$ diff cluster-api-config .kube/config
9a10,14
>     namespace: development
>     user: DevDan
>   name: DevDan-context
> - context:
>     cluster: kubernetes
15a21,25
> - name: DevDan
>   user:
>     as-user-extra: {}
>     client-certificate: /home/student/DevDan.crt
>     client-key: /home/student/DevDan.key
- We will now create a context. For this we will need the name of the cluster, the namespace, and the CN of the user we set or saw in previous steps.

student@lfs458-node-1a0a:~$ kubectl config set-context DevDan-context \
  --cluster=kubernetes \
  --namespace=development \
  --user=DevDan
Context "DevDan-context" created.
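After set-context, the contexts section of ~/.kube/config gains a stanza along these lines (a sketch; field order in your file may differ):

```yaml
- context:
    cluster: kubernetes
    namespace: development
    user: DevDan
  name: DevDan-context
```

This corresponds to the lines the diff command showed being added in the previous step.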
- Attempt to view the Pods inside the DevDan-context. Be aware you will get an error.

student@lfs458-node-1a0a:~$ kubectl --context=DevDan-context get pods
Error from server (Forbidden): pods is forbidden: User "DevDan"
cannot list pods in the namespace "development"
- Verify the context has been properly set.

student@lfs458-node-1a0a:~$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          DevDan-context                kubernetes   DevDan             development
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
- Again check the recent changes to the cluster access config file.

student@lfs458-node-1a0a:~$ diff cluster-api-config .kube/config
9a10,14
>     namespace: development
>     user: DevDan
>   name: DevDan-context
> - context:
>     cluster: kubernetes
15a21,25
> - name: DevDan
>   user:
<output_omitted>
- We will now create a YAML file to associate RBAC rights to a particular namespace and Role.

student@lfs458-node-1a0a:~$ vim role-dev.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: development
  name: developer
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
  # You can use ["*"] for all verbs
- Create the object. Check white space and check for typos if you encounter errors.

student@lfs458-node-1a0a:~$ kubectl create -f role-dev.yaml
role.rbac.authorization.k8s.io/developer created
- Now we create a RoleBinding to associate the Role we just created with a user. Create the object when the file has been written.

student@lfs458-node-1a0a:~$ vim rolebind.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: developer-role-binding
  namespace: development
subjects:
- kind: User
  name: DevDan
  apiGroup: ""
roleRef:
  kind: Role
  name: developer
  apiGroup: ""

student@lfs458-node-1a0a:~$ kubectl apply -f rolebind.yaml
rolebinding.rbac.authorization.k8s.io/developer-role-binding created
- Test the context again. This time it should work. There are no Pods running, so you should get a response of No resources found.

student@lfs458-node-1a0a:~$ kubectl --context=DevDan-context get pods
No resources found.
- Create a new deployment, verify its Pod exists, then delete it.

student@lfs458-node-1a0a:~$ kubectl --context=DevDan-context \
  create deployment nginx --image=nginx
deployment.apps/nginx created

student@lfs458-node-1a0a:~$ kubectl --context=DevDan-context get pods
NAME                   READY   STATUS    RESTARTS   AGE
nginx-7c87f569d-7gb9k  1/1     Running   0          5s

student@lfs458-node-1a0a:~$ kubectl --context=DevDan-context delete deploy nginx
deployment.extensions "nginx" deleted
- We will now create a different context for production systems. The Role will only have the ability to view, but not create or delete, resources. Begin by copying and editing the Role and RoleBinding YAML files.

student@lfs458-node-1a0a:~$ cp role-dev.yaml role-prod.yaml

student@lfs458-node-1a0a:~$ vim role-prod.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: production              #<<- This line
  name: dev-prod                     #<<- and this line
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch"]    #<<- and this one

student@lfs458-node-1a0a:~$ cp rolebind.yaml rolebindprod.yaml

student@lfs458-node-1a0a:~$ vim rolebindprod.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: production-role-binding
  namespace: production
subjects:
- kind: User
  name: DevDan
  apiGroup: ""
roleRef:
  kind: Role
  name: dev-prod
  apiGroup: ""
- Create both new objects.

student@lfs458-node-1a0a:~$ kubectl apply -f role-prod.yaml
role.rbac.authorization.k8s.io/dev-prod created

student@lfs458-node-1a0a:~$ kubectl apply -f rolebindprod.yaml
rolebinding.rbac.authorization.k8s.io/production-role-binding created
- Create the new context for production use.

student@lfs458-node-1a0a:~$ kubectl config set-context ProdDan-context \
  --cluster=kubernetes \
  --namespace=production \
  --user=DevDan
Context "ProdDan-context" created.
- Verify that user DevDan can view Pods using the new context.

student@lfs458-node-1a0a:~$ kubectl --context=ProdDan-context get pods
No resources found.
- Try to create a Pod in production. The developer should be Forbidden.

student@lfs458-node-1a0a:~$ kubectl --context=ProdDan-context run \
  nginx --image=nginx
Error from server (Forbidden): deployments.extensions is forbidden: User "DevDan" cannot \
create deployments.extensions in the namespace "production"
- View the details of a role.

student@lfs458-node-1a0a:~$ kubectl describe role dev-prod -n production
Name:         dev-prod
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration=
                {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"Role",
                "metadata":{"annotations":{},"name":"dev-prod","namespace":
                "production"},"rules":[{"api...
PolicyRule:
  Resources         Non-Resource URLs  Resource Names  Verbs
  ---------         -----------------  --------------  -----
  deployments       []                 []              [get list watch]
  deployments.apps  []                 []              [get list watch]
<output_omitted>

- Experiment with other subcommands in both contexts. They should match those listed in the respective Roles.
Exercise 16.3: Admission Controllers
The last stop before a request is fulfilled by the API server is an admission control plug-in. These plug-ins interact with features such as setting parameters like a default storage class, checking resource quotas, or enforcing security settings. A newer feature (v1.7.x) is dynamic admission controllers, which allow new controllers to be ingested or configured at runtime.
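Dynamic admission control is configured through webhook registration objects rather than server flags. A minimal sketch of a ValidatingWebhookConfiguration (all names, the service reference, and the CA bundle are hypothetical, not part of this lab):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com         # hypothetical
webhooks:
- name: pod-policy.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: default
      name: pod-policy-svc             # hypothetical webhook service
    caBundle: LS0tLS1CRUdJTi...        # CA bundle used to verify the service
  failurePolicy: Fail
```

The kube-apiserver calls the registered service for each matching request and rejects the request if the webhook denies it.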
- View the current admission controller settings. Unlike earlier versions of Kubernetes, the controllers are now compiled into the server instead of being passed at run time. Instead of a list of which controllers to use, we can enable and disable specific plugins.

student@lfs458-node-1a0a:~$ sudo grep admission \
  /etc/kubernetes/manifests/kube-apiserver.yaml
    - --disable-admission-plugins=PersistentVolumeLabel
    - --enable-admission-plugins=NodeRestriction