Scheduling
Exercise 11.1: Assign Pods Using Labels
Overview
While allowing the system to distribute Pods on your behalf is typically the best route, you may want to determine which nodes a Pod will use. For example, the workload may have particular hardware requirements to meet. You may want to assign VIP Pods to new, faster hardware and everyone else to older hardware.
In this exercise we will use labels to schedule Pods to a particular node. Then we will explore taints to gain more flexible deployment control in a large environment.
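The general pattern, sketched here with placeholder names (disktype=ssd is only an example, not a label used in this lab), is to attach a label to a node and then reference that label from the Pod spec with a nodeSelector entry:

kubectl label nodes <node-name> disktype=ssd

....
spec:
  nodeSelector:
    disktype: ssd

The steps below apply this pattern with a status label and the lab's actual node names.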
Assign Pods Using Labels
- Begin by getting a list of the nodes. They should be in the Ready state and have no added labels or taints beyond the defaults.
student@lfs458-node-1a0a:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
lfs458-node-1a0a   Ready    master   2d    v1.12.1
lfs458-worker      Ready    <none>   2d    v1.12.1
- View the current labels and taints for the nodes.
student@lfs458-node-1a0a:~$ kubectl describe nodes |grep -i label
Labels:             beta.kubernetes.io/arch=amd64
Labels:             beta.kubernetes.io/arch=amd64

student@lfs458-node-1a0a:~$ kubectl describe nodes |grep -i taint
Taints:             <none>
Taints:             <none>
- Verify there are no deployments running outside of the kube-system namespace. If there are, delete them. Then get a count of how many containers are running on both the master and secondary nodes. In the following example there are about 24 containers running on the master and 14 on the worker. The docker ps header line also adds one to each wc count. You may have more or fewer, depending on previous labs and how resources were cleaned up.
student@lfs458-node-1a0a:~$ kubectl get deployments --all-namespaces
NAMESPACE         NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default           secondapp      1         1         1            1           37m
default           thirdpage      1         1         1            1           14m
kube-system       calico-typha   0         0         0            0           2d15h
kube-system       coredns        2         2         2            2           2d15h
low-usage-limit   limited-hog    1         1         1            1           1d29m

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
24

student@lfs458-worker:~$ sudo docker ps |wc -l
14
- For the purpose of the exercise we will assign the master node to be VIP hardware and the secondary node to be for others.
student@lfs458-node-1a0a:~$ kubectl label nodes lfs458-node-1a0a status=vip
node/lfs458-node-1a0a labeled

student@lfs458-node-1a0a:~$ kubectl label nodes lfs458-worker status=other
node/lfs458-worker labeled
- Verify your settings. You will also find there are some built-in labels such as hostname, os and architecture type. The output below appears on multiple lines for readability. A label-selector shortcut for this check is sketched after the output.
student@lfs458-node-1a0a:~$ kubectl get nodes --show-labels
NAME               STATUS   ROLES    AGE   VERSION   LABELS
lfs458-node-1a0a   Ready    master   2d    v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=lfs458-node-1a0a,node-role.kubernetes.io/master=,status=vip
lfs458-worker      Ready    <none>   2d    v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=lfs458-worker,status=other
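If you just want to see which node carries a given label, label selectors also work with kubectl get. Assuming the status=vip label applied above, the following should list only the master node:

student@lfs458-node-1a0a:~$ kubectl get nodes -l status=vip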
- Create vip.yaml to spawn four busybox containers which sleep the whole time. Include the nodeSelector entry.
student@lfs458-node-1a0a:~$ vim vip.yaml

apiVersion: v1
kind: Pod
metadata:
  name: vip
spec:
  containers:
  - name: vip1
    image: busybox
    args:
    - sleep
    - "1000000"
  - name: vip2
    image: busybox
    args:
    - sleep
    - "1000000"
  - name: vip3
    image: busybox
    args:
    - sleep
    - "1000000"
  - name: vip4
    image: busybox
    args:
    - sleep
    - "1000000"
  nodeSelector:
    status: vip
- Deploy the new pod. Verify the containers have been created on the master node. It may take a few seconds for all the containers to spawn. Check both the master and the secondary nodes; an alternative kubectl check is sketched after the output.
student@lfs458-node-1a0a:~$ kubectl create -f vip.yaml
pod/vip created

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
29

student@lfs458-worker:~$ sudo docker ps |wc -l
8
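As an alternative to counting containers with docker ps, kubectl can report directly which node a Pod was scheduled to; the -o wide output includes a NODE column, which should show lfs458-node-1a0a while the nodeSelector is in place:

student@lfs458-node-1a0a:~$ kubectl get pod vip -o wide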
- Delete the pod, then edit the file, commenting out the nodeSelector lines. It may take a while for the containers to fully terminate.
student@lfs458-node-1a0a:~$ kubectl delete pod vip
pod "vip" deleted

student@lfs458-node-1a0a:~$ vim vip.yaml
....
#  nodeSelector:
#    status: vip
- Create the pod again. Containers should now be spawning on both nodes. You may see pods for the daemonsets as well.
student@lfs458-node-1a0a:~$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
ds-one-bdqst                 1/1     Running   0          145m
ds-one-t2t7z                 1/1     Running   0          158m
secondapp-85765cd95c-2q9sx   1/1     Running   0          43m
thirdpage-7c9b56bfdd-2q5pr   1/1     Running   0          20m

student@lfs458-node-1a0a:~$ kubectl create -f vip.yaml
pod/vip created
- Determine where the new containers have been deployed. They should be more evenly spread this time.
student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
24

student@lfs458-worker:~$ sudo docker ps |wc -l
19
- Create another file for other users. Change the names from vip to other, and uncomment the nodeSelector lines.
student@lfs458-node-1a0a:~$ cp vip.yaml other.yaml

student@lfs458-node-1a0a:~$ sed -i s/vip/other/g other.yaml

student@lfs458-node-1a0a:~$ vim other.yaml
....
  nodeSelector:
    status: other
- Create the other containers. Determine where they deploy.
student@lfs458-node-1a0a:~$ kubectl create -f other.yaml
pod/other created

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
24

student@lfs458-worker:~$ sudo docker ps |wc -l
24
- Shut down both pods and verify they terminated. Only our previous pods should be found.
student@lfs458-node-1a0a:~$ kubectl delete pods vip other
pod "vip" deleted
pod "other" deleted

student@lfs458-node-1a0a:~$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
ds-one-bdqst                 1/1     Running   0          153m
ds-one-t2t7z                 1/1     Running   0          166m
secondapp-85765cd95c-2q9sx   1/1     Running   0          51m
thirdpage-7c9b56bfdd-2q5pr   1/1     Running   0          28m
Exercise 11.2: Using Taints to Control Pod Deployment
Use taints to manage where Pods are deployed or allowed to run. In addition to assigning a Pod to a group of nodes, you may also want to limit usage of a node or fully evacuate its Pods. Taints are one way to achieve this. You may remember that the master node begins with a NoSchedule taint. We will work with three taints to limit or remove running Pods.
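On a freshly initialized kubeadm cluster of this version the control plane taint typically appears as node-role.kubernetes.io/master:NoSchedule. In this lab environment it has already been removed, which is why the earlier describe output showed <none>; the check below is for reference only:

student@lfs458-node-1a0a:~$ kubectl describe node lfs458-node-1a0a | grep -i taint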
- Verify that the master and secondary node have the minimal number of containers running. Delete the deployments remaining from earlier exercises, such as secondapp and thirdpage, to get back to that state.

student@lfs458-node-1a0a:~$ kubectl delete deployment secondapp \
    thirdpage
deployment.extensions "secondapp" deleted
deployment.extensions "thirdpage" deleted
- Create a deployment which will deploy eight nginx containers. Begin by creating a YAML file.
student@lfs458-node-1a0a:~$ vim taint.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: taint-deployment
spec:
  replicas: 8
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
- Apply the file to create the deployment.
student@lfs458-node-1a0a:~$ kubectl apply -f taint.yaml
deployment.apps/taint-deployment created
- Determine where the containers are running. In the following example three have been deployed on the master node and five on the secondary node. Remember there will be other housekeeping containers created as well. Your numbers may be slightly different.
student@lfs458-node-1a0a:~$ sudo docker ps |grep nginx
00c1be5df1e7        nginx@sha256:e3456c851a152494c3e..
<output_omitted>

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
28

student@lfs458-worker:~$ sudo docker ps |wc -l
26
- Delete the deployment. Verify the containers are gone.
student@lfs458-node-1a0a:~$ kubectl delete deployment taint-deployment
deployment.extensions "taint-deployment" deleted

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
24
- Now we will use a taint to affect the deployment of new containers. There are three taints: NoSchedule, PreferNoSchedule and NoExecute. The schedule-related taints only determine where newly deployed containers are placed; they do not affect containers that are already running. The use of NoExecute will also cause running containers to move. (Pods can opt back onto tainted nodes through tolerations; a sketch follows this step.)
Taint the secondary node, verify it has the taint, then create the deployment again. We will use the key bubba to illustrate that the key name is just some string an admin can use to track Pods.

student@lfs458-node-1a0a:~$ kubectl taint nodes lfs458-worker \
    bubba=value:PreferNoSchedule
node/lfs458-worker tainted

student@lfs458-node-1a0a:~$ kubectl describe node |grep Taint
Taints:             bubba=value:PreferNoSchedule
Taints:             <none>

student@lfs458-node-1a0a:~$ kubectl apply -f taint.yaml
deployment.apps/taint-deployment created
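Tolerations are the other half of this mechanism: a Pod that declares a matching toleration may still be scheduled onto, or keep running on, a tainted node. The lab does not add tolerations to the deployment; a sketch of the spec fragment matching the bubba taint above would look like this (the effect must match the taint being tolerated, or be omitted to match any effect):

spec:
  tolerations:
  - key: "bubba"
    operator: "Equal"
    value: "value"
    effect: "PreferNoSchedule"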
- Locate where the containers are running. We can see that more containers are on the master, but some were still created on the secondary. Delete the deployment when you have gathered the numbers.
student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
32

student@lfs458-worker:~$ sudo docker ps |wc -l
22

student@lfs458-node-1a0a:~$ kubectl delete deployment taint-deployment
deployment.extensions "taint-deployment" deleted
- Remove the taint, then verify it has been removed. Note that the taint is removed by giving the key with a minus sign appended to the end.
student@lfs458-node-1a0a:~$ kubectl taint nodes lfs458-worker bubba-
node/lfs458-worker untainted

student@lfs458-node-1a0a:~$ kubectl describe node |grep Taint
Taints:             <none>
Taints:             <none>
- This time use the NoSchedule taint, then create the deployment again. The secondary node should not have any new containers, with only daemonsets and other essential pods running.
student@lfs458-node-1a0a:~$ kubectl taint nodes lfs458-worker \
    bubba=value:NoSchedule
node/lfs458-worker tainted

student@lfs458-node-1a0a:~$ kubectl apply -f taint.yaml
deployment.apps/taint-deployment created

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
24

student@lfs458-worker:~$ sudo docker ps |wc -l
14
- Remove the taint and delete the deployment. When you have determined that all the containers have terminated, create the deployment again. Without any taint the containers should be spread across both nodes.
student@lfs458-node-1a0a:~$ kubectl delete deployment taint-deployment
deployment.extensions "taint-deployment" deleted

student@lfs458-node-1a0a:~$ kubectl taint nodes lfs458-worker bubba-
node/lfs458-worker untainted

student@lfs458-node-1a0a:~$ kubectl apply -f taint.yaml
deployment.apps/taint-deployment created

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
32

student@lfs458-worker:~$ sudo docker ps |wc -l
22
- Now use the NoExecute taint on the secondary node. Wait a minute, then determine if the containers have moved. The DNS containers can take a while to shut down. A few containers will remain on the worker node to continue communication from the cluster. (A note on tolerationSeconds follows the output.)
student@lfs458-node-1a0a:~$ kubectl taint nodes lfs458-worker \
    bubba=value:NoExecute
node "lfs458-worker" tainted

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
32

student@lfs458-worker:~$ sudo docker ps |wc -l
6
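As an aside, a Pod can ride out a NoExecute taint for a limited time if it carries a toleration with tolerationSeconds; the lab does not use this, but the fragment would resemble the sketch below, where the Pod is evicted 60 seconds after the taint lands:

spec:
  tolerations:
  - key: "bubba"
    operator: "Equal"
    value: "value"
    effect: "NoExecute"
    tolerationSeconds: 60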
- Remove the taint and wait a minute. Note that the containers do not all return to their previous placement.
student@lfs458-node-1a0a:~$ kubectl taint nodes lfs458-worker bubba-
node/lfs458-worker untainted

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
32

student@lfs458-worker:~$ sudo docker ps |wc -l
6
- In addition to the ability to taint a node, you can also set the node to drain. First view the status, then drain the worker node; we will destroy the existing deployment in a later step. Note that the status reports Ready, even though the node will not allow new containers to be executed. Also note that the output mentions that DaemonSet-managed pods are not affected by default, as we saw in an earlier lab. This time let's take a closer look at what happens to existing pods and nodes: existing containers are not moved, but no new containers are created. You may receive an error such as error: unable to drain node "<your node>", aborting command.... (Passing the flag the error suggests is shown after the output.)

student@lfs458-node-1a0a:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
lfs458-node-1a0a   Ready    master   2d    v1.12.1
lfs458-worker      Ready    <none>   2d    v1.12.1

student@lfs458-node-1a0a:~$ kubectl drain lfs458-worker
node/lfs458-worker cordoned
error: DaemonSet-managed pods (use --ignore-daemonsets to ignore):
kube-flannel-ds-fx3tx, kube-proxy-q2q4k
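If you did want the drain to evict the remaining Pods despite the DaemonSet-managed ones, the flag named in the error message can be added. This is optional here, as the cordon portion of the drain has already taken effect:

student@lfs458-node-1a0a:~$ kubectl drain lfs458-worker --ignore-daemonsets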
- Verify the state change of the node. It should indicate that no new Pods will be scheduled. (A note comparing cordon and drain follows the output.)
student@lfs458-node-1a0a:~$ kubectl get nodes
NAME               STATUS                     ROLES    AGE   VERSION
lfs458-node-1a0a   Ready                      master   2d    v1.12.1
lfs458-worker      Ready,SchedulingDisabled   <none>   2d    v1.12.1
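Note that kubectl cordon by itself puts a node into the same SchedulingDisabled state without attempting any evictions; kubectl drain is essentially a cordon followed by eviction of the non-DaemonSet Pods. The node is already cordoned at this point, so the command below is shown for reference only:

student@lfs458-node-1a0a:~$ kubectl cordon lfs458-worker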
- Delete the deployment to destroy the current Pods.
student@lfs458-node-1a0a:~$ kubectl delete deployment taint-deployment
deployment.extensions "taint-deployment" deleted
- Create the deployment again and determine where the containers have been deployed. With the worker still cordoned, the new containers should all appear on the master.
student@lfs458-node-1a0a:~$ kubectl apply -f taint.yaml
deployment.apps/taint-deployment created

student@lfs458-node-1a0a:~$ sudo docker ps |wc -l
44
- Return the status to Ready, then destroy and create the deployment again. The containers should be spread across the nodes. Begin by removing the cordon on the node.
student@lfs458-node-1a0a:~$ kubectl uncordon lfs458-worker
node/lfs458-worker uncordoned

student@lfs458-node-1a0a:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
lfs458-node-1a0a   Ready    master   2d    v1.12.1
lfs458-worker      Ready    <none>   2d    v1.12.1
- Delete and re-create the deployment.
student@lfs458-node-1a0a:~$ kubectl delete deployment taint-deployment
deployment.extensions "taint-deployment" deleted

student@lfs458-node-1a0a:~$ kubectl apply -f taint.yaml
deployment.apps/taint-deployment created
- View the docker ps output again. Both nodes should have almost the same number of containers deployed. The master will have a few more, due to its role.
- Remove the deployment a final time to free up resources.
student@lfs458-node-1a0a:~$ kubectl delete deployment taint-deployment
deployment.extensions "taint-deployment" deleted