Kubernetes Architecture
Exercise 4.1: Working with CPU and Memory Constraints
Overview
We will continue working with the cluster we built in the previous lab. We will work with resource limits, do more with namespaces, and then explore a more complex deployment to further understand the architecture and relationships.
Use SSH or PuTTY to connect to the nodes you installed in the previous exercise. We will deploy an application called stress inside a container, and then use resource limits to constrain the resources the application is able to use.
- Use a container called stress, in a deployment we will name hog, to generate load. Verify you have a container running.
student@lfs458-node-1a0a:~$ kubectl create deployment hog --image vish/stress
deployment.apps/hog created

student@lfs458-node-1a0a:~$ kubectl get deployments
NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hog    1         1         1            1           12s
- Use the describe argument to view details, then view the output in YAML format. Note there are no settings limiting
resource usage. Instead, there are empty curly brackets.
student@lfs458-node-1a0a:~$ kubectl describe deployment hog
Name:                   hog
Namespace:              default
CreationTimestamp:      Fri, 09 Nov 2018 19:55:45 +0000
Labels:                 app=hog
Annotations:            deployment.kubernetes.io/revision: 1
<output_omitted>

student@lfs458-node-1a0a:~$ kubectl get deployment hog -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
<output_omitted>
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hog
    spec:
      containers:
      - image: vish/stress
        imagePullPolicy: Always
        name: stress
        resources: {}
        terminationMessagePath: /dev/termination-log
<output_omitted>
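As background, requests tell the scheduler how much to reserve for a container, while limits cap what the container may consume at runtime. A minimal sketch of how the stanza looks once populated is shown below; the values here are only illustrative, the lab sets its own in the following steps.

        resources:
          requests:          # reserved for the container at scheduling time
            cpu: "0.5"
            memory: "500Mi"
          limits:            # hard cap enforced while the container runs
            cpu: "1"
            memory: "1Gi"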
- We will use the YAML output to create our own configuration file. The --export option can be useful to not include
unique parameters.
student@lfs458-node-1a0a:~$ kubectl get deployment hog \
    --export -o yaml > hog.yaml
- If you did not use the --export option, you will need to remove the status output, creationTimestamp and other settings, as we don’t want to include unique, generated parameters. We will also add in the memory limits found below.
student@lfs458-node-1a0a:~$ vim hog.yaml
.
        imagePullPolicy: Always
        name: hog
        resources:                    # Edit to remove {}
          limits:                     # Add these 4 lines
            memory: "4Gi"
          requests:
            memory: "2500Mi"
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
- Replace the deployment using the newly edited file.
student@lfs458-node-1a0a:~$ kubectl replace -f hog.yaml
deployment.extensions/hog replaced
- Verify the change has been made. The deployment should now show resource limits.
student@lfs458-node-1a0a:~$ kubectl get deployment hog -o yaml | less
....
        resources:
          limits:
            memory: 4Gi
          requests:
            memory: 2500Mi
        terminationMessagePath: /dev/termination-log
....
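If you prefer not to page through the full YAML, a jsonpath query can pull out just the resources stanza. This is an optional check, not part of the lab transcript:

kubectl get deployment hog \
    -o jsonpath='{.spec.template.spec.containers[0].resources}'

The output is a compact map containing only the limits and requests you just set.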
- View the output (stdout) of the hog container. Note how much memory has been allocated.
student@lfs458-node-1a0a:~$ kubectl get po
NAME                   READY   STATUS    RESTARTS   AGE
hog-64cbfcc7cf-lwq66   1/1     Running   0          2m

student@lfs458-node-1a0a:~$ kubectl logs hog-64cbfcc7cf-lwq66
I1102 16:16:42.638972       1 main.go:26] Allocating "0" memory, in "4Ki" chunks, with a 1ms sleep between allocations
I1102 16:16:42.639064       1 main.go:29] Allocated "0" memory
- Open a second and third terminal to access the master and worker nodes. Run top in each to view resource usage. You should not see unusual resource usage at this point. The dockerd and top processes should be using about the same amount of resources. The stress command should not be using enough resources to show up.
- Edit the hog configuration file and add arguments for stress to consume CPU and memory.
student@lfs458-node-1a0a:~$ vim hog.yaml
        resources:
          limits:
            cpu: "1"
            memory: "4Gi"
          requests:
            cpu: "0.5"
            memory: "500Mi"
        args:
        - -cpus
        - "2"
        - -mem-total
        - "950Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"
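For orientation, after the edit the container section of hog.yaml should end up looking roughly like the sketch below. Indentation matters: args and resources are siblings of image and name. The container name shown is an assumption; keep whatever name appears in your own exported file.

    spec:
      containers:
      - image: vish/stress
        imagePullPolicy: Always
        name: stress              # use the name from your own hog.yaml
        args:
        - -cpus
        - "2"
        - -mem-total
        - "950Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"
        resources:
          limits:
            cpu: "1"
            memory: "4Gi"
          requests:
            cpu: "0.5"
            memory: "500Mi"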
- Delete and recreate the deployment. You should see CPU usage almost immediately and memory allocation happen in 100M chunks allocated to the stress program. Check both nodes, as the container could be deployed to either. The next step will help if you encounter errors.
student@lfs458-node-1a0a:~$ kubectl delete deployment hog
deployment.extensions/hog deleted

student@lfs458-node-1a0a:~$ kubectl apply -f hog.yaml
deployment.extensions/hog created
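To watch the allocations as they happen you can follow the pod's log. The one-liner below is an optional convenience; it assumes the deployment still carries the app=hog label applied by kubectl create deployment and that only one hog pod is running:

kubectl logs -f $(kubectl get pod -l app=hog -o name | head -1)

Press Ctrl-C to stop following the log.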
- Should the resources not show as used, there may have been an issue inside the container: Kubernetes shows it as running, but the actual workload has failed. The container may also have failed outright; for example, if a parameter was malformed the container may panic and show output such as the following.
student@lfs458-node-1a0a:~$ kubectl get pod
NAME                   READY   STATUS   RESTARTS   AGE
hog-1985182137-5bz2w   0/1     Error    1          5s

student@lfs458-node-1a0a:~$ kubectl logs hog-1985182137-5bz2w
panic: cannot parse '150mi': unable to parse quantity's suffix

goroutine 1 [running]:
panic(0x5ff9a0, 0xc820014cb0)
    /usr/local/go/src/runtime/panic.go:481 +0x3e6
k8s.io/kubernetes/pkg/api/resource.MustParse(0x7ffe460c0e69, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/local/google/home/vishnuk/go/src/k8s.io/kubernetes/pkg/api/resource/quantity.go:134 +0x287
main.main()
    /usr/local/google/home/vishnuk/go/src/github.com/vishh/stress/main.go:24 +0x43
- Here is an example of an improper parameter. The container is running, but not allocating memory. It should show the
usage requested from the YAML file.
student@lfs458-node-1a0a:~$ kubectl get po
NAME                   READY   STATUS    RESTARTS   AGE
hog-1603763060-x3vnn   1/1     Running   0          8s

student@lfs458-node-1a0a:~$ kubectl logs hog-1603763060-x3vnn
I0927 21:09:23.514921       1 main.go:26] Allocating "0" memory, in "4Ki" chunks, with a 1ms sleep between allocations
I0927 21:09:23.514984       1 main.go:39] Spawning a thread to consume CPU
I0927 21:09:23.514991       1 main.go:39] Spawning a thread to consume CPU
I0927 21:09:23.514997       1 main.go:29] Allocated "0" memory
Exercise 4.2: Resource Limits for a Namespace
The previous steps set limits for that particular deployment. You can also set limits on an entire namespace. We will create a new namespace and configure the hog deployment to run within it. Once the namespace limits are set, hog should not be able to use the amount of resources it consumed previously.
- Begin by creating a new namespace called low-usage-limit and verify it exists.
student@lfs458-node-1a0a:~$ kubectl create namespace low-usage-limit
namespace/low-usage-limit created

student@lfs458-node-1a0a:~$ kubectl get namespace
NAME              STATUS   AGE
default           Active   1h
kube-public       Active   1h
kube-system       Active   1h
low-usage-limit   Active   42s
- Create a YAML file which limits CPU and memory usage. The kind to use is LimitRange.
student@lfs458-node-1a0a:~$ vim low-resource-range.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: low-resource-range
spec:
  limits:
  - default:
      cpu: 1
      memory: 500Mi
    defaultRequest:
      cpu: 0.5
      memory: 100Mi
    type: Container
- Create the LimitRange object and assign it to the newly created namespace low-usage-limit.
student@lfs458-node-1a0a:~$ kubectl create -f low-resource-range.yaml \
    --namespace=low-usage-limit
limitrange/low-resource-range created
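An optional way to confirm the defaults the LimitRange will apply is to describe it in its namespace:

kubectl -n low-usage-limit describe limitrange low-resource-range

The output lists the default request and default limit for CPU and memory applied to each Container created in the namespace.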
- Verify it works. Remember that every command needs a namespace and context to work. Defaults are used if not
provided.
student@lfs458-node-1a0a:~$ kubectl get LimitRange
No resources found.

student@lfs458-node-1a0a:~$ kubectl get LimitRange --all-namespaces
NAMESPACE         NAME                 CREATED AT
low-usage-limit   low-resource-range   2018-07-08T06:28:33Z
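If you would rather not type -n low-usage-limit on every command, you can set a default namespace in your current context. This is optional; the command below is a sketch that assumes you want to modify whatever context is currently in use:

kubectl config set-context $(kubectl config current-context) \
    --namespace=low-usage-limit

Remember to switch back with --namespace=default when you are done, or later steps will act on the wrong namespace.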
- Create a new deployment in the namespace.
student@lfs458-node-1a0a:~$ kubectl create deployment limited-hog \
    --image vish/stress -n low-usage-limit
deployment.apps/limited-hog created
- List the current deployments. Note that hog continues to run in the default namespace. If you chose to use the Calico network policy you may see a couple more deployments than what is listed below.
student@lfs458-node-1a0a:~$ kubectl get deployments --all-namespaces
NAMESPACE         NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default           hog           1         1         1            1           25m
kube-system       kube-dns      1         1         1            1           2d
low-usage-limit   limited-hog   1         1         1            1           1m
- View all pods within the namespace. Remember you can use the Tab key to complete the namespace. You may want to type the namespace first so that tab-completion is appropriate to that namespace instead of the default namespace.
student@lfs458-node-1a0a:~$ kubectl -n low-usage-limit get pods
NAME                           READY   STATUS    RESTARTS   AGE
limited-hog-2556092078-wnpnv   1/1     Running   0          3m
- Look at the details of the pod. You will note it has the settings inherited from the entire namespace. The use of shell completion should work if you declare the namespace first.
student@lfs458-node-1a0a:~$ kubectl -n low-usage-limit get pod \
    limited-hog-2556092078-wnpnv -o yaml
<output_omitted>
spec:
  containers:
  - image: vish/stress
    imagePullPolicy: Always
    name: stress
    resources:
      limits:
        cpu: "1"
        memory: 500Mi
      requests:
        cpu: 500m
        memory: 100Mi
    terminationMessagePath: /dev/termination-log
<output_omitted>
- Copy and edit the config file of the original hog deployment. Add the namespace: line so that the new deployment will be created in the low-usage-limit namespace.
student@lfs458-node-1a0a:~$ cp hog.yaml hog2.yaml

student@lfs458-node-1a0a:~$ vim hog2.yaml
....
  labels:
    app: hog
  name: hog
  namespace: low-usage-limit    #<<--- Add this line
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/hog
spec:
....
- Open up extra terminal sessions so you can have top running in each. When the new deployment is created it will probably be scheduled on the node not yet under any stress.
Create the deployment.

student@lfs458-node-1a0a:~$ kubectl create -f hog2.yaml
deployment.extensions/hog created
- View the deployments. Note there are two with the same name, but in different namespaces. You may also find the calico-typha deployment has no pods, nor has any requested. Our small cluster does not need to add Calico pods via this autoscaler.
student@lfs458-node-1a0a:~$ kubectl get deployments --all-namespaces
NAMESPACE         NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default           hog            1         1         1            1           45m
kube-system       calico-typha   0         0         0            0           8h
kube-system       coredns        2         2         2            2           8h
low-usage-limit   hog            1         1         1            1           13s
low-usage-limit   limited-hog    1         1         1            1           5m
- Look at the top output running in the other terminals. You should find that both hog deployments are using about the same amount of resources once the memory is fully allocated. Per-deployment settings override the namespace-wide settings. You should see something like the following lines, one from each node, which indicate the use of one processor and about 12 percent of memory, were you on a system with 8G of RAM.
25128 root      20   0  958532 954672   3180 R 100.0 11.7   0:52.27 stress
24875 root      20   0  958532 954800   3180 R 100.3 11.7  41:04.97 stress
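As a quick sanity check, assuming a node with 8Gi of RAM, the 950Mi requested by stress works out to roughly 11-12 percent, which matches the %MEM column above:

echo "scale=3; 950/(8*1024)" | bc
.115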
- Delete the hog deployments to recover system resources.
student@lfs458-node-1a0a:~$ kubectl -n low-usage-limit delete deployment hog
deployment.extensions "hog" deleted

student@lfs458-node-1a0a:~$ kubectl delete deployment hog
deployment.extensions "hog" deleted
Exercise 4.3: More Complex Deployment
We will now deploy a more complex demo application to test the cluster. When completed it will be a sock shopping site. The short URL used below expands to: https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml
- Begin by downloading the pre-made YAML file from github.
student@lfs458-node-1a0a:~$ wget https://tinyurl.com/y8bn2awp -O complete-demo.yaml
Resolving tinyurl.com (tinyurl.com)... 104.20.218.42, 104.20.219.42,
Connecting to tinyurl.com (tinyurl.com)|104.20.218.42|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://raw.githubusercontent.com/microservices-...
--2017-11-02 16:54:27-- https://raw.githubusercontent.com/microservices-dem...
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.5...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101....
HTTP request sent, awaiting response... 200 OK
<output_omitted>
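Should the short URL ever be unavailable, the same file can be fetched directly from the full GitHub address referenced at the top of this exercise:

wget -O complete-demo.yaml \
    https://raw.githubusercontent.com/microservices-demo/microservices-demo/master/deploy/kubernetes/complete-demo.yaml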
- Find the expected namespace inside the file. It should be sock-shop. Also note the various settings. This file will deploy several containers which work together, providing a shopping website. As we work with other parameters you could revisit this file to see potential settings.
student@lfs458-node-1a0a:~$ less complete-demo.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: carts-db
  labels:
    name: carts-db
  namespace: sock-shop
spec:
  replicas: 1
<output_omitted>
- Create the namespace and verify it was made.
student@lfs458-node-1a0a:~$ kubectl create namespace sock-shop
namespace/sock-shop created

student@lfs458-node-1a0a:~$ kubectl get namespace
NAME              STATUS   AGE
default           Active   35m
kube-public       Active   35m
kube-system       Active   35m
low-usage-limit   Active   25m
sock-shop         Active   5s
- View the images the new application will deploy.
student@lfs458-node-1a0a:~$ grep image complete-demo.yaml
          image: mongo
          image: weaveworksdemos/carts:0.4.8
          image: weaveworksdemos/catalogue-db:0.3.0
          image: weaveworksdemos/catalogue:0.3.5
          image: weaveworksdemos/front-end:0.3.12
          image: mongo
<output_omitted>
- Create the new shopping website using the YAML file. Use the namespace you recently created. Note that the deployments
match the images we saw in the file.
student@lfs458-node-1a0a:~$ kubectl apply -n sock-shop -f complete-demo.yaml
deployment "carts-db" created
service "carts-db" created
deployment "carts" created
service "carts" created
<output_omitted>
- Using the proper namespace will be important. This can be set on a per-command basis or as a shell parameter. Note
the first command shows no pods. We must remember to pass the proper namespace. Some containers may not have
fully downloaded or deployed by the time you run the command.
student@lfs458-node-1a0a:~$ kubectl get pods
No resources found.

student@lfs458-node-1a0a:~$ kubectl -n sock-shop get pods
NAME                            READY   STATUS              RESTARTS   AGE
carts-511261774-c4jwv           1/1     Running             0          71s
carts-db-549516398-tw9zs        1/1     Running             0          71s
catalogue-4293036822-sp5kt      1/1     Running             0          71s
catalogue-db-1846494424-qzhvk   1/1     Running             0          71s
front-end-2337481689-6s65c      1/1     Running             0          71s
orders-208161811-1gc6k          1/1     Running             0          71s
orders-db-2069777334-4sp01      1/1     Running             0          71s
payment-3050936124-2cn2l        1/1     Running             0          71s
queue-master-2067646375-vzq77   1/1     Running             0          71s
rabbitmq-241640118-vk3m9        0/1     ContainerCreating   0          71s
shipping-3132821717-lm7kn       0/1     ContainerCreating   0          71s
user-1574605338-24xrb           0/1     ContainerCreating   0          71s
user-db-2947298815-lx9kp        1/1     Running             0          71s
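Rather than re-running the command until everything reports Running, you can leave a watch in place; this is an optional convenience:

kubectl -n sock-shop get pods --watch

Press Ctrl-C to stop the watch once all pods report Running.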
- Verify the shopping cart is exposing a web page. Use the public IP address of your AWS node (not the one derived from the prompt) to view the page. Note the external IP is not yet configured. Find the NodePort service. First try port 80, then try port 30001, as shown under the PORT(S) column.
student@lfs458-node-1a0a:~$ kubectl get svc -n sock-shop
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
carts          ClusterIP   10.100.154.148   <none>        80/TCP         95s
carts-db       ClusterIP   10.111.120.73    <none>        27017/TCP      95s
catalogue      ClusterIP   10.100.8.203     <none>        80/TCP         95s
catalogue-db   ClusterIP   10.111.94.74     <none>        3306/TCP       95s
front-end      NodePort    10.98.2.137      <none>        80:30001/TCP   95s
orders         ClusterIP   10.110.7.215     <none>        80/TCP         95s
orders-db      ClusterIP   10.106.19.121    <none>        27017/TCP      95s
payment        ClusterIP   10.111.28.218    <none>        80/TCP         95s
queue-master   ClusterIP   10.102.181.253   <none>        80/TCP         95s
rabbitmq       ClusterIP   10.107.134.121   <none>        5672/TCP       95s
shipping       ClusterIP   10.99.99.127     <none>        80/TCP         95s
user           ClusterIP   10.105.126.10    <none>        80/TCP         95s
user-db        ClusterIP   10.99.123.228    <none>        27017/TCP      95s
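From your workstation, or from either node, a quick way to confirm the front end answers on the NodePort is with curl. The address below is a placeholder for your node's public IP:

curl -I http://<public-IP-of-node>:30001/

A 200 response indicates the front-end service is reachable; a browser should then render the full page.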
- Check to see which node is running the containers. Note that the webserver is answering on a node which is not hosting all of the containers. First we check the master, then the worker node. The containers should have to do with kube-proxy services and Calico. The following is the output of sudo docker ps on both nodes.
student@lfs458-node-1a0a:~$ sudo docker ps
CONTAINER ID        IMAGE                                                                                                  COMMAND                  CREATED             STATUS              PORTS               NAMES
d6b7353e5dc5        weaveworksdemos/user@sha256:2ffccc332963c89e035fea52201012208bf62df43a55fe461ad6598a5c757ab7           "/user -port=80"         2 minutes ago       Up 2 minutes                            k8s_user_user-7848fb86db-5zmkj_sock-shop_584d7db5-947b-11e8-8cfb-42010a800002_0
6c18f030f15b        weaveworksdemos/shipping@sha256:983305c948fded487f4a4acdeab5f898e89d577b4bc1ca3de7750076469ccad4       "/usr/local/bin/ja..."   2 minutes ago       Up 2 minutes                            k8s_shipping_shipping-64f8c7558c-9kgm2_sock-shop_580a50f9-947b-11e8-8cfb-42010a800002_0
baaa8d67ebef        weaveworksdemos/queue-master@sha256:6292d3095f4c7aeed8d863527f8ef6d7a75d3128f20fc61e57f398c100142712   "/usr/local/bin/ja..."   2 minutes ago       Up 2 minutes                            k8s_queue-master_queue-master-787b68b7fd-2tld8_sock-shop_57dca0ab-947b-11e8-8cfb-42010a800002_0
<output_omitted>

student@lfs458-worker:~$ sudo docker ps
CONTAINER ID        IMAGE                                                                                                  COMMAND                  CREATED             STATUS              PORTS               NAMES
9452559caa0d        weaveworksdemos/payment@sha256:5ab1c9877480a018d4dda10d6dfa382776e6bca9fc1c60bacbb80903fde8cfe0        "/app -port=80"          2 minutes ago       Up 2 minutes                            k8s_payment_payment-5df6dc6bcc-k2hbl_sock-shop_57c79b30-947b-11e8-8cfb-42010a800002_0
993017c7b476        weaveworksdemos/user-db@sha256:b43f0f8a76e0c908805fcec74d1ad7f4af4d93c4612632bd6dc20a87508e0b68        "/entrypoint.sh mo..."   2 minutes ago       Up 2 minutes                            k8s_user-db_user-db-586b8566b4-j7f24_sock-shop_58418841-947b-11e8-8cfb-42010a800002_0
1356b0548ee8        weaveworksdemos/orders@sha256:b622e40e83433baf6374f15e076b53893f79958640fc6667dff597622eff03b9         "/usr/local/bin/ja..."   2 minutes ago       Up 2 minutes                            k8s_orders_orders-5c4f477565-gzh7x_sock-shop_57bf7576-947b-11e8-8cfb-42010a800002_0
<output_omitted>
- Now we will shut down the shopping application. This can be done a few different ways. Begin by getting a listing of
resources in all namespaces. There should be about 14 deployments.
student@lfs458-node-1a0a:~$ kubectl get deployment --all-namespaces
NAMESPACE         NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       calico-typha   0         0         0            0           33m
kube-system       coredns        2         2         2            2           33m
low-usage-limit   limited-hog    1         1         1            1           33m
sock-shop         carts          1         1         1            1           6m44s
sock-shop         carts-db       1         1         1            1           6m44s
sock-shop         catalogue      1         1         1            1           6m44s
<output_omitted>
- Use the terminal on the second node to get a count of the current docker containers. It should be something like 30, plus the header line counted by wc. The main system should have something like 26 running, plus the header line.
student@lfs458-node-1a0a:~$ sudo docker ps | wc -l
26

student@lfs458-worker:~$ sudo docker ps | wc -l
30
- In order to complete maintenance we may need to move containers from a node and prevent new ones from deploying. One way to do this is to drain, or cordon, the node. Currently this will not affect DaemonSets, an object we will discuss in greater detail in the future. Begin by getting a list of nodes. Your node names will be different.
student@lfs458-node-1a0a:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
lfs458-worker      Ready    <none>   3h    v1.12.1
lfs458-node-1a0a   Ready    master   3h    v1.12.1
- Working with your second (worker) node, drain the pods from it. Some resources will not drain; expect an error, which we will work with next. Note the error includes aborting command, which indicates the drain did not take place. Were you to check, the node would have the same number of containers running, but it now shows a new taint preventing the scheduler from assigning new pods.
student@lfs458-node-1a0a:~$ kubectl drain lfs458-worker
node/lfs458-worker cordoned
error: unable to drain node "lfs458-worker", aborting command...

There are pending nodes to be drained:
 lfs458-worker
error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-vndn7, kube-proxy-rjjls

student@lfs458-node-1a0a:~$ kubectl describe node |grep -i taint
Taints:             <none>
Taints:             node.kubernetes.io/unschedulable:NoSchedule
- As the error output suggests, we can use the --ignore-daemonsets option to ignore containers which are not intended to move. We will find a new error when we use this command, near the end of the output. The node will continue to have the same number of pods and containers running.
student@lfs458-node-1a0a:~$ kubectl drain lfs458-worker --ignore-daemonsets
node/lfs458-worker cordoned
error: unable to drain node "lfs458-worker", aborting command...

There are pending nodes to be drained:
 lfs458-worker
error: pods with local storage (use --delete-local-data to override): carts-55f7f5c679-ffkq2, carts-db-5c55874946-w728d, orders-7b69bf5686-vtkcn
- Run the command again. This time the output should both indicate the node has already been cordoned, then show the
eviction of several pods. Not all pods will be gone as daemonsets will remain. Note the command is shown on two lines.
You can omit the backslash and type the command on a single line.
student@lfs458-node-1a0a:~$ kubectl drain lfs458-worker \
    --ignore-daemonsets --delete-local-data
node/lfs458-worker already cordoned
WARNING: Ignoring DaemonSet-managed pods: calico-node-vndn7, kube-proxy-rjjls; Deleting pods with local storage: carts-55f7f5c679-ppv7p, carts-db-5c55874946-h42v2, orders-7b69bf5686-t82lz, orders-db-7bc46bdb98-x5zrl
pod/carts-db-5c55874946-h42v2 evicted
pod/orders-db-7bc46bdb98-x5zrl evicted
pod/catalogue-db-66ff5bbbf5-2wmx4 evicted
pod/catalogue-5764fdf6d-8gk96 evicted
pod/orders-7b69bf5686-t82lz evicted
pod/front-end-f99dbcb9c-92q4p evicted
pod/carts-55f7f5c679-ppv7p evicted
- Were you to look on your second (worker) node, you would see fewer pods and containers than before. The remaining pods can only be evicted via a special taint, which we will discuss in the scheduling chapter.
student@lfs458-worker:~$ sudo docker ps | wc -l
6
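You can also ask Kubernetes directly which pods remain scheduled on the drained node; the node name below matches the example cluster and will differ on yours:

kubectl get pods --all-namespaces -o wide | grep lfs458-worker

Only DaemonSet-managed pods, such as calico-node and kube-proxy, should be listed.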
- Update the node taint such that the scheduler will use the node again. Verify that no pods have moved over to the worker node, as the scheduler only considers placement when a pod is deployed.
student@lfs458-node-1a0a:~$ kubectl uncordon lfs458-worker
node/lfs458-worker uncordoned

student@lfs458-node-1a0a:~$ kubectl describe node |grep -i taint
Taints:             <none>
Taints:             <none>

student@lfs458-worker:~$ sudo docker ps | wc -l
6
- As we clean up our sock shop let us see some differences between pods and deployments. Start with a list of the pods
that are running in the sock-shop namespace.
student@lfs458-node-1a0a:~$ kubectl -n sock-shop get pod
NAME                         READY   STATUS    RESTARTS   AGE
carts-db-549516398-tw9zs     1/1     Running   0          6h
catalogue-4293036822-sp5kt   1/1     Running   0          6h
<output_omitted>
- Delete a few resources using the pod name.
student@lfs458-node-1a0a:~$ kubectl -n sock-shop delete pod \
    catalogue-4293036822-sp5kt catalogue-db-1846494424-qzhvk \
    front-end-2337481689-6s65c orders-208161811-1gc6k \
    orders-db-2069777334-4sp01
pod "catalogue-4293036822-sp5kt" deleted
pod "catalogue-db-1846494424-qzhvk" deleted
<output_omitted>
- Check the status of the pods. There should be some pods running for only a few seconds. These will have the same name-stub as the Pods you recently deleted. The Deployment controller noticed the expected number of Pods was not met, so it created new Pods until the current state matched the desired state.
student@lfs458-node-1a0a:~$ kubectl -n sock-shop get pod
NAME                            READY   STATUS        RESTARTS   AGE
catalogue-4293036822-mtz8m      1/1     Running       0          22s
catalogue-db-1846494424-16n2p   1/1     Running       0          22s
front-end-2337481689-6s65c      1/1     Terminating   0          6h
front-end-2337481689-80gwt      1/1     Running       0          22s
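The object doing the re-creation is the ReplicaSet owned by each Deployment. An optional way to see the desired versus current counts:

kubectl -n sock-shop get replicasets

Deleting a pod only changes the current count briefly; deleting the Deployment removes its ReplicaSet as well, which is why the deletions in the next step are permanent.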
- Delete some of the resources via deployments.
student@lfs458-node-1a0a:~$ kubectl -n sock-shop delete deployment \
    catalogue catalogue-db front-end orders
deployment "catalogue" deleted
deployment "catalogue-db" deleted
- Check that both the pods and deployments you removed have not been recreated.
student@lfs458-node-1a0a:~$ kubectl -n sock-shop get pods |grep catalogue

student@lfs458-node-1a0a:~$ kubectl -n sock-shop get deployment
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
carts          1         1         1            1           71m
carts-db       1         1         1            1           71m
orders-db      1         1         1            1           71m
payment        1         1         1            1           71m
queue-master   1         1         1            1           71m
rabbitmq       1         1         1            1           71m
shipping       1         1         1            1           71m
user           1         1         1            1           71m
user-db        1         1         1            1           71m
- Delete the rest of the deployments. Use the same file we created the objects with to delete everything that remains; you will get some errors because we deleted a few deployments by hand. When no resources are left, examine the output of the docker ps command. None of the sock-shop containers should be found.
student@lfs458-node-1a0a:~$ kubectl delete -f complete-demo.yaml
<output_omitted>