Services
Exercise 8.1: Deploy A New Service
Overview
Services (also called microservices) are objects which declare a policy for accessing a logical set of Pods. They typically select Pods by label, which allows persistent access to a resource even as front-end or back-end containers are terminated and replaced.
Native applications can use the Endpoints API for access. Non-native applications can use a virtual-IP-based bridge to access the back-end Pods. The available Service types are:
- ClusterIP: the default type. Exposes the Service on a cluster-internal IP, reachable only from within the cluster.
- NodePort: exposes the Service on each node's IP at a static port. A ClusterIP is also created automatically.
- LoadBalancer: exposes the Service externally using a cloud provider's load balancer. A NodePort and ClusterIP are created automatically.
- ExternalName: maps the Service to the contents of the externalName field by returning a CNAME record.
We use Services as part of decoupling, such that any agent or object can be replaced without interrupting access from clients to the back-end application.
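As a sketch of how the fields above fit together, a NodePort Service might be declared like this. The name, label, and port values here are illustrative placeholders, not objects created in this exercise:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc              # Hypothetical name, for illustration only
spec:
  type: NodePort             # One of ClusterIP, NodePort, LoadBalancer, ExternalName
  selector:
    app: web                 # Traffic is routed to Pods carrying this label
  ports:
  - port: 80                 # Port exposed on the Service's ClusterIP
    targetPort: 80           # Port the container listens on
    nodePort: 30080          # Static port opened on every node (30000-32767 by default)
```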
Deploy A New Service
- Deploy two nginx servers using kubectl and a new .yaml file. We will use the extensions/v1beta1 version of the API. The kind should be Deployment, and it should carry an nginx label. Create two replicas and expose port 8080. What follows is a well-documented file. There is no need to include the comments when you create the file. This file can also be found among the other examples in the tarball.
student@lfs458-node-1a0a:~$ vim nginx-one.yaml

apiVersion: extensions/v1beta1   # Determines YAML versioned schema.
kind: Deployment                 # Describes the resource defined in this file.
metadata:
  name: nginx-one
  labels:
    system: secondary            # Required string which defines object within namespace.
  namespace: accounting          # Existing namespace resource will be deployed into.
spec:
  replicas: 2                    # How many Pods of following containers to deploy
  template:
    metadata:
      labels:
        app: nginx               # Some string meaningful to users, not cluster. Keys
                                 # must be unique for each object. Allows for mapping
                                 # to customer needs.
    spec:
      containers:                # Array of objects describing containerized application
                                 # with a Pod. Referenced with shorthand
                                 # spec.template.spec.containers
      - image: nginx:1.7.9       # The Docker image to deploy
        imagePullPolicy: Always
        name: nginx              # Unique name for each container, use local or Docker repo image
        ports:
        - containerPort: 8080
          protocol: TCP
                                 # Optional resources this container may need to function.
      nodeSelector:
        system: secondOne        # One method of node affinity.
- View the existing labels on the nodes in the cluster.
student@lfs458-node-1a0a:~$ kubectl get nodes --show-labels
<output_omitted>
- Run the following command and look for the errors. Assuming there is no typo, you should have gotten an error about the accounting namespace.
student@lfs458-node-1a0a:~$ kubectl create -f nginx-one.yaml
Error from server (NotFound): error when creating "nginx-one.yaml": namespaces "accounting" not found
- Create the namespace and try to create the deployment again. There should be no errors this time.
student@lfs458-node-1a0a:~$ kubectl create ns accounting
namespace/accounting created

student@lfs458-node-1a0a:~$ kubectl create -f nginx-one.yaml
deployment.extensions/nginx-one created
- View the status of the new Pods. Note they do not show a Running status.
student@lfs458-node-1a0a:~$ kubectl -n accounting get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-one-74dd9d578d-fcpmv   0/1     Pending   0          4m
nginx-one-74dd9d578d-r2d67   0/1     Pending   0          4m
- View the node each has been assigned to (or not) and the reason, which shows under events at the end of the output.
student@lfs458-node-1a0a:~$ kubectl -n accounting describe pod \
nginx-one-74dd9d578d-fcpmv
Name:        nginx-one-74dd9d578d-fcpmv
Namespace:   accounting
Node:        <none>
<output_omitted>
Events:
  Type     Reason            Age                   From               ....
  ----     ------            ----                  ----
  Warning  FailedScheduling  37s (x25 over 2m29s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match node selector.
- Label the secondary node. Verify the labels.
student@lfs458-node-1a0a:~$ kubectl label node lfs458-worker \
system=secondOne
node/lfs458-worker labeled

student@lfs458-node-1a0a:~$ kubectl get nodes --show-labels
NAME               STATUS   ROLES    AGE    VERSION   LABELS
lfs458-node-1a0a   Ready    master   1d1h   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=lfs458-node-1a0a,node-role.kubernetes.io/master=
lfs458-worker      Ready    <none>   1d1h   v1.12.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=lfs458-worker,system=secondOne
- View the pods in the accounting namespace. They may still show as Pending. Depending on how long it has been since you attempted the deployment, the system may not have re-checked for the label. If the Pods still show Pending after a minute, delete one of them. Both should show as Running after the deletion, since a change in state causes the Deployment controller to check the status of both Pods.
student@lfs458-node-1a0a:~$ kubectl -n accounting get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-one-74dd9d578d-fcpmv   1/1     Running   0          10m
nginx-one-74dd9d578d-sts5l   1/1     Running   0          3s
- View Pods by the label we set in the YAML file. If you look back, the Pods were given a label of app=nginx.
student@lfs458-node-1a0a:~$ kubectl get pods -l app=nginx --all-namespaces
NAMESPACE    NAME                         READY   STATUS    RESTARTS   AGE
accounting   nginx-one-74dd9d578d-fcpmv   1/1     Running   0          20m
accounting   nginx-one-74dd9d578d-sts5l   1/1     Running   0          9m
- Recall that we exposed port 8080 in the YAML file. Expose the new deployment.
student@lfs458-node-1a0a:~$ kubectl -n accounting expose deployment nginx-one
service/nginx-one exposed
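For reference, the expose command generates a Service roughly equivalent to the manifest below. This is a sketch inferred from the deployment's fields, not captured cluster output:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-one            # Defaults to the deployment name
  namespace: accounting
spec:
  type: ClusterIP            # Default type when --type is not given
  selector:
    app: nginx               # Matches the Pod template label from nginx-one.yaml
  ports:
  - port: 8080               # Taken from the exposed containerPort
    targetPort: 8080
```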
- View the newly exposed endpoints. Note that port 8080 has been exposed on each Pod.
student@lfs458-node-9q6r:~$ kubectl -n accounting get ep nginx-one
NAME        ENDPOINTS                             AGE
nginx-one   192.168.1.72:8080,192.168.1.73:8080   47s
- Attempt to access the Pod on port 8080, then on port 80. Even though we exposed port 8080 of the container, the application within has not been configured to listen on that port. The nginx server listens on port 80 by default. A curl command to that port should return the typical welcome page.
student@lfs458-node-1a0a:~$ curl 192.168.1.72:8080
curl: (7) Failed to connect to 192.168.1.72 port 8080: Connection refused

student@lfs458-node-1a0a:~$ curl 192.168.1.72:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<output_omitted>
- Delete the deployment. Edit the YAML file to expose port 80 and create the deployment again.
student@lfs458-node-1a0a:~$ kubectl -n accounting delete deploy nginx-one
deployment.extensions "nginx-one" deleted

student@lfs458-node-1a0a:~$ vim nginx-one.yaml

student@lfs458-node-1a0a:~$ kubectl create -f nginx-one.yaml
deployment.extensions/nginx-one created
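The edit to nginx-one.yaml only needs to touch the ports stanza of the container; a sketch of the changed section:

```yaml
        ports:
        - containerPort: 80   # Was 8080; nginx listens on port 80 by default
          protocol: TCP
```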
Exercise 8.2: Configure a NodePort
In a previous exercise we deployed a LoadBalancer, which created a ClusterIP and a NodePort automatically. In this exercise we will deploy a NodePort. While you can access a container from within the cluster, a NodePort can be used to NAT traffic from outside the cluster. One reason to deploy a NodePort instead of a LoadBalancer is that a LoadBalancer also allocates a load balancer resource from cloud providers such as GKE and AWS.
- In a previous step we were able to view the nginx page using the internal Pod IP address. Now expose the deployment
using the --type=NodePort. We will also give it an easy to remember name and place it in the accounting namespace.
We could pass the port as well, which could help with opening ports in the firewall.
student@lfs458-node-1a0a:~$ kubectl -n accounting expose deployment \
nginx-one --type=NodePort --name=service-lab
service/service-lab exposed
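The equivalent Service declared in YAML would look roughly like this sketch; the nodePort value is omitted because the cluster assigns one automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-lab          # From --name=service-lab
  namespace: accounting
spec:
  type: NodePort             # From --type=NodePort
  selector:
    app: nginx               # Matches the Pod template label from nginx-one.yaml
  ports:
  - port: 80
    targetPort: 80
    # nodePort is auto-assigned from the node port range unless set explicitly
```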
- View the details of the services in the accounting namespace. We are looking for the autogenerated port.
student@lfs458-node-1a0a:~$ kubectl -n accounting describe services
....
NodePort:   <unset>  32103/TCP
....
- Locate the exterior-facing IP address of the cluster. As we are using GCP nodes, which we access via a FloatingIP, we will first check the internal IP address. Look for the Kubernetes master URL.
student@lfs458-node-1a0a:~$ kubectl cluster-info
Kubernetes master is running at https://10.128.0.3:6443
KubeDNS is running at https://10.128.0.3:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
- Test access to the nginx web server using the combination of the master URL and the NodePort.
student@lfs458-node-1a0a:~$ curl http://10.128.0.3:32103
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
- Using the browser on your local system, use the public IP address you use to SSH into your node and the port. You should still see the nginx default page.
Exercise 8.3: Use Labels to Manage Resources
- Try to delete all Pods with the app=nginx label, in all namespaces. You should receive an error as this function must be
narrowed to a particular namespace. Then delete using the appropriate namespace.
student@lfs458-node-1a0a:~$ kubectl delete pods -l app=nginx \
--all-namespaces
Error: unknown flag: --all-namespaces
<output_omitted>

student@lfs458-node-1a0a:~$ kubectl -n accounting delete pods -l app=nginx
pod "nginx-one-74dd9d578d-fcpmv" deleted
pod "nginx-one-74dd9d578d-sts5l" deleted
- View the Pods again. New versions of the Pods should be running, as the controller responsible for them continues to maintain the desired number of replicas.
student@lfs458-node-1a0a:~$ kubectl -n accounting get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-one-74dd9d578d-ddt5r   1/1     Running   0          1m
nginx-one-74dd9d578d-hfzml   1/1     Running   0          1m
- We also gave a label to the deployment. View the deployment in the accounting namespace.
student@lfs458-node-1a0a:~$ kubectl -n accounting get deploy --show-labels
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   LABELS
nginx-one   2         2         2            2           27m   system=secondary
- Delete the deployment using its label.
student@lfs458-node-1a0a:~$ kubectl -n accounting delete deploy \
-l system=secondary
deployment.extensions/nginx-one deleted
- Remove the label from the secondary node. Note that the syntax is a minus sign directly after the key you want to
remove, or system in this case.
student@lfs458-node-1a0a:~$ kubectl label node lfs458-worker system-
node/lfs458-worker labeled