First question:
Task weight: 1%
You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.
Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.
Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.
Parse:
The task is about working with multiple clusters: list all kubeconfig contexts, print the current context using kubectl, and print the current context without using kubectl.
Answer:
1. kubectl config get-contexts
2. kubectl config current-context
3. cat $HOME/.kube/config | grep current
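For reference, a minimal sketch of the three files; using -o name for the first command (an option not shown in the answer above) keeps only the context names:

kubectl config get-contexts -o name > /opt/course/1/contexts

# /opt/course/1/context_default_kubectl.sh
kubectl config current-context

# /opt/course/1/context_default_no_kubectl.sh
cat $HOME/.kube/config | grep current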
Second question:
Task weight: 3%
Use context: kubectl config use-context k8s-c1-H
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node, do not add new labels to any nodes.
Parse:
This question asks you to create a Pod with a given name, a given container name and image, in Namespace default, scheduled onto a master node without adding any new node labels.
It tests the distinction between Pod name and container name, as well as Pod-to-node scheduling.
1,
Quickly generate a template file for the Pod with a dry-run:
kubectl run pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml > pod1.yaml
The template file looks like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
2,
Modify the template according to the task. The Pod can be pinned to the master node either with a nodeSelector (plus a toleration) or directly with nodeName; both work, but nodeName is simpler.
nodeSelector method:
This is the more general approach, but it requires more YAML, and the Pod must also tolerate the master node's NoSchedule taint.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                    # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                              # add
  - effect: NoSchedule                      # add
    key: node-role.kubernetes.io/master     # add
  nodeSelector:                             # add
    node-role.kubernetes.io/master: ""      # add
status: {}
The node labels are as follows. Note that on a cluster deployed from binaries this method would not work, because the role labels do not exist:
root@k8s-master:~# kubectl get no --show-labels
NAME         STATUS   ROLES                  AGE     VERSION    LABELS
k8s-master   Ready    control-plane,master   372d    v1.22.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-node1    Ready    <none>                 2d17h   v1.22.2    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2    Ready    <none>                 2d17h   v1.22.2    beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
nodeName method:
First query the exact name of the master node:
root@k8s-master:~# kubectl get no
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   372d    v1.22.10
k8s-node1    Ready    <none>                 2d17h   v1.22.2
k8s-node2    Ready    <none>                 2d17h   v1.22.2
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container     # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeName: k8s-master
3,
Apply the template file to create the Pod:
kubectl apply -f pod1.yaml
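To verify the scheduling, check which node the Pod landed on (the node name reflects this demo cluster; in the exam it would be a cluster1 master node):

kubectl get pod pod1 -o wide    # NODE column should show k8s-master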
Third question:
Task weight: 1%
Use context: kubectl config use-context k8s-c1-H
There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources.
Parse:
This one is relatively simple. Querying the Pods shows names with ordinal suffixes (o3db-0, o3db-1), so they are managed by a StatefulSet.
Therefore, simply scale (or edit) that StatefulSet down to one replica, as sketched below.
Query the Pods again afterwards to confirm that only one replica remains.
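A minimal sketch of both steps, assuming the StatefulSet is named o3db, matching the Pod name prefix:

kubectl -n project-c13 scale sts o3db --replicas 1
kubectl -n project-c13 get pod | grep o3db    # only o3db-0 should remain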
Fourth question:
Task weight: 4%
Use context: kubectl config use-context k8s-c1-H
Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn't ready because of the ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.
Parse:
This question covers livenessProbe and readinessProbe (the liveness and readiness probes).
The livenessProbe simply runs true. For the readinessProbe, use the suggested command wget -T2 -O- http://service-am-i-ready:80: a plain port check against the container's own port 80 would succeed immediately, so the Pod would report Ready even before the Service has endpoints, which contradicts the requirement to confirm it isn't ready at first.
The Service already exists in the environment and its selector must match the second Pod's label, so the second Pod must be created with exactly that label.
Check the existing Service (for example with kubectl get svc service-am-i-ready -o yaml) and confirm it selects id: cross-server-ready:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"id":"cross-server-ready"},"name":"service-am-i-ready","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"id":"cross-server-ready"}},"status":{"loadBalancer":{}}}
  creationTimestamp: "2022-09-29T14:30:58Z"
  labels:
    id: cross-server-ready
  name: service-am-i-ready
  namespace: default
  resourceVersion: "4761"
  uid: 03981930-13d9-4133-8e23-9704c2a24807
spec:
  clusterIP: 10.109.238.68
  clusterIPs:
  - 10.109.238.68
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    id: cross-server-ready
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The readinessProbe written with the suggested wget command:
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - 'wget -T2 -O- http://service-am-i-ready:80'
The first pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    ports:
    - containerPort: 80
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      exec:
        command:
        - 'true'
      initialDelaySeconds: 5
      periodSeconds: 5
  dnsPolicy: ClusterFirst
  restartPolicy: Always
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    id: cross-server-ready
  name: am-i-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: am-i-ready
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
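To confirm the behaviour described in the task (the manifest file names here are just assumptions):

kubectl apply -f ready-if-service-ready.yaml
kubectl get pod ready-if-service-ready    # READY 0/1 while the Service has no endpoints
kubectl apply -f am-i-ready.yaml
kubectl get ep service-am-i-ready         # should now list the am-i-ready Pod IP
kubectl get pod ready-if-service-ready    # READY 1/1 once the wget probe succeeds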
Fifth question:
Task weight: 1%
Use context: kubectl config use-context k8s-c1-H
There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.
Parse:
This question is relatively simple; it only requires writing the query commands into the files.
The first command lists all Pods in all Namespaces sorted by creation time and goes into /opt/course/5/find_pods.sh.
The second command lists all Pods sorted by metadata.uid and goes into /opt/course/5/find_pods_uid.sh.
cat /opt/course/5/find_pods.sh
kubectl get po --sort-by {.metadata.creationTimestamp} -A

cat /opt/course/5/find_pods_uid.sh
kubectl get po --sort-by {.metadata.uid} -A
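The scripts can simply be executed to verify the sorting:

sh /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods_uid.sh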
Sixth question:
Task weight: 8%
Use context: kubectl config use-context k8s-c1-H
Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
Parse:
This question is of medium difficulty; the YAML can be copied from the official documentation. Once the PVC is created it binds to a suitable PV on its own: since neither object defines a storageClassName, binding happens by matching capacity and access mode, and the 2Gi / ReadWriteOnce PV created here is the only match.
PV creation:
cat safari-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  hostPath:
    path: /Volumes/Data
PVC creation:
cat safari-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
After the PV and PVC are created, check their status; both should show Bound:
k8s@terminal:~$ kubectl get pv,pvc -A
NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
persistentvolume/safari-pv   2Gi        RWO            Delete           Bound    project-tiger/safari-pvc                           4h2m

NAMESPACE       NAME                               STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
project-tiger   persistentvolumeclaim/safari-pvc   Bound    safari-pv   2Gi        RWO                           3h55m
Finally, use the PVC in the Deployment as the task requires:
cat safari-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      labels:
        app: safari
    spec:
      containers:
      - image: httpd:2.4.41-alpine
        name: safari
        resources: {}
        volumeMounts:                 # mount path inside the Pod
        - name: safari
          mountPath: /tmp/safari-data
      volumes:
      - name: safari                  # must match the volumeMounts name above
        persistentVolumeClaim:
          claimName: safari-pvc
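To check that the Deployment is up and the volume is mounted where the task expects it (the app=safari label comes from the Deployment above):

kubectl -n project-tiger get deploy safari
kubectl -n project-tiger describe pod -l app=safari | grep -A2 Mounts    # should show /tmp/safari-data from the PVC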
Seventh question:
Task weight: 1%
Use context: kubectl config use-context k8s-c1-H
The metrics-server has been installed in the cluster. Your colleague would like to know the kubectl commands to:
- show Nodes resource usage
- show Pods and their containers resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.
Parse:
The metrics-server is already installed in the environment, so nothing needs to be set up; just write the two commands into the corresponding files. Note that Nodes are not namespaced, so kubectl top node does not take an -A flag.
cat /opt/course/7/node.sh
kubectl top node

cat /opt/course/7/pod.sh
kubectl top pod --containers=true
Eighth question:
Task weight: 2%
Use context: kubectl config use-context k8s-c1-H
Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it's started/installed on the master node.
Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:
# /opt/course/8/master-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod
Parse:
First log in with ssh cluster1-master1, then check how each component is started: look for regular processes (systemd services), for manifests in /etc/kubernetes/manifests/ (static Pods, whose names in kubectl get po -A carry the node name as a suffix), and for normal Pods. The allowed values are given in the task; fill in the file accordingly. The answer is below.
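A few commands that make the distinction visible on cluster1-master1 (a sketch, assuming a kubeadm-style setup):

ps aux | grep kubelet                   # kubelet runs as a regular process (systemd service)
ls /etc/kubernetes/manifests/           # kube-apiserver, kube-scheduler, kube-controller-manager, etcd -> static-pod
kubectl -n kube-system get pod -o wide  # coredns runs as normal Pods managed by a Deployment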
Answer:
cat /opt/course/8/master-components.txt
# /opt/course/8/master-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns
Ninth question:
Task weight: 5%
Use context: kubectl config use-context k8s-c2-AC
Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.
Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.
Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-master1. Make sure it's running.
Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-worker1.
Parse:
This question mainly tests static Pods: the kube-scheduler is stopped by moving /etc/kubernetes/manifests/kube-scheduler.yaml out of the manifests directory and started again by moving it back. It also tests manually scheduling a Pod via nodeName.
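A minimal sketch of stopping and restarting the scheduler; the temporary target directory is arbitrary, as long as it is outside manifests:

ssh cluster2-master1
mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes/    # kubelet stops the static Pod
# ... create and manually schedule the Pod as shown below ...
mv /etc/kubernetes/kube-scheduler.yaml /etc/kubernetes/manifests/    # kubelet starts it again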
cat manual-schedule.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: manual-schedule
  name: manual-schedule
spec:
  containers:
  - image: httpd:2.4-alpine
    name: manual-schedule
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  nodeName: cluster2-master1
status: {}
For the second Pod, do not set nodeName; the point is to confirm that the restarted scheduler places it by itself (it should end up on cluster2-worker1, since the master node carries a NoSchedule taint):
cat manual-schedule2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: manual-schedule2
  name: manual-schedule2
spec:
  containers:
  - image: httpd:2.4-alpine
    name: manual-schedule2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
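To confirm placement of both Pods:

kubectl get pod manual-schedule -o wide     # NODE should be cluster2-master1
kubectl get pod manual-schedule2 -o wide    # NODE should be cluster2-worker1 once the scheduler is back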
Tenth question:
Task weight: 6%
Use context: kubectl config use-context k8s-c1-H
Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.
Parse:
This question tests RBAC: a ServiceAccount, a Role and a RoleBinding, all named processor, restricted to creating Secrets and ConfigMaps in the Namespace.
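Create the ServiceAccount first, as the task requires:

kubectl -n project-hamster create serviceaccount processor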
Create the Role:
kubectl -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: processor
  namespace: project-hamster
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - create
Create the RoleBinding:
kubectl -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: processor
  namespace: project-hamster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: processor
subjects:
- kind: ServiceAccount
  name: processor
  namespace: project-hamster
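The permissions can then be verified with kubectl auth can-i, impersonating the ServiceAccount:

kubectl -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor      # yes
kubectl -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor   # yes
kubectl -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor      # no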