1. Let us explore the environment first. How many nodes do you see in the cluster?
Including the controlplane and worker nodes.
# k get nodes
A)2
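(Not required by the lab, but adding -o wide also shows each node's role, internal IP and kubelet version:)
# k get nodes -o wide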
2. How many applications do you see hosted on the cluster?
Check the number of deployments.
# k get deployments.apps
# k describe deployments.apps
A) 1
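If you only need the count, something like this should work too (assuming the default namespace, as in the lab):
# k get deployments --no-headers | wc -l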
3. Which nodes are the applications hosted on?
# k get pods -o wide
A) controlplane, node01
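A sketch for seeing just the pod-to-node mapping, if the wide output feels noisy:
# k get pods -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName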
4. We need to take node01 out for maintenance. Empty the node of all applications and mark it unschedulable.
- Node node01 Unschedulable
- Pods evicted from node01
# k drain node01 --ignore-daemonsets
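drain is roughly a cordon followed by evictions: the node is first marked unschedulable, then the pods are evicted so their controllers recreate them on other nodes. A rough manual equivalent would look like this (the pod name is a placeholder):
# k cordon node01
# k delete pod <pod-on-node01>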
5. What nodes are the apps on now?
# k get pod -o wide
A) controlplane
6. The maintenance tasks have been completed. Configure the node node01 to be schedulable again.
- Node01 is Schedulable
# k uncordon node01
# k get nodes
A) Okay
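To double-check, the node should show plain Ready again (no SchedulingDisabled) and its Unschedulable field should be false:
# k describe node node01 | grep -i unschedulable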
7. How many pods are scheduled on node01 now?
A) 0
8. Why are there no pods on node01?
A) Uncordoning a node does not reschedule existing pods back onto it; only pods created after the uncordon can be scheduled there.
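If you want to see node01 pick up pods again, one option is to scale the deployment up so new pods get scheduled (the deployment name below is a placeholder, check yours with k get deployments):
# k scale deployment <deployment-name> --replicas=4
# k get pods -o wide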
9. Why are the pods placed on the controlplane node?
Check the controlplane node details.
# k describe nodes controlplane
A) The controlplane node has no taints, so pods can be scheduled on it.
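A quicker way to check only the taints of the node:
# k describe node controlplane | grep -i taints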
10. okay
11. We need to carry out a maintenance activity on node01 again. Try draining the node again using the same command as before: kubectl drain node01 --ignore-daemonsets
Did that work?
# kubectl drain node01 --ignore-daemonsets
# NO
12. Why did the drain command fail on node01? It worked the first time!
# kubectl drain node01 --ignore-daemonsets
: cannot delete pods not managed by ReplicationController
A) There is a pod on node01 that is not managed by a ReplicaSet, so drain refuses to delete it.
13. What is the name of the pod hosted on node01 that is not part of a ReplicaSet?
# k get pods -o wide
A) hr-app
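You can confirm it has no owning controller from its ownerReferences; a ReplicaSet-managed pod has an entry there, while a standalone pod like hr-app prints nothing (a sketch):
# k get pod hr-app -o jsonpath='{.metadata.ownerReferences}'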
14. What would happen to hr-app if node01 is drained forcefully?
Try it and see for yourself.
# kubectl drain node01 --ignore-daemonsets --force
: since hr-app is not managed by any controller, a forced drain simply deletes it and nothing recreates it
A) hr-app will be lost forever
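One precaution (not part of the lab): save the pod's manifest before a forced drain so its definition is not lost; the exported YAML usually needs some cleanup (status, nodeName) before re-applying:
# k get pod hr-app -o yaml > hr-app.yaml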
15. Oops! We did not want to do that! hr-app is a critical application that should not be destroyed. We have now reverted to the previous state and re-deployed hr-app as a deployment.
ok
16. hr-app is a critical app and we do not want it to be removed, and we do not want to schedule any more pods on node01. Mark node01 as unschedulable so that no new pods are scheduled on this node.
Make sure that hr-app is not affected.
- Node01 Unschedulable
- hr-app still running on node01
# k cordon node01
# k get nodes
# k get pods -o wide
A) ok
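Under the hood cordon only sets the node's spec.unschedulable field to true; nothing is evicted, which is why hr-app keeps running. A quick way to check:
# k get node node01 -o jsonpath='{.spec.unschedulable}'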