Kubernetes affinity features
1 Introduction
1.1 Node affinity
The Kubernetes node affinity feature is similar to the node selector feature: it lets you require the Kubernetes Scheduler to place Pods on nodes that carry certain labels.
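As a minimal sketch (disktype=ssd is a placeholder label, not one used in the tests below), the simple node selector form and the equivalent hard node affinity form in a pod spec:

spec:
  nodeSelector:
    disktype: ssd                 # simple form: node must carry this label

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd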
1.1.1 Diagram

1.2 Pod affinity
Kubernetes pod affinity lets you require the Kubernetes Scheduler to place a Pod on a node that is already running pods carrying certain labels.
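A minimal pod affinity sketch (app=web is a placeholder pod label; topologyKey defines what counts as "the same place", here a single node):

spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app: web              # placeholder label of the pods to co-locate with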
1.2.1 Diagram

2 Background
- Kubernetes nodes join the LINSTOR cluster as diskless nodes, so that Kubernetes Pods can use LINSTOR persistent volumes.
(diagram: Kubernetes nodes joined to the LINSTOR cluster as diskless nodes)
- Multiple Pods of one Deployment may use the same LINSTOR persistent volume. On the node where the volume is mounted, the DRBD resource is in the Primary state; if a Pod is then scheduled to another node, the DRBD resource there is not Primary, the volume cannot be mounted, and the Pod cannot start. Pod affinity is therefore needed to keep all Pods sharing one LINSTOR persistent volume on the same node.

- In some clusters only part of the Kubernetes nodes have joined the LINSTOR cluster while Pods still use LINSTOR persistent volumes. Node affinity is therefore needed so that these Pods are scheduled only among the nodes that have joined the LINSTOR cluster.
(diagram: only some of the Kubernetes nodes have joined the LINSTOR cluster)
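For reference, the LINSTOR persistent volume claim used later in section 5 (claimName: testpvc) could be created with a sketch like this; the StorageClass name linstor-csi and the size are assumptions, substitute the class actually defined in your cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  storageClassName: linstor-csi   # assumed name of the LINSTOR CSI StorageClass
  accessModes:
  - ReadWriteOnce                 # the DRBD resource is Primary on one node at a time
  resources:
    requests:
      storage: 1Gi                # assumed size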
3 Tests
3.1 Node affinity
3.1.1 Check node labels
root@ubuntu:~# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s1 Ready control-plane,master 44d v1.20.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s2 Ready control-plane,master 44d v1.20.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s2,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s3 Ready control-plane,master 44d v1.20.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s3,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
k8s4 Ready <none> 44d v1.20.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s4,kubernetes.io/os=linux,linbit.com/hostname=k8s4,linbit.com/sp-DfltDisklessStorPool=true,linbit.com/sp-poolk8s=true,lstpvc=weight1
k8s5 Ready <none> 44d v1.20.5 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s5,kubernetes.io/os=linux,linbit.com/hostname=k8s5,linbit.com/sp-DfltDisklessStorPool=true,linbit.com/sp-poolk8s=true,lstpvc=weight1
3.1.2 Write a Deployment YAML file with node affinity
root@ubuntu:/home/ubuntu/k8syaml/lst# cat affinitynode.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeaffinity
spec:
  replicas: 20
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx3
  template:
    metadata:
      labels:
        app: nginx3
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: nginx
                operator: In
                values:
                - alpine
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
This requires the Pods to run only on nodes carrying the label nginx=alpine.
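matchExpressions also supports operators other than In (NotIn, Exists, DoesNotExist, Gt, Lt); for example, to match any node that has the nginx label regardless of its value:

- matchExpressions:
  - key: nginx
    operator: Exists              # no values list is needed with Exists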
3.1.3 Apply the file and check pod status
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl apply -f affinitynode.yaml
deployment.apps/nodeaffinity created
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod | grep nodeaffinity
nodeaffinity-64df9476f7-5l9fh 0/1 Pending 0 115s
nodeaffinity-64df9476f7-5lw28 0/1 Pending 0 115s
nodeaffinity-64df9476f7-757ff 0/1 Pending 0 116s
nodeaffinity-64df9476f7-7bhqz 0/1 Pending 0 116s
nodeaffinity-64df9476f7-7h4hf 0/1 Pending 0 115s
nodeaffinity-64df9476f7-9mqpr 0/1 Pending 0 116s
nodeaffinity-64df9476f7-9r44r 0/1 Pending 0 115s
nodeaffinity-64df9476f7-9sqzv 0/1 Pending 0 116s
nodeaffinity-64df9476f7-b4697 0/1 Pending 0 116s
nodeaffinity-64df9476f7-cdmm6 0/1 Pending 0 116s
nodeaffinity-64df9476f7-crxfq 0/1 Pending 0 116s
nodeaffinity-64df9476f7-hlnxl 0/1 Pending 0 116s
nodeaffinity-64df9476f7-k9tws 0/1 Pending 0 116s
nodeaffinity-64df9476f7-ldf8d 0/1 Pending 0 116s
nodeaffinity-64df9476f7-q2mkq 0/1 Pending 0 116s
nodeaffinity-64df9476f7-qwzz4 0/1 Pending 0 116s
nodeaffinity-64df9476f7-tm7qd 0/1 Pending 0 116s
nodeaffinity-64df9476f7-vgmn7 0/1 Pending 0 115s
nodeaffinity-64df9476f7-wxw4b 0/1 Pending 0 116s
nodeaffinity-64df9476f7-xm9nr 0/1 Pending 0 116s
All the Pods are Pending, because no node carries the nginx=alpine label yet.
3.1.4 Add the nginx=alpine label to k8s4
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl label node k8s4 nginx=alpine
node/k8s4 labeled
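Should the label need to be removed again later, kubectl uses a trailing minus sign:

kubectl label node k8s4 nginx-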
3.1.5 Check pod status
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod -o wide| grep nodeaffinity
nodeaffinity-64df9476f7-5l9fh 1/1 Running 0 4m18s 10.244.3.180 k8s4 <none> <none>
nodeaffinity-64df9476f7-5lw28 1/1 Running 0 4m18s 10.244.3.177 k8s4 <none> <none>
nodeaffinity-64df9476f7-757ff 1/1 Running 0 4m19s 10.244.3.168 k8s4 <none> <none>
nodeaffinity-64df9476f7-7bhqz 1/1 Running 0 4m19s 10.244.3.166 k8s4 <none> <none>
nodeaffinity-64df9476f7-7h4hf 1/1 Running 0 4m18s 10.244.3.184 k8s4 <none> <none>
nodeaffinity-64df9476f7-9mqpr 1/1 Running 0 4m19s 10.244.3.178 k8s4 <none> <none>
nodeaffinity-64df9476f7-9r44r 1/1 Running 0 4m18s 10.244.3.181 k8s4 <none> <none>
nodeaffinity-64df9476f7-9sqzv 1/1 Running 0 4m19s 10.244.3.172 k8s4 <none> <none>
nodeaffinity-64df9476f7-b4697 1/1 Running 0 4m19s 10.244.3.171 k8s4 <none> <none>
nodeaffinity-64df9476f7-cdmm6 1/1 Running 0 4m19s 10.244.3.169 k8s4 <none> <none>
nodeaffinity-64df9476f7-crxfq 1/1 Running 0 4m19s 10.244.3.165 k8s4 <none> <none>
nodeaffinity-64df9476f7-hlnxl 1/1 Running 0 4m19s 10.244.3.175 k8s4 <none> <none>
nodeaffinity-64df9476f7-k9tws 1/1 Running 0 4m19s 10.244.3.173 k8s4 <none> <none>
nodeaffinity-64df9476f7-ldf8d 1/1 Running 0 4m19s 10.244.3.179 k8s4 <none> <none>
nodeaffinity-64df9476f7-q2mkq 1/1 Running 0 4m19s 10.244.3.174 k8s4 <none> <none>
nodeaffinity-64df9476f7-qwzz4 1/1 Running 0 4m19s 10.244.3.176 k8s4 <none> <none>
nodeaffinity-64df9476f7-tm7qd 1/1 Running 0 4m19s 10.244.3.183 k8s4 <none> <none>
nodeaffinity-64df9476f7-vgmn7 1/1 Running 0 4m18s 10.244.3.170 k8s4 <none> <none>
nodeaffinity-64df9476f7-wxw4b 1/1 Running 0 4m19s 10.244.3.182 k8s4 <none> <none>
nodeaffinity-64df9476f7-xm9nr 1/1 Running 0 4m19s 10.244.3.167 k8s4 <none> <none>
All the Pods run on k8s4, the node carrying the nginx=alpine label.
3.2 Pod affinity
3.2.1 Check the labels of the existing Pods on the worker nodes
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
ng1 1/1 Running 0 23h 10.244.4.212 k8s5 <none> <none> app=nginx4,run=ng1
ng2-5c4c569779-25kcq 1/1 Running 0 46h 10.244.4.171 k8s5 <none> <none> app=ng2,pod-template-hash=5c4c569779
ng2-5c4c569779-nwf6f 1/1 Running 0 46h 10.244.4.172 k8s5 <none> <none> app=ng2,pod-template-hash=5c4c569779
3.2.2 Write a Deployment YAML file with pod affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podaffinity
spec:
  replicas: 5
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchExpressions:   # unlike nodeSelectorTerms under nodeAffinity, labelSelector is a single object, so there is no leading dash here
              - key: nginx
                operator: In
                values:
                - alpine
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
This requires the Pods to run only on a node that is already running a pod carrying the label nginx=alpine.
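The inverse, podAntiAffinity, uses the same term structure to keep pods apart; a minimal sketch that would prevent two pods carrying the app: nginx label from sharing a node:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app: nginx            # keep pods with this label off the same node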
3.2.3 Apply the file and check pod status
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl apply -f affinitypod.yaml
deployment.apps/podaffinity created
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod | grep podaffinity
podaffinity-6b4685c8f5-7tcks 0/1 Pending 0 17s
podaffinity-6b4685c8f5-jq4w4 0/1 Pending 0 17s
podaffinity-6b4685c8f5-str5t 0/1 Pending 0 17s
podaffinity-6b4685c8f5-v2cpc 0/1 Pending 0 17s
podaffinity-6b4685c8f5-xr6x4 0/1 Pending 0 17s
All the Pods are Pending, because no node is running a pod with the nginx=alpine label yet.
3.2.4 Add the nginx=alpine label to a Pod on k8s5
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl label pod ng1 nginx=alpine
pod/ng1 labeled
3.2.5 Check pod status
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
ng1 1/1 Running 0 23h 10.244.4.212 k8s5 <none> <none> app=nginx4,nginx=alpine,run=ng1
ng2-5c4c569779-25kcq 1/1 Running 0 46h 10.244.4.171 k8s5 <none> <none> app=ng2,pod-template-hash=5c4c569779
ng2-5c4c569779-nwf6f 1/1 Running 0 46h 10.244.4.172 k8s5 <none> <none> app=ng2,pod-template-hash=5c4c569779
podaffinity-6b4685c8f5-7tcks 1/1 Running 0 2m59s 10.244.4.221 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-jq4w4 1/1 Running 0 2m59s 10.244.4.222 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-str5t 1/1 Running 0 2m59s 10.244.4.219 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-v2cpc 1/1 Running 0 2m59s 10.244.4.220 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-xr6x4 1/1 Running 0 2m59s 10.244.4.218 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
All the Pods run on the same node (k8s5) as the pod carrying the nginx=alpine label.
4 Hard & soft rules
All the examples above use hard-rule affinity (requiredDuringSchedulingIgnoredDuringExecution): if no node satisfies the rule, the Pods stay Pending indefinitely.
There is also soft-rule affinity (preferredDuringSchedulingIgnoredDuringExecution): if the rule cannot be satisfied, Kubernetes still schedules the Pods onto an otherwise suitable node.
A soft rule is used in section 5.2 below.
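For reference, a minimal soft-rule node affinity sketch (reusing the nginx=alpine test label from section 3.1; a soft term carries a weight from 1 to 100):

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1                 # 1-100; higher weight = stronger preference
        preference:
          matchExpressions:
          - key: nginx
            operator: In
            values:
            - alpine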
5 Using the affinity features with LINSTOR persistent volumes
5.1 Choice of affinity rules
- First give the worker nodes that have joined the LINSTOR cluster a common label (see the command sketch after this list)
- Add a hard-rule node affinity to the Deployment YAML file
- Add a soft-rule pod affinity to the Deployment YAML file (with a hard-rule pod affinity, failover would break: suppose the Pods are running on node A and A goes down; Kubernetes detects the outage and the Pods enter the Terminating state, but no other node in the cluster satisfies the pod affinity rule, so the Pods cannot fail over to another node. A soft-rule pod affinity is therefore required)
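The common label used in this cluster is lstpvc=weight1 (visible in the node labels in section 3.1.1); it can be applied with, for example:

kubectl label nodes k8s4 k8s5 lstpvc=weight1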
5.2 The Deployment YAML file
root@ubuntu:/home/ubuntu/k8syaml/lst# cat affinityprefer.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
spec:
  replicas: 20
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx3
  template:
    metadata:
      labels:
        app: nginx3
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: lstpvc
                operator: In
                values:
                - weight1
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname   # the podAffinityTerm is a single object; only the weight entry above is a list item
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx3
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: linstor-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: linstor-volume
        persistentVolumeClaim:
          claimName: testpvc
The pod affinity in this YAML file matches the label app=nginx3, which is exactly the label carried by the Pods this same file creates. Testing shows that in this case no pod with app=nginx3 needs to exist beforehand: the soft rule does not block the first Pod, and the remaining Pods then prefer the node it landed on.
5.3 Apply the file and check pod status
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl apply -f affinityprefer.yaml
deployment.apps/nginx2 created
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod -o wide --show-labels | grep nginx2
nginx2-dc58774d-442xm 1/1 Running 0 44s 10.244.3.195 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-4zskb 1/1 Running 0 45s 10.244.3.203 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-67wtg 1/1 Running 0 44s 10.244.3.191 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-6djs2 1/1 Running 0 44s 10.244.3.201 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-72t52 1/1 Running 0 44s 10.244.3.194 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-br5h8 1/1 Running 0 44s 10.244.3.204 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-fmh4x 1/1 Running 0 45s 10.244.3.187 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-l5lhl 1/1 Running 0 45s 10.244.3.193 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-lh468 1/1 Running 0 45s 10.244.3.198 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-pdt8z 1/1 Running 0 45s 10.244.3.202 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-ptfzf 1/1 Running 0 44s 10.244.3.192 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-rzkq8 1/1 Running 0 45s 10.244.3.185 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-sv4c5 1/1 Running 0 44s 10.244.3.189 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-v98bv 1/1 Running 0 44s 10.244.3.197 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-w4ltn 1/1 Running 0 44s 10.244.3.196 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-wc8nv 1/1 Running 0 44s 10.244.3.200 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-wdbdf 1/1 Running 0 45s 10.244.3.190 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-xphfc 1/1 Running 0 44s 10.244.3.188 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-z2kf6 1/1 Running 0 44s 10.244.3.199 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-z7p9s 1/1 Running 0 44s 10.244.3.186 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
All the Pods run on node k8s4.
5.4 Failover test
5.4.1 Check the labels of the Pods running on k8s5
No pod on k8s5 carries the app=nginx3 label:
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod -o wide --show-labels | grep k8s5
ng1 1/1 Running 0 23h 10.244.4.212 k8s5 <none> <none> app=nginx4,nginx=alpine,run=ng1
ng2-5c4c569779-25kcq 1/1 Running 0 47h 10.244.4.171 k8s5 <none> <none> app=ng2,pod-template-hash=5c4c569779
ng2-5c4c569779-nwf6f 1/1 Running 0 47h 10.244.4.172 k8s5 <none> <none> app=ng2,pod-template-hash=5c4c569779
podaffinity-6b4685c8f5-7tcks 1/1 Running 0 22m 10.244.4.221 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-jq4w4 1/1 Running 0 22m 10.244.4.222 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-str5t 1/1 Running 0 22m 10.244.4.219 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-v2cpc 1/1 Running 0 22m 10.244.4.220 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
podaffinity-6b4685c8f5-xr6x4 1/1 Running 0 22m 10.244.4.218 k8s5 <none> <none> app=nginx,pod-template-hash=6b4685c8f5
5.4.2 Shut down node k8s4
root@k8s4:~# shutdown now
5.4.3 Check pod status
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod -o wide --show-labels | grep nginx2
nginx2-dc58774d-2jfzr 1/1 Running 0 2m50s 10.244.4.240 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-442xm 1/1 Terminating 0 12m 10.244.3.195 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-4jdvh 1/1 Running 0 2m50s 10.244.4.228 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-4zskb 1/1 Terminating 0 12m 10.244.3.203 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-5lxmj 1/1 Running 0 2m50s 10.244.4.231 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-67wtg 1/1 Terminating 0 12m 10.244.3.191 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-6djs2 1/1 Terminating 0 12m 10.244.3.201 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-72t52 1/1 Terminating 0 12m 10.244.3.194 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-7s8n9 1/1 Running 0 2m50s 10.244.4.235 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-bdkvj 1/1 Running 0 2m51s 10.244.4.225 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-br5h8 1/1 Terminating 0 12m 10.244.3.204 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-cxbmm 1/1 Running 0 2m49s 10.244.4.242 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-fmh4x 1/1 Terminating 0 12m 10.244.3.187 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-gwk7w 1/1 Running 0 2m49s 10.244.4.237 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-jf89p 1/1 Running 0 2m51s 10.244.4.232 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-jtdm6 1/1 Running 0 2m50s 10.244.4.227 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-kk8pm 1/1 Running 0 2m49s 10.244.4.239 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-l5lhl 1/1 Terminating 0 12m 10.244.3.193 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-lh468 1/1 Terminating 0 12m 10.244.3.198 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-lm7lp 1/1 Running 0 2m50s 10.244.4.229 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-m9wgb 1/1 Running 0 2m51s 10.244.4.226 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-mb7dt 1/1 Running 0 2m51s 10.244.4.223 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-mkm5j 1/1 Running 0 2m50s 10.244.4.224 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-pdt8z 1/1 Terminating 0 12m 10.244.3.202 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-pmf85 1/1 Running 0 2m49s 10.244.4.238 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-ptfzf 1/1 Terminating 0 12m 10.244.3.192 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-qx4zl 1/1 Running 0 2m49s 10.244.4.233 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-rcmjk 1/1 Running 0 2m50s 10.244.4.241 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-rzkq8 1/1 Terminating 0 12m 10.244.3.185 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-sfsd6 1/1 Running 0 2m50s 10.244.4.236 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-sv4c5 1/1 Terminating 0 12m 10.244.3.189 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-th4v6 1/1 Running 0 2m50s 10.244.4.234 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-v98bv 1/1 Terminating 0 12m 10.244.3.197 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-vbx2v 1/1 Running 0 2m50s 10.244.4.230 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-w4ltn 1/1 Terminating 0 12m 10.244.3.196 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-wc8nv 1/1 Terminating 0 12m 10.244.3.200 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-wdbdf 1/1 Terminating 0 12m 10.244.3.190 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-xphfc 1/1 Terminating 0 12m 10.244.3.188 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-z2kf6 1/1 Terminating 0 12m 10.244.3.199 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-z7p9s 1/1 Terminating 0 12m 10.244.3.186 k8s4 <none> <none> app=nginx3,pod-template-hash=dc58774d
After rescheduling, the Pods are Running on k8s5. Because k8s4 is shut down, the original Pods cannot actually be deleted and remain in the Terminating state; only after k8s4 is restarted are they finally removed.
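If the stale Terminating Pods must be cleared while the node is still down, they can be force-deleted; use this with care, since the kubelet on the dead node cannot confirm that the containers have actually stopped:

kubectl delete pod nginx2-dc58774d-442xm --grace-period=0 --force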
5.4.4 Restart node k8s4
Check the pod status:
root@ubuntu:/home/ubuntu/k8syaml/lst# kubectl get pod -o wide --show-labels | grep nginx2
nginx2-dc58774d-2jfzr 1/1 Running 0 7m33s 10.244.4.240 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-4jdvh 1/1 Running 0 7m33s 10.244.4.228 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-5lxmj 1/1 Running 0 7m33s 10.244.4.231 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-7s8n9 1/1 Running 0 7m33s 10.244.4.235 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-bdkvj 1/1 Running 0 7m34s 10.244.4.225 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-cxbmm 1/1 Running 0 7m32s 10.244.4.242 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-gwk7w 1/1 Running 0 7m32s 10.244.4.237 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-jf89p 1/1 Running 0 7m34s 10.244.4.232 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-jtdm6 1/1 Running 0 7m33s 10.244.4.227 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-kk8pm 1/1 Running 0 7m32s 10.244.4.239 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-lm7lp 1/1 Running 0 7m33s 10.244.4.229 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-m9wgb 1/1 Running 0 7m34s 10.244.4.226 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-mb7dt 1/1 Running 0 7m34s 10.244.4.223 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-mkm5j 1/1 Running 0 7m33s 10.244.4.224 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-pmf85 1/1 Running 0 7m32s 10.244.4.238 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-qx4zl 1/1 Running 0 7m32s 10.244.4.233 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-rcmjk 1/1 Running 0 7m33s 10.244.4.241 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-sfsd6 1/1 Running 0 7m33s 10.244.4.236 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-th4v6 1/1 Running 0 7m33s 10.244.4.234 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d
nginx2-dc58774d-vbx2v 1/1 Running 0 7m33s 10.244.4.230 k8s5 <none> <none> app=nginx3,pod-template-hash=dc58774d