Overview
This post summarizes how to place Pods on specific worker nodes in a Kubernetes environment.
Architecture
The goal is to place the frontend application on worker nodes 1 and 2, and the backend application on worker nodes 3 and 4.
Adding Labels to Nodes
1. Check the labels on the nodes in the cluster.

```bash
kubectl get nodes --show-labels
```

Result

```
NAME STATUS ROLES AGE VERSION LABELS
ip-10-100-39-3.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,kubernetes.io/host=worker1
ip-10-100-42-191.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,kubernetes.io/host=worker2
ip-10-100-46-137.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,kubernetes.io/host=worker3
ip-10-100-49-146.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,kubernetes.io/host=worker4
ip-10-100-50-238.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,kubernetes.io/host=worker5
```
2. Add labels to specific nodes: assign two worker nodes each to `tier=front` and `tier=back`.

```bash
kubectl label nodes ip-10-100-39-3.ap-northeast-3.compute.internal tier=front
kubectl label nodes ip-10-100-42-191.ap-northeast-3.compute.internal tier=front
kubectl label nodes ip-10-100-46-137.ap-northeast-3.compute.internal tier=back
kubectl label nodes ip-10-100-49-146.ap-northeast-3.compute.internal tier=back
```
3. Verify that the labels were applied to the worker nodes (a few additional label-management commands are sketched after this step).

```bash
kubectl get nodes --show-labels
```

Result

```
NAME STATUS ROLES AGE VERSION LABELS
ip-10-100-39-3.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,tier=front
ip-10-100-42-191.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,tier=front
ip-10-100-46-137.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,tier=back
ip-10-100-49-146.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,tier=back
ip-10-100-50-238.ap-northeast-3.compute.internal Ready <none> 52m v1.28.5-eks-5e0fdde ...,kubernetes.io/host=worker5
```
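If you only want to list the nodes in a given tier, or need to change or remove a label later, the following commands may help. This is a minimal sketch using the node names from this cluster; `-L`, `-l`, and `--overwrite` are standard kubectl options.

```bash
# Show the tier label as an extra column for all nodes
kubectl get nodes -L tier

# List only the nodes carrying a given label
kubectl get nodes -l tier=front

# Change an existing label value
kubectl label nodes ip-10-100-39-3.ap-northeast-3.compute.internal tier=back --overwrite

# Remove the label again (the trailing dash deletes the key)
kubectl label nodes ip-10-100-39-3.ap-northeast-3.compute.internal tier-
```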
Node Affinity
Required Affinity
Required Affinity is used when a Pod must be scheduled only onto nodes that carry a specific label. The YAML below deploys two StatefulSets: through the `requiredDuringSchedulingIgnoredDuringExecution` rule, the nginx-front StatefulSet is deployed only to worker nodes labeled `tier=front`, and the nginx-back StatefulSet only to worker nodes labeled `tier=back`.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-front
spec:
  replicas: 2
  serviceName: nginx-front
  selector:
    matchLabels:
      app: nginx-front
  template:
    metadata:
      labels:
        app: nginx-front
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: tier
                operator: In
                values:
                - front
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-back
spec:
  replicas: 2
  serviceName: nginx-back
  selector:
    matchLabels:
      app: nginx-back
  template:
    metadata:
      labels:
        app: nginx-back
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: tier
                operator: In
                values:
                - back
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
```
When the YAML above is deployed, the front StatefulSet lands on the worker nodes carrying the `tier=front` label and the back StatefulSet on the worker nodes carrying the `tier=back` label, as shown below.
Deploy

```bash
kubectl apply -f Nodeaffinity.yaml
```

Verify

```bash
kubectl get pod -o wide
```

Result

```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-back-0 1/1 Running 0 19s 10.100.32.197 ip-10-100-46-137.ap-northeast-3.compute.internal <none> <none>
nginx-back-1 1/1 Running 0 18s 10.100.61.48 ip-10-100-49-146.ap-northeast-3.compute.internal <none> <none>
nginx-front-0 1/1 Running 0 19s 10.100.33.132 ip-10-100-39-3.ap-northeast-3.compute.internal <none> <none>
nginx-front-1 1/1 Running 0 18s 10.100.32.87 ip-10-100-42-191.ap-northeast-3.compute.internal <none> <none>
```
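The same placement can be cross-checked against the node labels by filtering Pods and nodes with label selectors (assuming the `app` labels from the manifest above):

```bash
# Pods of each StatefulSet, together with the node they were scheduled to
kubectl get pod -l app=nginx-front -o wide
kubectl get pod -l app=nginx-back -o wide

# The nodes that carry the matching tier label
kubectl get nodes -l tier=front
kubectl get nodes -l tier=back
```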
Preferred Affinity
Preferred Affinity means that the scheduler prefers, but does not require, nodes with a specific label when placing a Pod. Each preference term carries a `weight` (from 1 to 100) that is added to a node's scheduling score when the term matches.
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-front
spec:
  replicas: 2
  serviceName: nginx-front
  selector:
    matchLabels:
      app: nginx-front
  template:
    metadata:
      labels:
        app: nginx-front
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: tier
                operator: In
                values:
                - front
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-back
spec:
  replicas: 2
  serviceName: nginx-back
  selector:
    matchLabels:
      app: nginx-back
  template:
    metadata:
      labels:
        app: nginx-back
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: tier
                operator: In
                values:
                - back
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
```
When this YAML is deployed, the front StatefulSet is again placed on the worker nodes carrying the `tier=front` label and the back StatefulSet on the worker nodes carrying the `tier=back` label, as shown below.
Deploy

```bash
kubectl apply -f Nodeaffinity_preferrd.yaml
```

Verify

```bash
kubectl get pod -o wide
```

Result

```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-back-0 1/1 Running 0 30s 10.100.45.24 ip-10-100-46-137.ap-northeast-3.compute.internal <none> <none>
nginx-back-1 1/1 Running 0 29s 10.100.52.149 ip-10-100-49-146.ap-northeast-3.compute.internal <none> <none>
nginx-front-0 1/1 Running 0 30s 10.100.34.117 ip-10-100-39-3.ap-northeast-3.compute.internal <none> <none>
nginx-front-1 1/1 Running 0 30s 10.100.39.16 ip-10-100-42-191.ap-northeast-3.compute.internal <none> <none>
```
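The `weight` field is what makes a preferred rule soft: every matching preference term adds its weight to the node's score, so you can express a primary and a weaker fallback preference. Below is a minimal sketch under that assumption; the second term and its zone value are hypothetical and not part of the cluster above.

```yaml
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          # Primary preference: nodes labeled tier=front
          - weight: 100
            preference:
              matchExpressions:
              - key: tier
                operator: In
                values:
                - front
          # Weaker, hypothetical fallback preference on the zone label
          - weight: 10
            preference:
              matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - ap-northeast-3a
```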
Required Affinity vs. Preferred Affinity
The key difference between Required and Preferred is hard versus soft: Required is mandatory, Preferred is a flexible preference. When the labeled worker nodes can no longer accept additional Pods, Required never places a Pod on a worker node without the matching label, whereas Preferred will still place Pods on worker nodes that lack the label.
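The two rules can also be combined in a single nodeAffinity block, with the required term acting as a hard filter and the preferred term ranking the nodes that pass it. Here is a minimal sketch of that combination; the `disktype=ssd` label is hypothetical and not used elsewhere in this post.

```yaml
      affinity:
        nodeAffinity:
          # Hard filter: only nodes labeled tier=front are eligible at all
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: tier
                operator: In
                values:
                - front
          # Soft preference: among those, prefer nodes with a hypothetical disktype=ssd label
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 50
            preference:
              matchExpressions:
              - key: disktype
                operator: In
                values:
                - ssd
```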
Required
Below is the result of increasing the replicas with the Required rule. Once the two labeled worker nodes can no longer accept additional Pods, the remaining Pods stay stuck in the Pending state.

```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-back-0 1/1 Running 0 7m54s 10.100.37.243 ip-10-100-46-137.ap-northeast-3.compute.internal <none> <none>
...
nginx-back-29 0/1 Pending 0 6m44s <none> <none> <none> <none>
nginx-front-0 1/1 Running 0 7m54s 10.100.33.132 ip-10-100-39-3.ap-northeast-3.compute.internal <none> <none>
...
nginx-front-29 0/1 Pending 0 7m10s <none> <none> <none> <none>
```
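To see why a Pod is stuck in Pending, checking its events is usually enough: the scheduler records a FailedScheduling event when no node satisfies the required node affinity (the exact message wording varies by Kubernetes version; the Pod name below is just one of the Pending Pods above).

```bash
# Look at the Events section at the bottom of the output
kubectl describe pod nginx-back-29

# Or query the events for that Pod directly
kubectl get events --field-selector involvedObject.name=nginx-back-29
```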
Preferred
In contrast, with the Preferred rule, when the replicas are increased and the two labeled worker nodes can no longer accept additional Pods, the Pods are still scheduled normally onto worker nodes that do not have the label.

```
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-back-0 1/1 Running 0 89s 10.100.37.243 ip-10-100-46-137.ap-northeast-3.compute.internal <none> <none>
...
nginx-back-34 1/1 Running 0 35s 10.100.54.249 ip-10-100-50-238.ap-northeast-3.compute.internal <none> <none>
nginx-front-0 1/1 Running 0 89s 10.100.33.132 ip-10-100-39-3.ap-northeast-3.compute.internal <none> <none>
...
nginx-front-34 1/1 Running 0 17s 10.100.55.198 ip-10-100-50-238.ap-northeast-3.compute.internal <none> <none>
```
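To confirm that the spill-over Pods really landed on the unlabeled node (worker5 in this cluster), the Pods can be filtered by node name:

```bash
# Pods scheduled onto the node that has no tier label
kubectl get pod -o wide --field-selector spec.nodeName=ip-10-100-50-238.ap-northeast-3.compute.internal
```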