This post continues the “OpenShift on OpenStack: Availability Zones” series. In the first part we introduced OpenStack AZs and presented the different OpenShift deployment options with regard to AZs. In this post we explain the “best case scenario”: OpenShift on OpenStack with multiple Nova AZs and multiple Cinder AZs, where the AZ names match.
PART II - Scenario One (Recommended): Multi Nova and Cinder AZs
The following scenario consists of:
- 3 Nova AZs (AZ1, AZ2, AZ3)
- 3 Cinder AZs (AZ1, AZ2, AZ3)
For demonstration we will use the asb-etcd pod, since it is created at installation time. It is a pod that requires a volume to store data and therefore creates a PVC. The purpose of this scenario is to show that the asb-etcd pod is scheduled into the Nova AZ whose name matches the Cinder AZ used to provision its volume. If there are no nodes available within that AZ to mount the volume, the pod cannot start.
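If you want to check this yourself, the Cinder AZ of the volume can be read from the zone label that the Cinder provisioner sets on the bound PersistentVolume. A minimal check, assuming the PVC lives in the openshift-ansible-service-broker project (adjust the project and PV name to your environment):

oc get pvc -n openshift-ansible-service-broker
oc get pv <pv-name> --show-labels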
We begin by investigating the StorageClass created at installation time. It includes information about the provisioner (Cinder) and some other parameters. It looks like this:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  ...
parameters:
  fstype: xfs
provisioner: kubernetes.io/cinder
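If you want to see the full object in your own cluster, list the StorageClasses first and then dump the one created by the installer (the name varies per environment):

oc get storageclass
oc get storageclass <name> -o yaml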
After a successful installation, the pod has been created in AZ2 and is running on one of the AZ2 nodes (cicd-node-1):
asb-etcd-1-t8chg 1/1 Running 0 8m 10.130.2.3 cicd-node-1.cicd.com
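To confirm which AZ that node belongs to, the node's zone label can be shown as an extra column (this is the standard Kubernetes zone label key in this release):

oc get node cicd-node-1.cicd.com -L failure-domain.beta.kubernetes.io/zone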
If I delete the pod, it is rescheduled to that same node (notice the different pod name):
asb-etcd-1-7ljpf 1/1 Running 0 19s 10.130.2.4 cicd-node-1.cicd.com
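The deletion is an ordinary pod deletion; the deployment's replication controller recreates the pod right away. Assuming the openshift-ansible-service-broker project:

oc delete pod asb-etcd-1-t8chg -n openshift-ansible-service-broker
oc get pods -n openshift-ansible-service-broker -o wide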
If I drain that node (oc adm drain cicd-node-1.cicd.com, which cordons it and evicts its pods), the pod is eventually rescheduled to another available node in AZ2 (in this case, cicd-infra-1.cicd.com):
asb-etcd-1-lp457 1/1 Running 0 2m 10.128.4.6 cicd-infra-1.cicd.com
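Depending on what else is running on the node, drain may need additional flags. A typical invocation for this kind of test (use these flags only if you accept their side effects):

oc adm drain cicd-node-1.cicd.com --ignore-daemonsets --delete-local-data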
If I drain all the remaining nodes in AZ2 (leaving no schedulable nodes in AZ2), the pod cannot be scheduled:
4s 5s 2 asb-etcd-1-s4cqc Pod Warning FailedScheduling default-scheduler 0/4 nodes are available: 4 NoVolumeZoneConflict.
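For reference, one way to drain every remaining node in AZ2 in one go is to select the nodes by their zone label. A sketch, assuming the standard zone label and that you really intend to drain all of them:

for node in $(oc get nodes -l failure-domain.beta.kubernetes.io/zone=AZ2 -o jsonpath='{.items[*].metadata.name}'); do
  oc adm drain "$node" --ignore-daemonsets --delete-local-data
done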
With no nodes available within AZ2, this leaves the pod in a Pending state, as expected, because the asb-etcd pod is only allowed to be scheduled within AZ2.
asb-etcd-1-s4cqc 0/1 Pending 0 41s
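The scheduling failure and its reason can be read from the pod's events:

oc describe pod asb-etcd-1-s4cqc -n openshift-ansible-service-broker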
Scenario One Conclusion
Pods are scheduled to nodes whose Nova AZ name matches the Cinder AZ name of the PV they use. If there is no available node in that AZ to mount the volume, the pod won't be able to start.
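This matching is done by the scheduler's NoVolumeZoneConflict predicate, which compares the zone label on the PV with the zone label on each node. You can compare the two sides yourself with:

oc get pv --show-labels
oc get nodes -L failure-domain.beta.kubernetes.io/zone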
But wait… What about when you have multiple Nova availability zones and just one Cinder availability zone? How do you handle that scenario? The next blog in the series, "multiple Nova AZs with a single Cinder AZ", shifts the focus to answer these questions. It will explain why scenario one (multiple Nova AZs and multiple Cinder AZs) is recommended, and also how to handle environments where multiple Cinder AZs are not available.