In OpenShift testing (and production), it is useful to have pods land on specific sets of nodes for monitoring or isolation purposes. There are discussions on how to do this in a few different locations such as:
https://blog.openshift.com/deploying-applications-to-specific-nodes/
https://docs.openshift.com/enterprise/3.2/admin_guide/managing_projects.html#using-node-selectors
This post describes a simple, practical example where:
- The OpenShift router and docker-registry pods will go to a set of infrastructure nodes
- The OpenShift metrics pods will go to a second set of metrics nodes
- Applications will run on the remaining worker nodes
Some of this can be accomplished during the OpenShift V3 install by using the openshift_router_selector and openshift_registry_selector Ansible inventory parameters, but here I will assume nothing was done during the install and that the cluster is up and running with the standard default (router and registry pods) and openshift-infra (metrics pods) projects.
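For completeness, if you did want to handle this at install time, those selectors are set as Ansible inventory variables. A minimal sketch, assuming the variable names mentioned above and the usual [OSEv3:vars] section of an openshift-ansible inventory:
[OSEv3:vars]
# Place the router and docker-registry pods on nodes labeled region=infra
openshift_router_selector='region=infra'
openshift_registry_selector='region=infra'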
Step 1: Label your nodes
First we need to label the sets of nodes that will run the desired types of OpenShift pods. The --overwrite parameter below handles the case where a node already has a region label.
For each infrastructure node where the docker-registry or router pods will run: oc label node <nodename> "region=infra" --overwrite
For each metrics node where the metrics pods will run: oc label node <nodename> "region=metrics"
For each application node: oc label node <nodename> "region=primary"
This labelling can also be done in the Ansible inventory at installation time. Example:
[nodes]
ip-10-0-0-[1:3].us-west-2.compute.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
ip-10-0-0-[4:6].us-west-2.compute.internal openshift_node_labels="{'region': 'metrics', 'zone': 'default'}"
ip-10-0-0-21[0:9].us-west-2.compute.internal openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
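Whichever way you label the nodes, it is worth verifying the labels before moving on. For example, using standard oc commands (adjust the region values to match your own labels):
# show every node with all of its labels
oc get nodes --show-labels
# list only the nodes in a given region
oc get nodes -l region=infra
oc get nodes -l region=metrics
oc get nodes -l region=primary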
Step 2: Set a cluster default selector for normal application pods
To keep everyday application pods off of the infra and metrics nodes, set a default node selector for the cluster:
Edit /etc/origin/master/master-config.yaml, find the projectConfig section, and set the value: defaultNodeSelector: "region=primary"
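For reference, after the edit the relevant fragment of master-config.yaml should look roughly like this (other projectConfig fields omitted):
projectConfig:
  defaultNodeSelector: "region=primary"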
Then restart the master: systemctl restart atomic-openshift-master (or restart atomic-openshift-master-api and atomic-openshift-master-controllers on HA clusters)
Step 3: Set node selectors for the special projects
We want pods in the default project to land on the region=infra nodes and pods in the openshift-infra project to land on the region=metrics nodes. To do this, set selectors for these projects which override the cluster-wide default selector:
oc edit namespace default
Add the following annotation with the other annotations and save:
openshift.io/node-selector: region=infra
oc edit namespace openshift-infra
Add the following annotation with the other annotations and save:
openshift.io/node-selector: region=metrics
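If you would rather not edit the namespaces interactively, the same annotations can be applied with oc annotate; this sketch is intended to be equivalent to the edits above, with --overwrite replacing any existing selector annotation:
oc annotate namespace default openshift.io/node-selector='region=infra' --overwrite
oc annotate namespace openshift-infra openshift.io/node-selector='region=metrics' --overwrite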
That should do it. You can get a lot fancier with things like selectors at the deployment configuration level, using combinations of selectors, or the affinity/anti-affinity features of OpenShift. But for many use cases where you need to know the basic set of nodes where a pod will land, this method works just fine.
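As one example of the deployment-level approach, a nodeSelector can be set directly on a deployment configuration's pod template. This is only a sketch, with <dc-name> standing in for a real deployment configuration in your project:
# pin this deployment configuration's pods to the region=primary nodes
oc patch dc/<dc-name> -p '{"spec":{"template":{"spec":{"nodeSelector":{"region":"primary"}}}}}'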