In OpenShift testing (and production), it is useful to have pods land on specific sets of nodes for monitoring or isolation purposes. There are discussions on how to do this in a few different locations such as:
https://blog.openshift.com/deploying-applications-to-specific-nodes/
https://docs.openshift.com/enterprise/3.2/admin_guide/managing_projects.html#using-node-selectors
This post will describe a simple practical example where
- The OpenShift router and docker-registry pods will go to a set of infrastructure nodes
- The OpenShift metrics pods will go to a second set of metrics nodes
- Applications will run on the remaining worker nodes
Some of this can be accomplished during the OpenShift V3 install by using the openshift_router_selector and openshift_registry_selector Ansible inventory parameters, but I will assume nothing was done during the install and that the cluster is up and running with the standard default (router and registry pods) and openshift-infra (metrics pods) projects.
Step 1: Label your nodes
First we need to label the sets of nodes that will run the desired types of OpenShift pods. The --overwrite parameter below handles the case where a node already has a region label.
For each infrastructure node where the docker-registry or router pods will run: oc label node <nodename> "region=infra" --overwrite
For each metrics node where the metrics pods will run: oc label node <nodename> "region=metrics" --overwrite
For each application node: oc label node <nodename> "region=primary" --overwrite
This labelling can also be done in the Ansible inventory at installation time. Example:
[nodes]
ip-10-0-0-[1:3].us-west-2.compute.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
ip-10-0-0-[4:6].us-west-2.compute.internal openshift_node_labels="{'region': 'metrics', 'zone': 'default'}"
ip-10-0-0-21[0:9].us-west-2.compute.internal openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
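Once the labels are applied, it is worth confirming the node groupings before moving on. A quick sketch (node names in your output will differ):

```shell
# List the nodes in each region by label selector:
oc get nodes -l region=infra
oc get nodes -l region=metrics
oc get nodes -l region=primary

# Or show every node with all of its labels at once:
oc get nodes --show-labels
```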
Step 2: Set a cluster default selector for normal application pods
To keep everyday applications off of the infra and metrics pods, set a default selector for the cluster:
Edit /etc/origin/master/master-config.yaml and find projectConfig. Set the value: defaultNodeSelector: "region=primary"
Then restart the master service: systemctl restart atomic-openshift-master (for HA clusters, restart atomic-openshift-master-api and atomic-openshift-master-controllers instead)
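For reference, the relevant portion of master-config.yaml should look roughly like this after the edit (other projectConfig fields omitted for brevity):

```yaml
# /etc/origin/master/master-config.yaml (excerpt)
projectConfig:
  defaultNodeSelector: "region=primary"
```

Any project created without its own node selector will now inherit region=primary, so ordinary application pods stay off the infra and metrics nodes.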
Step 3: Set node selectors for the special projects
We want pods in the default project to land on the region=infra nodes and pods in the openshift-infra project to land on the region=metrics nodes. To do this, set selectors for these projects which override the cluster-wide default selector:
oc edit namespace default
Add the following annotation with the other annotations and save:
openshift.io/node-selector: region=infra
oc edit namespace openshift-infra
Add the following annotation with the other annotations and save:
openshift.io/node-selector: region=metrics
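Project node selectors only affect pods scheduled after the change, so the existing router and registry pods need to be redeployed to pick up the new placement. A rough verification sketch for an OpenShift V3 cluster (output will vary):

```shell
# Confirm the annotations are in place on both projects:
oc get namespace default -o yaml | grep node-selector
oc get namespace openshift-infra -o yaml | grep node-selector

# Trigger new deployments so the pods reschedule onto region=infra nodes:
oc deploy router --latest -n default
oc deploy docker-registry --latest -n default

# Check which nodes the pods landed on:
oc get pods -o wide -n default
```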
That should do it. You can get a lot fancier with node selectors at the deployment configuration level, combinations of selectors, or the affinity/anti-affinity features of OpenShift. But for many use cases where you simply need to know the basic set of nodes where a pod will land, this method works just fine.