Infrastructure teams managing Red Hat OpenShift often ask me how to effectively onboard applications in production. OpenShift embeds many capabilities in a single product, and it is easy to imagine an OpenShift administrator struggling to figure out what conversations their team must have with an application team before successfully running an application on OpenShift.
In this article, I suggest a few topics that administrators can use to actively engage with application teams during the onboarding process. I have had several conversations with customers on these topics and observed that this approach has really helped them. By no means are these topics exhaustive, but they are sufficient to kick-start the necessary and relevant conversations. Over time, I expect administrators to broaden these onboarding conversations with application teams.
1. Application Resource Requirements
Administrators need to make sure that OpenShift clusters are sized correctly in order to host applications. Therefore, application resource requirements are a key topic for discussion. There are a few important controls that an administrator must consider when a new application is onboarded.
Administrators should clearly define quotas for the team in order to control the resources their application can consume; this is a deployment-time requirement. Secondly, administrators must make sure that application teams have defined minimum resource requirements for their application pods. This specification helps the OpenShift scheduler find the right nodes to run application pods. This is a schedule-time requirement, governed by the Requests functionality in OpenShift.
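As a sketch, a quota of this shape can be applied to the team's project; the project name and values below are illustrative, not prescriptive:

```yaml
# Hypothetical quota for an application team's project.
# Names and numbers are examples the administrator and team agree on.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"              # cap on total pods in the project
    requests.cpu: "4"       # sum of all pod CPU requests
    requests.memory: 8Gi    # sum of all pod memory requests
    limits.cpu: "8"         # sum of all pod CPU limits
    limits.memory: 16Gi     # sum of all pod memory limits
```

With a quota in place, any pod created in the project must declare requests and limits, which conveniently enforces the next two conversation points.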
Lastly, administrators must make sure that application pods do not consume more than their allocated resources; otherwise, they could cause performance issues in other application pods. Administrators should make sure that application teams have defined Limits for their application pods. This is a runtime requirement.
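The schedule-time and runtime requirements come together in the container spec. A minimal sketch (the app name, image, and values are illustrative):

```yaml
# Illustrative deployment snippet: Requests guide scheduling,
# Limits cap what the container may consume at runtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: sample-app
        image: sample-app:1.0
        resources:
          requests:
            cpu: 250m       # minimum the scheduler reserves on a node
            memory: 256Mi
          limits:
            cpu: 500m       # hard cap enforced at runtime
            memory: 512Mi
```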
Administrators should also have a conversation about storage requirements (storage type, size, and so on) for the application. Administrators can then create appropriate persistent volumes, which deployment configuration files refer to through persistent volume claims.
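The application team's side of that conversation ends up as a claim like the following; the claim name, storage class, and size are examples of what the two teams would agree on:

```yaml
# Hypothetical claim referenced from the application's deployment.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-app-data
spec:
  accessModes:
  - ReadWriteOnce          # single-node read/write access
  resources:
    requests:
      storage: 10Gi        # size agreed with the administrator
```

The administrator's job is to make sure a persistent volume (or dynamic provisioner) exists that can satisfy this claim.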
The table below highlights questions that an administrator should ask an application team, the administrator's action items, and links to documentation and blogs:
| Questions to Ask | OpenShift Administrator's Action Items | Helpful Links |
| --- | --- | --- |
| How many instances/pods to spin up for the application? | | Deployment Configuration |
| How many millicores/how much memory/PV needed per instance? | | Limits |
| What is the size of each PV? | Create necessary PVs; verify PV names in deployment resource files | Using Persistent Volumes |
2. Container Image Creation and Tagging Process
OpenShift automates the process of creating a container image from source code or compiled binaries via its Source-to-Image (s2i) functionality. Not all organizations use this functionality, and a few might have their own process (outside of OpenShift) to create and manage container images.
In cases where an organization is indeed using s2i, an administrator needs to provide one or more base images per application runtime. Red Hat recently released the Red Hat Universal Base Image (UBI) for RHEL 7 and 8. With the release of UBI, administrators can take advantage of the greater reliability, security, and performance of official Red Hat container images wherever OCI-compliant Linux containers run. An administrator builds a runtime-specific image (for Java, Node.js, and so on) on top of UBI and exposes it to the s2i process, which uses that base image to create the final container image containing the application code. Administrators periodically upload new versions of these base images to the container registry.
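An s2i build wired to an administrator-provided base image looks roughly like this; the repository URL, builder image name, and tags are placeholders for illustration:

```yaml
# Sketch of an s2i BuildConfig that builds application source on an
# administrator-provided, UBI-based runtime image. All names are examples.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-app
spec:
  source:
    git:
      uri: https://example.com/org/sample-app.git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: openjdk-11-ubi8:1.3   # UBI-based builder published by the administrator
  output:
    to:
      kind: ImageStreamTag
      name: sample-app:latest       # final image containing the app code
```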
Administrators must create an image tagging strategy and make sure application teams are aware of it. Through proper image tagging, administrators can streamline the image upgrade process. Otherwise, the build process (usually defined by the application team) may pick up a new version of the base image when it wasn't intended to, or an old version when it was supposed to pick up the new one.
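One common shape for such a strategy (the stream and registry names are illustrative) is to publish each minor release under an immutable tag and keep a floating major-version tag pointing at the newest minor, so builds that reference the major tag pick up minor updates predictably:

```yaml
# Illustrative tagging scheme: "1.3" is an immutable minor release;
# "1" is a floating tag the administrator retargets to the newest 1.x.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: openjdk-11-ubi8
spec:
  tags:
  - name: "1.3"                     # specific minor release, never moved
    from:
      kind: DockerImage
      name: registry.example.com/openjdk-11-ubi8:1.3
  - name: "1"                       # tracks the latest minor of major version 1
    from:
      kind: ImageStreamTag
      name: "1.3"
```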
Discussing the following topics will align the administrator and the application team on the image build and tagging strategy:
| Questions to Ask | OpenShift Administrator's Action Items | Helpful Links |
| --- | --- | --- |
| Is source-to-image (s2i) used to build the container image? | | Source to Image Build Process |
| What is the image tagging strategy? | I recommend that an image have a major and minor version represented in the tag. The app team should make sure they refer to the latest minor version of a specific major version. | ImageStreams |
3. Application Health Checks
OpenShift exposes ways to monitor an application's health and then take the desired action. Readiness and liveness probes are well-documented application health checks. Administrators should make sure that developers have defined relevant health checks for their applications. Furthermore, administrators should try to understand what each health check actually means; this is immensely helpful when digging into issues where OpenShift has to restart pods in production.
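A container-spec fragment with both probes might look like this; the endpoint paths, port, and timings are examples the developers would supply:

```yaml
# Example probes: readiness gates traffic to the pod,
# liveness restarts the container when the app is stuck.
containers:
- name: sample-app
  image: sample-app:1.0
  readinessProbe:
    httpGet:
      path: /health/ready   # hypothetical endpoint exposed by the app
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
  livenessProbe:
    httpGet:
      path: /health/live    # hypothetical endpoint exposed by the app
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
```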
Another conversation to have with developers is whether their code handles graceful shutdown. Administrators may set up automated scale-out upon increased load, which serves well during scale-up. When scale-down happens, OpenShift needs to delete pods, and it sends a TERM signal to a pod before deleting it. It is critical that application pods capture this signal and that an appropriate time limit is set so active transactions within a pod can complete before OpenShift terminates it. Red Hat JBoss EAP middleware on OpenShift handles this out of the box; however, if developers are using any other middleware or application framework, they must handle the signals OpenShift sends for graceful shutdown.
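The time limit side of that agreement is expressed in the pod spec; the 60-second value below is an example to be tuned to the application's longest transaction:

```yaml
# Illustrative pod spec fragment: after the TERM signal, in-flight
# work gets up to 60 seconds to finish before a KILL is sent.
spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: sample-app
    image: sample-app:1.0
```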
The table below highlights the discussion points and the administrator's action items, with links to useful documentation and blogs:
| Questions to Ask | OpenShift Administrator's Action Items | Helpful Links |
| --- | --- | --- |
| Have developers defined liveness and readiness probes? | Check for these probes in deployment resource files; if not defined, ask for the reason | Health Check |
| Have developers coded for graceful shutdown? | If not, ask them to add it (otherwise scaling down a pod could terminate in-flight transactions) | Graceful Termination |
4. Application Communication
Not every application deployed in OpenShift needs to be exposed to the outside world. It is good to determine early on which applications will be exposed so that administrators can guide application teams to the right hostname for each application. Having this conversation prevents the problem of an application route's hostname already being taken by another team (the OpenShift cluster may be shared by many teams, and it becomes a challenge to keep track of the active, unique route hostnames in the cluster).
Secondly, SSL offloading for an application can happen at multiple places. OpenShift supports three ways of terminating SSL:
- Edge termination: Incoming SSL traffic is decrypted at OpenShift's router component
- Passthrough termination: Incoming SSL traffic is not decrypted at the router; it is decrypted by the application pods
- Re-encryption termination: Incoming SSL traffic is decrypted at the router, which then re-encrypts the traffic with another certificate provided by the application team
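As a sketch, an edge-terminated route looks like the following; the hostname and service name are placeholders the administrator and application team agree on:

```yaml
# Hypothetical edge-terminated route: TLS is decrypted at the router.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: sample-app
spec:
  host: sample-app.apps.example.com   # must be unique across the cluster
  to:
    kind: Service
    name: sample-app
  tls:
    termination: edge                 # alternatives: passthrough, reencrypt
```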
Because developers could choose any of these three strategies, administrators will need different settings, and different certificates will have to be shared, for each. This necessitates a conversation between developers and administrators.
Lastly, there may be egress and ingress requirements for security purposes. For example, an application database hosted outside of OpenShift may be firewalled to a set of IP addresses. In such cases, a dialogue between the administrator and developers about egress is needed.
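With the OpenShift SDN, that conversation can result in an egress firewall for the project; the CIDR below is an example address for the external database:

```yaml
# Hypothetical egress rule: allow pods in this project to reach an
# external database by IP, deny all other egress. Addresses are examples.
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: allow-db-only
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 192.0.2.10/32   # example: the external database
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0       # everything else is blocked
```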
The table below highlights questions that an administrator should ask an application team, the administrator's action items, and links to documentation and blogs:
| Questions to Ask | OpenShift Administrator's Action Items | Helpful Links |
| --- | --- | --- |
| Is a route required to expose this microservice outside of OCP? | | |
| • What would be the route hostname? | Check the route definition in deployment resource files and make sure it is unique | |
| • Where is SSL terminated? | Discuss which termination point the developer team wants and why, and work from there | Routes Termination |
| Is egress needed for the application (i.e., does it connect to an external service, such as a database, protected by an IP-level firewall)? | Discuss with the developer team and understand the need for egress | Controlling Egress Traffic |
| Is there a specific requirement on ingress? | Discuss with the developer team and understand the need for ingress | Controlling Ingress Traffic |
5. Application Access Controls
An OpenShift cluster provides multi-tenancy, allowing numerous applications owned by multiple teams to be hosted on the same infrastructure. In such situations, it is critical to maintain application-to-application access controls. OpenShift's multi-tenant and network policy plugins give administrators the ability to control application-to-application traffic. Administrators should therefore discuss each application's architecture with the developer team and understand which application-to-application communication (within OpenShift) is allowed. Based on that information, the administrator should update the necessary policy files in OpenShift.
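Those allowed paths translate into network policies. A minimal sketch, assuming hypothetical `frontend` and `backend` labels:

```yaml
# Illustrative policy: only pods labeled app=frontend in the same
# namespace may reach pods labeled app=backend; other traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend          # the policy protects these pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # the only permitted caller
```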
The table below highlights questions that an administrator should ask an application team, the administrator's action items, and links to documentation and blogs:
| Questions to Ask | OpenShift Administrator's Action Items | Helpful Links |
| --- | --- | --- |
| What are the names of existing microservices that this microservice accesses? | Define the relevant network policy YAML file | Network Policy |
| What are the names of existing microservices that will call this microservice? | Define or update the relevant network policy YAML file | |
Over time, administrators gain experience managing OpenShift and can then drive a variety of conversations with application teams (access controls, service mesh, service catalog, credentials management, router sharding, dedicated worker nodes for specific apps, the app upgrade process, and so on). The list goes on; however, the topics discussed here are essential and necessary. You can also download the consolidated topics (XL format) and manage and enhance them while working with application teams.
About the author
Red Hatter since 2018, technology historian and founder of The Museum of Art and Digital Entertainment. Two decades of journalism mixed with technology expertise, storytelling and oodles of computing experience from inception to ewaste recycling. I have taught or had my work used in classes at USF, SFSU, AAU, UC Law Hastings and Harvard Law.
I have worked with the EFF, Stanford, MIT, and Archive.org to brief the US Copyright Office and change US copyright law. We won multiple exemptions to the DMCA, accepted and implemented by the Librarian of Congress. My writings have appeared in Wired, Bloomberg, Make Magazine, SD Times, The Austin American Statesman, The Atlanta Journal Constitution and many other outlets.
I have been written about by the Wall Street Journal, The Washington Post, Wired and The Atlantic. I have been called "The Gertrude Stein of Video Games," an honor I accept, as I live less than a mile from her childhood home in Oakland, CA. I was project lead on the first successful institutional preservation and rebooting of the first massively multiplayer game, Habitat, for the C64, from 1986: https://neohabitat.org . I've consulted and collaborated with the NY MOMA, the Oakland Museum of California, Cisco, Semtech, Twilio, Game Developers Conference, NGNX, the Anti-Defamation League, the Library of Congress and the Oakland Public Library System on projects, contracts, and exhibitions.