You've probably heard about the growth of edge computing, but what is the edge? And what does it mean, especially for OpenShift admins? By moving workloads to the edge of the network, devices depend less on round trips to the cloud, react faster to local changes, and operate more reliably. But along with the opportunities that edge computing brings, there are complexities to consider as you build out an infrastructure to support these use cases.

So in this episode, we talk about what it's like to work with OpenShift at the edge. That means things like running nodes at remote sites, single-node OpenShift instances, and compact clusters. We'll also take a look at management tools and philosophies. Red Hat Principal Technical Marketing Manager Mark Schmitt joins the stream to help explore how the edge is a very different place from the data center and gives us a unique perspective on OpenShift at the edge.

During this stream we used some slides to help illustrate architectures and capabilities. You can find those slides on our SpeakerDeck, here.

As always, please see the list below for additional links to specific topics, questions, and supporting materials for the episode!

If you’re interested in more streaming content, please subscribe to the Red Hat livestreaming calendar to see the upcoming episode topics and to receive any schedule changes. If you have questions or topic suggestions for the Ask an OpenShift Admin Office Hour, please contact us via Discord, Twitter, or come join us live, Wednesdays at 11am EDT / 1500 UTC, on YouTube and Twitch.

Episode 42 recorded stream:



Use this link to jump directly to where we start talking about today’s topic. 

This week’s top of mind topics:

  • Our first topic today is about APIs being deprecated. OpenShift 4.8, which uses Kubernetes 1.21, introduced the ability to see which APIs are being used, including whether they will be removed in a future release. If you have Operators, whether created by you or by third parties, be sure to update them to use the new API endpoints!
  • Did you know that OpenShift consists of a large number of open source projects? Last year we published a blog post that goes into detail on which projects are used for each OpenShift feature.
  • Do you have trouble remembering all the fields for install-config.yaml? Do you want to see a definition of each field in the output of an oc or kubectl command? We showed how to use the explain subcommand for oc, kubectl, and openshift-install to get that information, and more, quickly and easily.
  • Do you use vRealize Automation (vRA)? Do you want to use it with OpenShift? During the stream we talked about a couple of options for using vRA, including a (defunct) VMware Fling and a recently published blog post by one of the VMware folks that details the process.
  • The next topic discussed is resizing control plane nodes. The process of changing the CPU or memory allotment for control plane nodes is different from the process for compute nodes. The safest approach for hyperscaler deployments, like AWS or Azure, is to work through the control plane nodes one at a time: remove a node from the cluster, resize it, and rejoin it before moving to the next. For on-prem deployments, you can power off the nodes one at a time and resize their resources.
  • If you are relying on ImageContentSourcePolicy to pull install images from a non-Red Hat registry, and that registry is password-protected, pod-level pull secrets will be ignored. You’ll need to update the cluster’s global pull secret for it to work. The good news is that updating the global pull secret no longer requires a reboot of the nodes!
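To follow up on the deprecated API topic above: OpenShift 4.8 tracks API usage in the APIRequestCount resource. A quick sketch of how to query it (this assumes you're logged in with cluster admin privileges and, for the second command, that jq is installed):

```shell
# List all tracked API resources; the REMOVEDINRELEASE column shows
# which release, if any, removes each API
oc get apirequestcounts

# Narrow the view to only the APIs that are scheduled for removal
oc get apirequestcounts -o json | \
  jq -r '.items[] | select(.status.removedInRelease != null) | .metadata.name'
```

Any API that shows up in the second command, and is still receiving requests, is a candidate for the Operator updates mentioned above.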
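And for the explain subcommand mentioned above, here's a quick sketch of what it looks like in practice (field paths are examples; any valid resource path works):

```shell
# Describe a field of a cluster resource, including its nested fields
oc explain pods.spec.containers.resources

# kubectl has the same subcommand
kubectl explain deployment.spec.strategy

# openshift-install can do the same for install-config.yaml fields
openshift-install explain installconfig.controlPlane
```

Each command prints the field's type and a description, so you don't have to dig through the documentation to remember what goes where.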
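As a sketch of the global pull secret update mentioned above (the file path is a placeholder for your merged pull secret, which should include the credentials for your password-protected registry):

```shell
# Update the cluster-wide pull secret from a local dockerconfigjson file;
# <pull_secret_file> is a placeholder for the path to your merged pull secret
oc set data secret/pull-secret -n openshift-config \
  --from-file=.dockerconfigjson=<pull_secret_file>
```

On recent OpenShift releases the updated secret is rolled out to the nodes without a reboot.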

Questions answered and topics discussed during the stream: