You've probably heard about the growth of edge computing, but what is the edge? And what does it mean, especially for OpenShift admins? By moving workloads to the edge of the network, devices spend less time communicating with the cloud, react faster to local changes, and operate more reliably. But along with the opportunities that edge computing brings, there are complexities to consider as you build out an infrastructure to support these use cases.
So in this episode, we talk about what it's like to work with OpenShift at the edge. That means things like running nodes at remote sites, single-node OpenShift instances, and compact clusters. We'll also take a look at management tools and philosophies. Red Hat Principal Technical Marketing Manager Mark Schmitt joins the stream to help explore how the edge is a very different place from the data center and gives us a unique perspective on OpenShift at the edge.
As always, please see the list below for additional links to specific topics, questions, and supporting materials for the episode!
If you’re interested in more streaming content, please subscribe to the Red Hat livestreaming calendar to see the upcoming episode topics and to receive any schedule changes. If you have questions or topic suggestions for the Ask an OpenShift Admin Office Hour, please contact us via Discord, Twitter, or come join us live, Wednesdays at 11am EDT / 1500 UTC, on YouTube and Twitch.
Episode 42 recorded stream:
Use this link to jump directly to where we start talking about today’s topic.
This week’s top of mind topics:
- Our first topic today is about APIs being deprecated. OpenShift 4.8, which uses Kubernetes 1.21, introduced the ability to see which APIs are being used, including whether they're slated for removal in a future release. If you have Operators, whether created by you or by third parties, be sure to update them to use the new API endpoints!
- Did you know that OpenShift consists of a large number of open source projects? Last year we published a blog post that goes into detail on which projects are used for each OpenShift feature.
- Do you have trouble remembering all the fields for install-config.yaml? Do you want to see a definition of each field in the output of an oc or kubectl command? We showed how to use the explain subcommand for oc, kubectl, and openshift-install to get that information, and more, quickly and easily.
- Do you use vRealize Automation (vRA)? Do you want to use it with OpenShift? During the stream we talked about a couple of options for using vRA, including a (defunct) VMware Fling and a recently published blog post by one of the VMware folks that details the process.
- The next topic discussed is resizing control plane nodes. The process of changing the CPU or memory allotment for control plane nodes is different than for compute nodes. For hyperscaler deployments, like AWS or Azure, the safest approach is to replace the control plane nodes one at a time: remove a node from the cluster, resize it, then rejoin it. For on-prem deployments, you can power off the nodes one at a time and resize their resources.
- If you are relying on ImageContentSourcePolicy to pull install images from a non-Red Hat registry, and that registry is password-protected, pod-level pull secrets will be ignored. You’ll need to update the cluster’s global pull secret for it to work. The good news is that updating the global pull secret no longer requires a reboot of the nodes!
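The deprecated-API check mentioned above can be done from the CLI using the APIRequestCount resource introduced alongside OpenShift 4.8. A rough sketch, assuming cluster-admin access to a live 4.8+ cluster (output columns can vary by version):

```shell
# List all tracked API endpoints and their recent request counts
oc get apirequestcounts

# Show only the APIs that are slated for removal, with the release
# in which each one goes away
oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'
```

If anything shows up in the second command, chase down which clients are still calling those endpoints before upgrading past the removal release.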
Questions answered and topics discussed during the stream:
- One of our viewers asked, “what is a non-integrated install?” The short version is that it’s any cluster where the platform is set to `none` in install-config.yaml, which means there’s no cloud provider integration with the underlying infrastructure.
- Another viewer asked about a stream where we showed deploying OpenShift 4.7 using libvirt. We weren’t sure which stream they were thinking of, but the OKD folks did show installing using libvirt during their testing and deployment workshop earlier this year. That event included a session (which I co-presented) using libvirt; you can see the details here.
- Mark does a great job of walking us through what edge is, where it applies, and who should be interested in edge solutions starting here.
- Can we run a single node OpenShift deployment on-premises or only at the edge? Single node is targeted at edge use cases, but there’s nothing preventing you from using it anywhere that makes sense for you.
- Would a CodeReady Containers (CRC) deployment be considered a single node OpenShift deployment for edge use cases? No, for a couple of reasons. First, and most importantly, it wouldn’t be supported. CRC is meant for developers to create and test applications locally, not for production workloads. Second, there’s not a robust update / upgrade mechanism available for this scenario.
- Mark also shows some potential cluster architectures for edge deployments, including single node, three node, and remote workers.
- A three node cluster is a standard OpenShift deployment that happens to have the control plane marked as schedulable at deployment time. It can be expanded with worker nodes at any time, and you can even, after adding worker nodes, change the control plane to be non-schedulable so it behaves like a standard deployment.
- Can a “standard” deployment, for example vSphere IPI, have the control plane marked as schedulable on day 2? Yes, this works as expected.
- Red Hat Advanced Cluster Management for Kubernetes (RHACM) also plays an important role in edge deployments, particularly around managing clusters at scale. We show and discuss the architecture and capabilities during the stream at the linked time.
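For reference, the schedulable control plane toggle discussed above lives in the cluster-scoped Scheduler resource. A minimal sketch of the relevant config (per the `config.openshift.io/v1` API; always check your cluster's version of the resource before editing):

```yaml
# Cluster Scheduler config: mastersSchedulable: true lets regular
# workloads run on control plane nodes (the three-node topology);
# setting it back to false returns to a standard deployment where
# only control plane workloads run there.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: true
```

You can change this on a running cluster with `oc edit schedulers.config.openshift.io cluster` and flipping the `mastersSchedulable` field.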