Riding a motorcycle and running a software cloud share a piece of traditional wisdom: it's not about avoiding accidents, it's about preparing for them and minimizing their impact. On a motorcycle, your vehicle is so small compared to the others on the road that it is sensible to wear a helmet, pads and gloves: the thinking is that you cannot avoid crashes forever, but you can be prepared for the few you are likely to have in your time with the bike.
The same can be said of any computing endeavor: somehow, somewhere, something is going to break and crash a server, a container or a node. In the best of systems, this may happen only once a year for a few minutes. In the worst, it can mean the end of a business.
But there are helmets and body armor for computing. Until now, however, OpenShift users were wearing the equivalent of the armor knights and samurai wore into battle centuries ago: powerful, perhaps even overengineered for the task of riding through the city and down the highway. Today, we announce our sleek, color-coordinated motorcycle gear in the form of OpenShift Disaster Recovery capabilities.
Disaster Recovery for OpenShift workloads
Disaster Recovery in Kubernetes is not simply about replacing dead nodes: killing and replacing nodes is a basic part of everyday business for cloud-enabled applications. Instead, it's about getting systems back up and running at scale and at speed. While Disaster Recovery for OpenShift workloads includes all the integrations and tools needed to recover from most traditional outage scenarios, it's the container-focused capabilities that truly take on the next generation of challenges.
The features used with Disaster Recovery for OpenShift workloads are woven into the very fabric of OpenShift itself, including Red Hat OpenShift Advanced Cluster Management, Red Hat OpenShift Data Foundation, and Red Hat Ceph Storage. With these bundled together in OpenShift Platform Plus, Disaster Recovery for OpenShift workloads can handle the nodes, the clusters, the object stores, and the intervening cluster connections that serve workloads.
That means OpenShift is now prepared to help you remedy outages ranging from the regional to the metropolitan to the local. And instead of reprovisioning every node from a traditional recovery cluster, OpenShift Disaster Recovery also stores information about all the services and Kubernetes APIs that were in place before the outage. That means saving all relevant and required data to reproduce workloads, including namespace specifics, on either the original cluster (in-place restore) or on a replacement cluster (out-of-place restore).
Safety in Numbers
Three methodologies support business continuity with disaster recovery functionality:
- OADP: OpenShift API for Data Protection. An API that allows customers to enable existing backup and data recovery applications to interact with OpenShift workloads, producing a complete workload backup that can effectively be restored into the same or another cluster.
- Regional DR (TP): RHACM-controlled automated protection for block volumes, using asynchronous replication. Protects business functionality when a disaster strikes a geographical location.
- Metropolitan DR (TP): RHACM-controlled protection against data loss across multiple clusters, using synchronous replication. This enables instant protection of business functionality, with a near-zero recovery point objective (RPO).
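To make the OADP approach concrete, here is a minimal sketch of a backup and restore pair expressed as Velero custom resources, which OADP builds on. The names `myapp-backup`, `myapp-restore`, and the application namespace `myapp` are hypothetical, and the operator namespace may differ in your installation:

```yaml
# Hypothetical example: back up everything in the "myapp" namespace.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: myapp-backup          # hypothetical backup name
  namespace: openshift-adp    # assumed OADP operator namespace; verify in your cluster
spec:
  includedNamespaces:
    - myapp                   # hypothetical application namespace to protect
  ttl: 720h0m0s               # retain the backup for 30 days
---
# Restore that backup, in-place or on a replacement cluster
# that has access to the same backup storage location.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: myapp-restore         # hypothetical restore name
  namespace: openshift-adp
spec:
  backupName: myapp-backup    # must match the Backup above
```

Applying the `Backup` resource captures the namespace's Kubernetes objects to the configured backup storage location; applying the `Restore` resource on the same or a different cluster recreates them, which is what enables the in-place and out-of-place restore scenarios described above.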
About the author
Marcel Hergaarden is a Product Marketing Manager within the Data Foundation Business team. Hergaarden has been with Red Hat since 2012, and with the Data Foundation team since 2019. He has a technical background and has extensive experience in infrastructure-related technical sales roles.