
What is high availability and disaster recovery for containers?


As organizations move their systems to the hybrid cloud, resilience is often a critical concern. The ability to withstand errors and failures without data loss is key to providing reliable application services that contribute to business continuity. 

Critical applications must also continue to perform well, even under component failure. Applications alone can go only so far in providing resilience, ultimately depending on underlying data services infrastructure for resilience and performance under failure conditions.


High availability (HA) is the practice of protecting infrastructure or applications within a single site to ensure continuous operations. The aim is to reduce single points of failure in a computing stack, generally through redundant access paths and component resiliency. Building high availability into an environment means services have built-in resiliency and can recover on their own: a failed service can restart, a faulted node can be rebooted, a workload on failed hardware can be redeployed elsewhere in the environment, and transactions can be resent to the service, or to a different instance of it, if a network path fails.
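In Kubernetes, several of these self-recovery behaviors can be expressed declaratively. The sketch below is a minimal, hypothetical Deployment (the names web and web-image:1.0 are placeholders, not from any specific product): multiple replicas spread across nodes reduce single points of failure, and a liveness probe lets the kubelet restart a container that stops responding.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                        # hypothetical name
spec:
  replicas: 3                      # redundant instances; survives loss of one pod
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:           # prefer spreading replicas across nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname
      containers:
      - name: web
        image: web-image:1.0       # placeholder image
        livenessProbe:             # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
```

If a node fails, the scheduler redeploys the missing replicas on the remaining nodes, which is the "recover on their own" behavior described above.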

High availability is key to ensuring your applications operate without downtime and can handle unforeseen failures. Technologies such as containers, Kubernetes, and serverless present new opportunities in application development but still need a recovery plan in the event of a failure.

Disaster recovery (DR) is the ability to recover business-critical applications and continue operating after a natural or human-caused disaster, protecting infrastructure or applications in a geographically distributed manner to reduce business impact as much as possible. It is a core part of the business continuity strategy of any major organization, designed to preserve operations during major adverse events. The aim is to enable automated or operator-initiated recovery over longer distances than traditional high availability, extending recovery to a different cluster. In environments where an application can run at only one site at a time, failover between sites may not be fully automatic: an individual with authority may need to decide to move computing services between sites, because failover carries a cost to resynchronize applications afterward. Reducing the time it takes to recover from incidents is critical to your organization's success.

Regional DR capability provides volume-persistent data and metadata replication across sites that are geographically dispersed; in the public cloud, this is akin to protecting against the failure of an entire region. Regional DR ensures business continuity during the unavailability of a geographical region while accepting a predictable amount of data loss. This tolerance is usually expressed as a recovery point objective (RPO) and a recovery time objective (RTO).

RPO is a measure of how frequently you take backups or snapshots of persistent data. In practice, the RPO indicates the amount of data that will be lost or need to be reentered after an outage.

RTO is the amount of downtime a business can tolerate. The RTO answers the question, "How long can it take for our system to recover after we were notified of a business disruption?"
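As a toy illustration (not tied to any specific product), worst-case data loss under a snapshot-based scheme is bounded by the snapshot interval, so the RPO directly constrains how often you must take snapshots, while measured recovery time is checked against the RTO:

```python
def max_data_loss_minutes(snapshot_interval_minutes: int) -> int:
    """Worst case: the outage strikes just before the next snapshot,
    so everything written since the last snapshot is lost."""
    return snapshot_interval_minutes

def meets_objectives(snapshot_interval: int, rpo: int,
                     recovery_minutes: int, rto: int) -> bool:
    """True if a plan satisfies both objectives (all values in minutes)."""
    return (max_data_loss_minutes(snapshot_interval) <= rpo
            and recovery_minutes <= rto)

# Snapshots every 15 min, 30 min to restore, against RPO=30 / RTO=60:
print(meets_objectives(15, 30, 30, 60))   # True
# Hourly snapshots would violate a 30-minute RPO:
print(meets_objectives(60, 30, 30, 60))   # False
```

The numbers here are illustrative; real plans must also account for replication lag and the time to detect the outage in the first place.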

Check out the documentation on configuring Red Hat OpenShift Data Foundation for regional disaster recovery with Red Hat Advanced Cluster Management.


Our ecosystem partners work with us to validate their solutions with our platforms and enable a robust data protection and disaster recovery strategy. Find out more about our network and storage infrastructure partners.

There’s a lot more to do with containers.


Get started

Red Hat Consulting offers more than just technical expertise. We're strategic advisors who take a big-picture view of your organization, analyze your challenges, and help you overcome them.

Keep reading

Article

Containers vs VMs

Linux containers and virtual machines (VMs) are packaged computing environments that combine various IT components and isolate them from the rest of the system.

Article

What is container orchestration?

Container orchestration automates the deployment, management, scaling, and networking of containers.

Article

What's a Linux container?

A Linux container is a set of processes isolated from the system, running from a distinct image that provides all the files necessary to support the processes.

More about containers

Products

Red Hat OpenShift

An enterprise-ready Kubernetes container platform with full-stack automated operations to manage hybrid cloud, multicloud, and edge deployments.

Resources

Training

Free training course

Running Containers with Red Hat Technical Overview

Free training course

Containers, Kubernetes and Red Hat OpenShift Technical Overview

Free training course

Developing Cloud-Native Applications with Microservices Architectures