
Deploying a highly available OpenStack cloud

INTRODUCTION

The second in a series from Red Hat® Consulting, this whitepaper examines the resiliency of cloud architectures and provides detailed best practices for setting up and configuring a highly available OpenStack deployment. 

CREATING A HIGHLY AVAILABLE OPENSTACK ENVIRONMENT 

A general strategy for creating a highly available OpenStack cloud environment is to use horizontal scaling to deploy clustered services wherever possible. Nonscalable resources should be protected with automated failure detection, shared resource recovery, and restart capabilities. Within this strategy, there are several concepts and components that must be addressed to provide reliability. 

HORIZONTAL SCALING 

A basic premise in OpenStack architecture is horizontal scaling. As your environment grows, you simply add more nodes for a particular service to accommodate increases in users and demand. This clustered architecture eliminates single points of failure and increases the availability and reliability of OpenStack services.

HAPROXY LOAD BALANCER 

Horizontally scaled services must be load balanced for optimal performance. Load balancing of layer 7 (application layer) requests between clustered service nodes is accomplished with the HAProxy load balancer. For increased automation and reliability, use the Foreman OpenStack installer module to deploy load-balanced services via HAProxy.
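For illustration, the following haproxy.cfg excerpt sketches a load-balanced frontend for the Keystone identity API across two clustered service nodes. The node names, virtual IP address, and check intervals are examples only; the Foreman module generates equivalent configuration automatically.

    # haproxy.cfg excerpt -- load balance the Keystone public API (port 5000)
    # across two service nodes; names and addresses are illustrative only.
    listen keystone-public
        bind 192.168.1.100:5000          # virtual IP fronting the cluster
        balance roundrobin               # distribute layer 7 requests evenly
        option tcpka                     # keep idle client connections alive
        server controller1 192.168.1.11:5000 check inter 2000 rise 2 fall 5
        server controller2 192.168.1.12:5000 check inter 2000 rise 2 fall 5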

PACEMAKER AND COROSYNC 

Stateful OpenStack application programming interface (API) services cannot be horizontally scaled and must rely on another mechanism for resilient operation. Pacemaker and Corosync provide highly available resource management on an open foundation for these stateful services. Pacemaker manages resources across service nodes and detects and recovers failed applications and virtual machines in the event of a disruption. Corosync provides the cluster messaging layer that enables communication between the clustered nodes.
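A minimal sketch of this arrangement, using the pcs command-line tool with hypothetical host names and addresses (exact pcs syntax varies by version), first forms the Corosync/Pacemaker cluster and then places a monitored virtual IP under Pacemaker control:

    # Authenticate the nodes and form a two-node Corosync/Pacemaker cluster
    # (host names are illustrative; syntax shown for the pcs 0.9 series).
    pcs cluster auth controller1 controller2
    pcs cluster setup --name openstack-ha controller1 controller2
    pcs cluster start --all

    # Place a virtual IP under Pacemaker control and monitor it every 30 seconds;
    # Pacemaker moves the address to a surviving node if the active node fails.
    pcs resource create cluster-vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s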

SERVICE START ORDER AND DEPENDENCIES 

Many OpenStack services depend on other services for operation. Services must be brought online in the correct order, without errors, for high-availability operation. Figure 1 shows the order in which highly available services should be started.

Figure 1. OpenStack service dependencies and recommended start order
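Start-order dependencies can be enforced with Pacemaker ordering constraints. The fragment below is an illustrative subset only, with hypothetical resource names standing in for the database, message broker, and API resources; it expresses the common requirement that the database and messaging layers come up before the Keystone identity service, which in turn precedes the other API services:

    # Illustrative Pacemaker ordering constraints (resource names are hypothetical).
    pcs constraint order start database then keystone
    pcs constraint order start messaging then keystone
    pcs constraint order start keystone then nova-api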

 

RECOMMENDED CONFIGURATIONS FOR HIGH AVAILABILITY 

A conceptual view of a highly available OpenStack deployment is shown in Figure 2. Recommended configurations for critical components, including stateful and stateless core services, database, message infrastructure, load balancer, and network topology are detailed in the following sections. 

STATELESS CORE SERVICES 

RESTful, stateless OpenStack services are horizontally scalable. These include nova-api, nova-conductor, glance-api, keystone-api, neutron-api, and nova-scheduler. For highly available operation, deploy a load-balanced cluster of at least two nodes for each RESTful OpenStack service.
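Extending the earlier HAProxy sketch, each RESTful service gets its own load-balanced frontend. For example, a nova-api pool spanning two nodes (ports, names, and addresses are illustrative) might look like this:

    # haproxy.cfg excerpt -- nova-api (compute API, port 8774) across two nodes.
    listen nova-api
        bind 192.168.1.100:8774
        balance roundrobin
        option httpchk                   # mark a node down if its HTTP check fails
        server controller1 192.168.1.11:8774 check inter 2000 rise 2 fall 5
        server controller2 192.168.1.12:8774 check inter 2000 rise 2 fall 5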

NON-REST CORE SERVICES 

Stateful, or non-REST, OpenStack core services are not horizontally scalable. Stateful OpenStack services include the Neutron L3 and Dynamic Host Configuration Protocol (DHCP) agents, nova-scheduler, nova-conductor, Swift proxy and storage nodes, Keystone identity services, Cinder and Glance storage services, the Heat orchestration engine, and Ceilometer telemetry services. If a non-REST service is disrupted, its state must be preserved. For highly available operation, deploy at least two nodes for each stateful service and use Pacemaker with Corosync to provide automated failover between nodes. Pacemaker also performs ongoing health checks on each node and automatically attempts to restart failed nodes.
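As a sketch, a stateful agent such as the Neutron DHCP agent can be wrapped as a Pacemaker resource, here using the systemd resource agent (unit names may differ between distributions), so Pacemaker restarts it locally or fails it over to the standby node:

    # Manage the Neutron DHCP agent as a Pacemaker resource with a 60-second
    # health check; on failure Pacemaker restarts it or moves it to another node.
    pcs resource create neutron-dhcp-agent systemd:neutron-dhcp-agent \
        op monitor interval=60s

    # Keep the agent co-located with the cluster virtual IP from the earlier sketch.
    pcs constraint colocation add neutron-dhcp-agent with cluster-vip INFINITY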

Figure 2. Conceptual layout of a highly available OpenStack environment

 

DATABASE 

OpenStack services and applications depend on an underlying database infrastructure. If the database is unavailable, the services and applications that rely on it go down and business suffers. For highly available database operations, deploy a three-node database cluster in an active-active configuration so the cluster can maintain quorum. Use HAProxy to load balance data requests across the three-node cluster for improved performance and faster failover. MariaDB is recommended for high-availability cloud databases due to its active open source community, reduced legal constraints, enhanced performance, and fast security patching. And, for the highest availability, use storage systems with built-in high-availability features for back-end database storage.
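The active-active behavior described above is commonly implemented with MariaDB Galera Cluster. A minimal sketch of the Galera settings applied on each of the three nodes follows; the cluster name, addresses, and library path are illustrative and depend on your distribution.

    # /etc/my.cnf.d/galera.cnf excerpt -- synchronous active-active replication
    # across three MariaDB nodes (names and addresses are illustrative).
    [mysqld]
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2
    wsrep_on                 = ON
    wsrep_provider           = /usr/lib64/galera/libgalera_smm.so
    wsrep_cluster_name       = openstack-db
    wsrep_cluster_address    = gcomm://192.168.2.11,192.168.2.12,192.168.2.13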

MESSAGE INFRASTRUCTURE 

The message infrastructure provides critical communication between OpenStack services and resources. To build a highly available message infrastructure, use Qpid Advanced Message Queuing Protocol (AMQP) brokers in an active-passive configuration with a virtual IP address (VIP) monitored by Pacemaker. Compared to RabbitMQ, the Qpid AMQP implementation scales more linearly and provides better performance under load, so your OpenStack environment can easily expand without impacting performance. Pacemaker provides automated failover and recovery of failed message brokers. This configuration reduces the mean time to recovery (MTTR) for failed Qpid brokers but does not provide message durability. 
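A sketch of this active-passive arrangement with pcs, using hypothetical names and addresses: the Qpid broker and its virtual IP are kept together so they always run on the same node, and Pacemaker restarts or relocates both on failure.

    # Virtual IP for AMQP clients, plus the qpidd broker managed through systemd.
    pcs resource create amqp-vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.101 cidr_netmask=24 op monitor interval=30s
    pcs resource create qpidd systemd:qpidd op monitor interval=30s

    # Keep the broker and its address together, and bring the address up first.
    pcs constraint colocation add qpidd with amqp-vip INFINITY
    pcs constraint order start amqp-vip then qpidd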

For OpenStack deployments where message durability is required, deploy clustered Qpid brokers and replicate all messages between all brokers using Corosync. In this configuration, setup is more complex and scalability is limited to a maximum of 16 brokers. 

LOAD BALANCER 

While HAProxy provides highly available operation for load-balanced OpenStack services, it is itself a single point of failure. For highly available load-balancing services, deploy at least two, and up to 100, physical or virtual HAProxy nodes in a scale-out configuration. Use Pacemaker to provide virtual IP failover between the nodes. For more streamlined administration, consider using the Load-Balancing-as-a-Service plugin layer with the Heat orchestration engine. Pacemaker can also provide automated failover for highly available Load-Balancing-as-a-Service deployments.
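One hedged sketch of this pattern: run HAProxy as a cloned Pacemaker resource on every load-balancer node and tie the public virtual IP to any node with a healthy HAProxy instance (resource names and addresses are illustrative).

    # Run HAProxy on all load-balancer nodes as a cloned Pacemaker resource.
    pcs resource create haproxy systemd:haproxy op monitor interval=30s --clone

    # The public virtual IP follows a node that has a healthy HAProxy instance.
    pcs resource create public-vip ocf:heartbeat:IPaddr2 \
        ip=10.0.0.100 cidr_netmask=24 op monitor interval=30s
    pcs constraint colocation add public-vip with haproxy-clone INFINITY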

NETWORKING AND INFRASTRUCTURE TOPOLOGY 

The networking and infrastructure topology affects the availability of the overall OpenStack environment as much as the availability of the individual components. To build a highly available OpenStack infrastructure, segregate your internal networking by function: management, storage, and tenant. The management network connects the core OpenStack API services with the Horizon web dashboard for service node management. The storage network provides separate connectivity between the Swift proxy and storage nodes to ensure that storage traffic does not interfere with tenant traffic. Finally, one or more tenant networks connect OpenStack instances running on the compute nodes. Use OpenStack Networking (Neutron), Open vSwitch (OVS), and Provider Network Extensions to map OpenStack networks directly to physical networks in the datacenter. This method supports local, virtual local area network (VLAN), and Generic Routing Encapsulation (GRE) deployment models.
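A sketch of the provider-network mapping with the Neutron ML2 plug-in and the OVS agent follows; the physical network name, bridge, and VLAN range are illustrative, and the [ovs] section may live in the OVS agent's own configuration file depending on the distribution.

    # /etc/neutron/plugins/ml2/ml2_conf.ini excerpt -- provider network mapping.
    [ml2]
    type_drivers         = local,vlan,gre
    tenant_network_types = vlan
    mechanism_drivers    = openvswitch

    [ml2_type_vlan]
    network_vlan_ranges  = physnet-tenant:100:199

    # OVS agent: map the named physical network to a local bridge attached to
    # the corresponding datacenter network interface.
    [ovs]
    bridge_mappings      = physnet-tenant:br-tenant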

HIGHLY AVAILABLE OPENSTACK WITH RED HAT

Red Hat delivers OpenStack with the commercially hardened and certified Red Hat OpenStack Platform. Red Hat also provides consulting and training services that help you build your OpenStack-based cloud faster and operate it more efficiently.

RED HAT OPENSTACK PLATFORM 

Through extensive testing and validation, Red Hat OpenStack Platform combines stability, reliability, support, and security with community-based innovation. Tight integration with Red Hat Enterprise Linux® increases resiliency and availability at both the operating system and cloud levels. Red Hat OpenStack Platform also incorporates tested, hardened, and fully supported versions of the OpenStack tools you need for highly available operation, including Pacemaker and Corosync. And with Red Hat Enterprise Linux Network Load Balancer Add-On, you can streamline HAProxy deployment and configuration through built-in Puppet modules.

RED HAT CONSULTING SERVICES 

Red Hat can help you build your OpenStack environment quickly and cost-effectively so you can take advantage of the benefits of the cloud faster. Consulting services include infrastructure assessment, cloud planning, installation, testing, and mentoring. 

RED HAT TRAINING AND CERTIFICATION 

The Red Hat OpenStack Administration training course (CL210) teaches system administrators how to build a cloud using Red Hat OpenStack Platform and prepares them for the Red Hat Certified OpenStack Administrator Exam (EX210). Topics are explored through hands-on labs and include installation, configuration, and maintenance. Successful completion of the training course and exam earns candidates the Red Hat Certified System Administrator in Red Hat OpenStack certification.

CONCLUSION 

OpenStack is an essential tool for CSPs that need to deliver innovative new services quickly. By taking into account the availability of both the components and topology of your OpenStack infrastructure, you can build a cloud environment that is stable and reliable. Red Hat OpenStack Platform gives you a proven, supported cloud platform with the tools you need for highly available operation. And Red Hat’s consulting and training services can help you streamline deployment and take full advantage of your OpenStack investment. Contact your Red Hat sales representative to learn more about building a highly available, production-grade OpenStack environment with Red Hat.