Red Hat Blog
Editor’s note: This post is the second in a four-part series on private cloud from Red Hat technology evangelist Gordon Haff. Find the first post here: A private cloud is a part of a hybrid cloud.
OpenStack is a combination of open source projects that use pooled virtual resources to build and manage private and public clouds. Six of these projects handle the core cloud computing services: compute, networking, object and block storage, identity, and images. More than a dozen optional projects can be bundled together with them to create unique, deployable clouds. OpenStack is, therefore, best thought of as a framework rather than as a single monolithic project.
For example, Ceph provides OpenStack with a distributed object store, network block device, and file system. This infrastructure is exposed to users as abstracted pools of resources. Under normal circumstances, users don’t need to know any of the underlying details other than the type of resources they request.
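The pooled-resource idea can be sketched in a few lines of plain Python. This is purely illustrative: the class and method names below are hypothetical and are not part of any OpenStack API. The point is that a user asks for resource types and amounts, and the pool grants or refuses the request without ever exposing backend details.

```python
# Illustrative sketch only: a toy model of a cloud exposing pooled,
# abstracted resources. Names are hypothetical, not any OpenStack API.

class ResourcePool:
    """Hands out capacity by resource type, hiding the backend entirely."""

    def __init__(self, capacity):
        # e.g. {"vcpus": 64, "ram_gb": 256, "disk_gb": 2000}
        self.capacity = dict(capacity)

    def request(self, **wanted):
        """Grant a request only if every resource type has enough headroom."""
        if any(self.capacity.get(k, 0) < v for k, v in wanted.items()):
            raise RuntimeError("pool exhausted")
        for k, v in wanted.items():
            self.capacity[k] -= v
        return dict(wanted)  # the user sees only what was requested

pool = ResourcePool({"vcpus": 8, "ram_gb": 32})
grant = pool.request(vcpus=2, ram_gb=4)  # no backend details leak out
```

In a real deployment the "pool" is Ceph, Nova, or another service behind an API, but the contract is the same: request a type and amount of resource, and let the platform worry about where it lives.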
Co-engineered with Linux
At the core of a private cloud such as OpenStack is a stable operating system that provides access to hardware resources, system-wide performance, greater security, and scalability. In fact, at Red Hat we co-engineer Red Hat OpenStack Platform and Red Hat Enterprise Linux with closely aligned product teams. This provides the foundation for reliable and tested software-defined infrastructure that’s the core of a private cloud. An operating system layer such as Red Hat Enterprise Linux also provides independence and portability across public, private, and hybrid environments.
We’ll see Linux playing an important role again with containers in the next post in this series. Linux is right in the thick of things in an abstracted cloud world. It has features that help make cloud-native platforms and applications run more efficiently and safely. And it provides predictability and stability across complex distributed environments.
Both virtualization and private cloud platforms have evolved since “cloud” first put in an appearance. Many virtualization products have added features initially associated with clouds, such as self-service and automation. Red Hat Virtualization has specifically added certain OpenStack software-defined services, such as those for networking and storage.
At the same time, OpenStack has added features that initially weren’t seen as necessary for cloud-native workloads, such as live migration of running virtual machine instances from one hardware node to another.
As a result, there’s increased overlap in capabilities between virtualization and private cloud. This can make the transition to cloud-enabled workloads easier and allow organizations to deploy both traditional and cloud-enabled workloads on common and shared services such as the KVM hypervisor. Given the common hypervisor, as well as the same virtual machine image format, a set of common templates or images can also be used across both environments. This structure can reduce complexity and increase efficiency in a hybrid cloud environment by decreasing the number of templates that have to be maintained.
Red Hat CloudForms also provides a common management interface over Red Hat Virtualization and Red Hat OpenStack Platform (as well as other virtualization and cloud platforms).
OpenStack is nonetheless primarily focused on cloud-native workload deployments and the needs of organizations for which a flexible, dynamic on-premises platform for those workloads is a key part of their strategy. The platform’s technology and software design choices optimize for large-scale, dynamic, scale-out workloads. It’s also more attuned to the higher level of resource abstraction that’s familiar from public clouds. Think in terms of deploying many relatively small, short-lived resources rather than a few big, long-lived virtual servers.
(OpenStack is also used by public cloud providers and for Network Function Virtualization (NFV) but I’m focusing on its use for private clouds here.)
What are cloud-native workloads?
There have been various efforts to define cloud-native applications. Describing them in terms of the twelve-factor app methodology is one. Metaphorically linking them to disposable “cattle,” in contrast to legacy app “pets” that are lovingly maintained and nursed back to health if they get sick, is another. The latest is to link them to decoupled application concepts such as microservices.
All of these become a bit of a caricature if over-applied or otherwise taken too literally. (Or worse, they force you into making design decisions that aren’t appropriate to solve a given business requirement.) Metaphors and models simplify and abstract a messy real world down to especially relevant or important points. But over time, these simplifications can come to be seen as too simple or not adequately capturing essential aspects of reality.
That said, cloud-native applications do follow certain patterns that can make them better fits for infrastructure like OpenStack and for container platforms like OpenShift.
For example, many of the components/instances of a cloud-native application should be designed to be stateless. That is, they should use ephemeral storage: storage and data that stick around only for the life of the instance itself. But most cloud-native applications also require persistent storage somewhere. One can simply assume that it’s provided through some sort of backing service, perhaps on a legacy system running somewhere else. However, as cloud-native designs become more common, there’s an increasing need to provide persistence mechanisms within the private cloud itself.
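The stateless pattern can be caricatured in a short Python sketch (all names here are hypothetical; a real backing service would be a database or object store, not an in-memory dict). Because a worker holds no state of its own, any instance can be destroyed and a replacement picks up exactly where it left off.

```python
# Illustrative sketch: stateless workers backed by an external store.
# Names are hypothetical; in practice the store is a database or
# object store reached over the network.

backing_store = {}  # stands in for a persistent backing service

class StatelessWorker:
    """Holds no durable state of its own; everything lives in the store."""

    def __init__(self, store):
        self.store = store  # reference to the shared backing service

    def handle(self, key, value):
        self.store[key] = value  # persist outside the instance
        return self.store[key]

w1 = StatelessWorker(backing_store)
w1.handle("order-42", "paid")
del w1                               # the instance goes away...
w2 = StatelessWorker(backing_store)  # ...its replacement sees the same data
```

The design choice being illustrated: state belongs in the backing service, never in the instance, which is what makes instances disposable in the first place.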
Another important aspect of cloud-native applications is the manner in which they scale. They mostly scale horizontally by adding more instances (scale-out) rather than by making individual instances larger (scale-up). For many cloud apps, individual physical systems simply aren’t big enough to run the entire app through vertical scaling. How clusters of instances scale out can vary, though. Some types of applications are batch-oriented in the vein of traditional high-performance computing/grid, while others are composed from multiple layers of services communicating through APIs. There’s also considerable variety not only in the absolute scale of the application components being scheduled and orchestrated, but also in the components themselves and in their requirements around quality of service, latency sensitivity, frequency of scheduling, and so forth.
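At its simplest, scale-out is an arithmetic decision: add identical instances until aggregate capacity covers demand. The numbers and function name below are invented for illustration only.

```python
import math

def instances_needed(requests_per_sec, capacity_per_instance, headroom=1.2):
    """Scale out: add identical instances until demand (plus headroom)
    is covered. All figures here are illustrative."""
    return math.ceil(requests_per_sec * headroom / capacity_per_instance)

# Scale-up would mean one ever-larger server; scale-out adds small ones.
# 10,000 req/s at 750 req/s per instance, with 20% headroom:
print(instances_needed(10_000, 750))  # 16
```

The contrast with scale-up is that when demand doubles, the answer is "run twice as many small instances," not "find a server twice as big," and the same arithmetic runs in reverse to shrink the fleet when demand drops.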
In general, the goal is to make these cluster components as small and simple as possible—hence microservices. Applications are then composed of these components communicating through service interfaces. Whether or not microservices are always used in their purest form, the overarching goal is to eliminate large, rigid monolithic apps in favor of more modular loosely-coupled ones. In addition to the flexibility this can provide when running applications, it’s a better fit for DevOps development practices using continuous integration/continuous delivery pipelines.
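The "small components behind service interfaces" idea can be sketched in miniature (hypothetical names throughout; real microservices would communicate over network APIs rather than function calls, but the coupling argument is the same):

```python
# Illustrative only: each "service" does one small thing and is reached
# through a narrow interface; the app is just their composition.

def inventory_service(item):
    """Pretend lookup against an inventory backend."""
    return {"widget": 3}.get(item, 0)

def pricing_service(item):
    """Pretend lookup against a pricing backend."""
    return {"widget": 9.99}.get(item)

def order_app(item, qty):
    """Composes the two services; knows nothing about their internals."""
    if inventory_service(item) < qty:
        return None  # not enough stock
    return qty * pricing_service(item)

print(order_app("widget", 2))  # 19.98
```

Because `order_app` only depends on the two narrow interfaces, either service can be reimplemented, redeployed, or scaled independently, which is exactly the flexibility the monolith gives up.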
A third characteristic that is sort of implied by the other two is that instances tend to be short-lived and disposable. Containerized applications take this aspect to its logical conclusion with fully immutable instances that can’t be changed once they’re launched. However, even in the absence of containers, good practice for cloud-native workloads is to shut down instances that aren’t running properly, require a security patch, or need to be reconfigured and just start up new ones. (This contrasts with how traditional configuration management practice often involves monitoring long-running instances and making changes to a running instance as needed.)
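That replace-rather-than-repair practice can be sketched as a toy reconciliation loop (all names hypothetical): anything unhealthy, or running the wrong image, is terminated and relaunched rather than patched in place.

```python
# Illustrative sketch: disposable instances. Unhealthy or out-of-date
# instances are replaced, never modified. Names are hypothetical.
import itertools

_ids = itertools.count(1)

def launch(image_version):
    """Instances are immutable: their configuration is baked in at launch."""
    return {"id": next(_ids), "image": image_version, "healthy": True}

def reconcile(fleet, desired_image):
    """Keep healthy, current instances; replace everything else."""
    keep = [i for i in fleet
            if i["healthy"] and i["image"] == desired_image]
    replacements = [launch(desired_image)
                    for _ in range(len(fleet) - len(keep))]
    return keep + replacements

fleet = [launch("v1"), launch("v1"), launch("v1")]
fleet[1]["healthy"] = False          # one instance misbehaves...
fleet = reconcile(fleet, "v1")       # ...so it alone is replaced
fleet = reconcile(fleet, "v2")       # an upgrade is new instances, not patches
```

Contrast this with traditional configuration management, which watches long-running instances and mutates them in place; here the only "fix" is a fresh launch.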
Beyond private IaaS
Taken by itself, a private IaaS cloud like OpenStack is a great platform for scalable cloud-native apps that a business can use to put in place a much more agile infrastructure than traditional bare metal servers or virtualization provides. But it’s more than that. It’s also a scalable foundation for a container platform and, together with hybrid cloud management, it’s an important consideration in an organization’s hybrid cloud strategy. I’ll be taking a look at container platforms and hybrid cloud management in subsequent posts.
For more on cloud strategies, watch our webinar series: Defining Your Cloud Strategy Step by Step
About the author
Gordon Haff is a technology evangelist and has been at Red Hat for more than 10 years. Prior to Red Hat, as an IT industry analyst, Gordon wrote hundreds of research notes, was frequently quoted in publications such as The New York Times on a wide range of IT topics, and advised clients on product and marketing strategies.