In a recent post, I discussed using containers to deploy software-defined storage for cloud-native applications. This lets persistent storage be provisioned in close proximity to applications needing access to data.
Storage technologies such as Gluster can also make use of containers to help span hybrid cloud infrastructures including on-premises physical servers, enterprise virtualized environments, and clouds both private and public. In this way, Gluster provides data portability and flexibility for large, semi-structured or unstructured data sets. Gluster’s file-based architecture supports data access through multiple file and object protocols and is integrated with Red Hat OpenShift Container Platform—thereby helping developers to architect applications requiring persistent storage within Kubernetes clusters.
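For example, a containerized application on OpenShift can request Gluster-backed persistent storage through an ordinary PersistentVolumeClaim. The sketch below is illustrative: the storage class name glusterfs-storage is an assumption and depends on how the cluster administrator configured provisioning.

```yaml
# Hypothetical claim for Gluster-backed persistent storage.
# The storage class name is an assumption; check the classes
# actually defined in your cluster with "oc get storageclass".
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany        # Gluster volumes can be shared by many pods
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-storage
```

Pods that mount this claim get file storage that follows the application across the hybrid environments described above.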
Running Ceph in a container
However, containers can play other roles in configuring storage. Take the case of containerized storage daemons in the new Red Hat Ceph Storage release. These can be thought of as using containers to optimize storage deployments as opposed to “just” providing a service for developers writing applications that need storage.
Red Hat Ceph Storage can be deployed natively with Red Hat OpenStack Platform using OpenStack Platform director, the tool provided with the platform to automatically configure OpenStack services (Nova, Cinder, Manila, Glance, Keystone, Ceilometer, Swift, and now even Gnocchi) with a Ceph backend. Ceph thereby enables the data associated with an OpenStack installation to be stored on a single platform, managing block, object, and file data through tight integration with OpenStack services. Red Hat Ceph Storage has a massively distributed architecture and is often the best choice when the platform as a whole (rather than individual applications running on the platform) requires storage.
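In practice, enabling the Ceph backend is a matter of including the appropriate storage environment file in the overcloud deployment. The invocation below is an illustrative sketch only; the exact environment file name and path vary by OpenStack Platform release.

```shell
# Illustrative only: deploy an overcloud with Ceph as the storage
# backend by passing director a storage environment file. Consult
# your release's documentation for the correct file for your version.
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
```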
Containerized storage daemons (CSDs) are, in effect, a new Ceph deployment option that helps a platform’s operations team configure storage more efficiently. It’s part of a broader Red Hat (and industry) effort to deliver enterprise software in containers. Over time, we expect containerized software delivery to be the default approach.
With CSDs, you configure individual daemons inside containers with apportioned CPU and RAM. In this way, Red Hat Ceph Storage users can safely co-locate daemons on the same machine without worrying about resource conflicts. Containers are designed to ensure that daemons do not starve each other of resources in peak load and recovery situations, something that used to be addressed at higher cost by isolating these services on dedicated hardware. (Resource control is provided by the control groups (cgroups) feature in Linux. It ensures that a container may only use a defined amount of certain system resources, such as disk I/O, memory, or CPU.)
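A rough sketch of what co-location with apportioned resources looks like at the container level is below. The image name and resource values are illustrative, not a supported configuration; the --cpus and --memory flags map onto the cgroup CPU and memory controllers, which is what keeps co-located daemons from starving each other.

```shell
# Hypothetical example: two Ceph daemons co-located on one host,
# each pinned to its own CPU and RAM budget via cgroups.
docker run -d --name ceph-osd-0 \
  --cpus 2 --memory 4g \
  ceph/daemon osd

docker run -d --name ceph-mon-0 \
  --cpus 1 --memory 2g \
  ceph/daemon mon
```

If the OSD saturates its 2-CPU allowance during recovery, the monitor's CPU share is unaffected, which previously required separate hardware to guarantee.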
Ceph currently has eight different daemons. One of them is ceph-osd, the object storage daemon for the Ceph distributed file system. It is responsible for storing objects on a local file system and providing access to them over the network. Another is ceph-mds, the metadata server daemon for the Ceph distributed file system. One or more instances of ceph-mds collectively manage the file system namespace, coordinating access to the shared storage cluster.

The use of containers can also drive significantly better hardware utilization and lower costs. For example, encapsulating daemons in containers means that dedicated hosts are no longer needed for cluster monitoring (ceph-mon) or for gateways to object, NFS, and iSCSI storage.
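When daemons run as containers, an operator can also inspect each one directly from the host rather than logging into dedicated machines. A brief sketch, in which the monitor container name is hypothetical:

```shell
# One-shot snapshot of per-daemon CPU and memory consumption
# (--no-stream prints once instead of updating live):
docker stats --no-stream

# Query cluster health from inside a (hypothetically named)
# monitor container:
docker exec ceph-mon-0 ceph -s
```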
Containers and storage
The co-evolution of software-defined storage and containers highlights important ways in which the whole of today’s software landscape is more than the sum of specific projects.
Software-defined storage has a wide range of inherent benefits. It lets users choose their own hardware components and avoid being locked into expensive, proprietary hardware-based appliances. It offers a flexible scale-out architecture. It unifies different types of storage.
But those features are also important in how they interact with other important trends and technologies. The characteristics of a scale-out open source storage platform make it a particularly good fit with an open source scale-out cloud platform like OpenStack. Many modern workloads, such as data analytics, wouldn’t be practical at scale without economical storage that could also scale out and scale big.
Which brings us back to containers.
The containerization of applications is happening so rapidly in part because the environment is right for it. Containers are a great match with microservices and, more broadly, with lightweight immutable services, for example. Their popularity has also grown because they’ve helped spawn a wide range of other cloud-native projects, such as Kubernetes, and have helped drive standards through organizations like the Open Container Initiative (OCI) that make containers even more useful.
Storage is part of that equation too. Software-defined storage technologies can provide the persistent storage needed by many containerized applications across hybrid environments. And, as we’ve seen with today’s Red Hat Ceph Storage and the recent Container-Native Storage announcements, storage platforms can also use containers to make their own deployments simpler, more flexible, and even less costly.