
We talk to many shops that are adopting, or have adopted, DevOps practices. For many companies, staying ahead of disruption means not only delivering new applications but also optimizing (or changing!) current processes and systems. They are moving to team-based cultures, working in smaller increments, and automating their environments to increase the velocity of software development and deployment.

A common storage underpinning that lets developers provision and manage storage for their applications on a self-service basis means teams have less friction in developing and shipping those applications.

For the operations team, a consistent platform for development, testing, and production deployments is a big win. And self-service for developers means operations teams spend less time on provisioning services and more time on higher-level work that benefits the business.

Thus, persistent storage for containers remains a hot topic. While containers do a great job of packaging an application and its logic, they offer no built-in way to preserve application data beyond the life of an individual container.

Why won’t traditional storage solutions work for containers?

A container may run for days, hours, minutes, or even just seconds depending on how the application is architected. Containers are disposable, but your data isn’t.

Ephemeral (or local) storage is not enough, because stateful applications need their data to remain available beyond the life of any individual container. They also need the underlying storage layer to provide the same enterprise features (scalability, multi-protocol support, mirroring, stretched clusters, and so on) that are available to applications deployed in, say, virtualized environments.
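
To make the difference concrete, here is a minimal Kubernetes sketch (the image name and mount path are hypothetical): a volume backed by a PersistentVolumeClaim keeps its data independent of any individual pod, whereas an emptyDir volume would be deleted along with the pod.

```yaml
# A claim for persistent storage; its lifecycle is independent of any pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
# A pod mounting that claim; if this pod is deleted and recreated,
# the data under /var/lib/data is still there. Swapping the volume for
# `emptyDir: {}` would make the same mount ephemeral.
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: registry.example.com/my-app:latest  # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```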

The need for these enterprise features is an important consideration, given that many environments run container hosts in virtual machines (VMs): you need to provide the persistent storage features required for virtualization as well as those required for containers. Providing persistent storage options to administrators is critical, since hypervisors have always allowed for persistent storage in one form or another.

One approach is to use traditional storage appliances that support legacy applications. This is a natural inclination and assumption, but… the wrong one.

Traditional storage appliances are based on older architectures and were not made for a container-based application world, nor do they generally offer the portability your applications need in today's hybrid cloud world. Some storage vendors offer additional software that acts as a go-between for these appliances and your container orchestration, but this approach still falls short because it is undermined by the same appliance limitations. It also means that storage for containers is provisioned separately from your container orchestration layer (different teams, different UIs, different tools).

How does software-defined storage work for containers?

There is a better way! With software-defined storage, storage containers running storage software can co-reside with compute containers and serve out storage to those compute containers from hosts that have local or direct-attached storage. These storage containers are deployed and provisioned just like compute containers, using the same orchestration layer developers have already adopted in house (such as the Kubernetes-based Red Hat OpenShift Container Platform). In this deployment scenario, storage services are provided by containerized storage software (like Red Hat OpenShift Container Storage), which more easily and seamlessly pools local or direct-attached storage on the hosts and exposes it to containerized applications.
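
As a rough sketch of what this looks like in practice (the StorageClass name, provisioner, and endpoint below are illustrative assumptions, not the product's exact configuration): an administrator defines a StorageClass backed by the containerized Gluster storage, and developers then self-serve persistent volumes by creating claims against it through the same orchestration layer.

```yaml
# Illustrative StorageClass for dynamic provisioning from containerized
# Gluster storage; the provisioner and its parameters depend on the cluster setup.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: container-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"  # hypothetical provisioning endpoint
---
# A developer's self-service request: the orchestration layer dynamically
# provisions a volume from the pooled storage to satisfy this claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  storageClassName: container-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```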

Red Hat OpenShift Container Storage is built with Red Hat Gluster Storage and offers flexible, cost-effective, and developer-friendly storage for containers. It helps organizations standardize storage across multiple environments and integrates easily with Red Hat OpenShift to deliver a persistent storage layer for containerized applications that require long-term stateful storage. Enterprises can benefit from a simple, integrated solution including the container platform, registry, application development environment, and storage, all in one and supported by a single vendor.

To get a more intimate understanding of how Red Hat OpenShift and OpenShift Container Storage work together, take this free test drive.

