Today Red Hat announced Red Hat Ceph Storage 4, a major release that brings a number of improvements in scalability, monitoring, management, and security. We've also designed Ceph Storage 4 to be easier to get started with. Let's tour some of its most interesting features.

What's Ceph?

Red Hat Ceph Storage is an open, massively scalable, software-defined storage platform for petabyte-scale deployments. Intended for modern workloads like data analytics, AI/ML, cloud infrastructure, media repositories, backup and restore systems, and more, it’s engineered to be flexible and help solve the problems you have today while being able to meet new challenges down the road.

If your organization needs a cost-effective, scalable, and versatile storage option that can run on commodity hardware, you're probably already thinking about (or using!) Ceph. Here are some of the things we've added in our most recent release.

Fortified front end

You can use Red Hat Ceph Storage to set up Amazon Simple Storage Service (S3)-compatible object storage on your own hardware and interact with it through Ceph's HTTP gateway using the Amazon S3 or OpenStack Swift API.
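
To make that concrete, here's a minimal sketch in Python using the boto3 library (neither of which ships with Ceph) that talks to a gateway endpoint over the S3 API. The endpoint URL, credentials, and bucket name are placeholders; substitute the values from your own gateway and S3 user.

    # Minimal sketch: use the S3 API against a Ceph Object Gateway (RGW) endpoint.
    # The endpoint, credentials, and bucket name below are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",   # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",               # your RGW S3 user's access key
        aws_secret_access_key="SECRET_KEY",           # your RGW S3 user's secret key
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"Hello, Ceph!")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())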

In this release, we're introducing Beast, an HTTP front end built on Boost.Asio, as the default web front end for the Ceph Object Gateway (RGW). Beast can deliver better aggregate bandwidth with fewer resources than before, and it enables RGW to serve more connections with a smaller memory footprint per thread.

In version 4, we also added support for S3-compatible storage classes that you can use to better control data placement for applications that need it. Whether you're ingesting massive amounts of data from Internet of Things (IoT) sources, creating machine learning models for artificial intelligence, or archiving data for infrequent retrieval or data governance, Ceph storage classes can help you manage storage costs while still achieving the performance levels you need.
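
If your gateway administrator has defined additional storage classes in the zone's placement configuration, you can target them from any S3 client by setting the standard StorageClass parameter. Here's a sketch along the same lines as above; the class name "COLD", the bucket, and the connection details are hypothetical examples, not defaults.

    # Minimal sketch: write an object into a named storage class via the S3 API.
    # Assumes an administrator has defined a "COLD" storage class for this zone;
    # the class name, bucket, and connection details are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",   # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Route this object to the "COLD" class, e.g. an HDD-backed pool for archival data.
    s3.put_object(
        Bucket="iot-archive",
        Key="sensor-dump-2020-01.json",
        Body=b"{}",
        StorageClass="COLD",
    )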

Boosted back end

We didn't just improve the solution's front end. We've also introduced BlueStore as the default back end for Ceph Object Storage Daemons (OSDs) in Red Hat Ceph Storage 4 and published internal benchmark results that demonstrate markedly improved performance. BlueStore lets the OSDs write directly to disk, and it provides a more streamlined metadata store through RocksDB and a write-ahead log, which greatly improves both bandwidth and I/O throughput. As a result, in internal testing we've demonstrated more than twice the object write performance, with lower latency, compared to what we were delivering just a year ago.

Simplified operations

If you need to do something more than once, consider automating it. That's the mantra of many a system administrator, and we've taken that to heart. Red Hat Ceph Storage 4 leans hard on Ansible to make the solution easier to install and manage. 

How easy, you ask? In version 4, we added the ability to install a Ceph cluster using the Cockpit web-based interface. Under the hood, we're using the Ceph installation Ansible playbooks.

The new Red Hat Ceph Storage 4 dashboard offers new monitoring functionality, and administrators can use it to manage storage volumes, create users, monitor performance, and even initiate cluster upgrades. You still have all the command-line tools you know and love, but version 4 lets you handle many of the same tasks through the web interface.

Storage specialists can delegate some operations to others on the administration team and empower junior administrators, which helps develop skills and increases operational efficiency. Teams can grant appropriate permissions to those who handle day-to-day operations while still retaining control and oversight where needed.

We're also introducing a new "noisy neighbor" monitoring feature that can help you visualize IOPS, throughput, and latency outliers so you can identify and mitigate issues before they become problems.

Enhanced utilities

You're probably already familiar with the ceph-medic utility, which detects issues that might be hampering Ceph operations. For bare-metal installs, the utility uses an Ansible inventory file to identify hosts, then uses non-interactive SSH to reach out to the Ceph hosts and get a read on their installation and configuration.

In Ceph Storage 4, ceph-medic can connect to a containerized cluster using several deployment types (docker, podman, kubernetes, or openshift). If you're using the openshift or kubernetes deployment type, you don't even need the inventory file; the host lists are generated dynamically and grouped by daemon type.

Improved Ceph File System

With Ceph Storage 4, we see a few new features that we think will be useful for storage administrators. 

Prior to Ceph Storage 4, you'd need to dive into the Metadata Server (MDS) logs to see the status of ongoing Ceph File System (CephFS) scrubs. Now admins can turn to ceph -w (which displays a running summary of the status of a Ceph cluster), and they'll see information on the status of active scrubs.
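
If you'd rather watch for scrub activity programmatically, here's a minimal Python sketch that follows the output of ceph -w and prints only scrub-related lines. It assumes the ceph CLI is installed on the host and can reach the cluster with an admin keyring; the simple string match is just a convenience filter, not an official interface.

    # Minimal sketch: follow "ceph -w" and surface scrub-related cluster messages.
    # Assumes the ceph CLI is on PATH and has credentials to reach the cluster.
    import subprocess

    proc = subprocess.Popen(["ceph", "-w"], stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:
            if "scrub" in line.lower():
                print(line, end="")
    except KeyboardInterrupt:
        proc.terminate()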

And more

Ceph Storage 4 has a number of other improvements we're very proud of, including a smaller starting point for Ceph installs. You just need three nodes to get started with Ceph Storage 4, but for organizations with big storage needs we can scale to exabytes of data and over one billion objects (based on internal testing). 

We've also improved encryption and lowered the administrative burden with Ceph Storage 4. We have a standard support lifecycle of three years, plus an optional two-year Extended Life Cycle Support (ELS) phase for organizations that need it.

Get to the heart of your data

In a world where data is exploding all around us, choosing the right storage platform is increasingly important. Data is at the heart of your applications, and our efforts in this release are aimed at making that data more accessible to you and to your applications. Customer information, videos, data that feeds AI/ML applications: the list is almost endless, and we rarely see the need for storage shrink in our environments.

So we're happy to be able to take the wraps off Red Hat Ceph Storage 4 and start turning it loose on customer workloads. For more on Ceph Storage 4, see the release notes.