**Distributed scalability**

| Component | Capabilities |
| --- | --- |
| Scale-out architecture | Grow a cluster to thousands of nodes; replace failed nodes and conduct rolling hardware upgrades while data is live |
| Object store scalability | Scalability to more than 10 billion objects served over the Amazon Web Services (AWS) S3 and OpenStack Swift protocols |
| Self-healing and rebalancing | Peer-to-peer architecture balances data distribution throughout the cluster nodes and handles failures without interruption, automatically recovering to the predefined data resiliency level |
| Rolling software upgrades | Clusters are upgraded in phases with no downtime, so data remains available to applications |
**API and protocol support**

| Component | Capabilities |
| --- | --- |
| Object, block, and file storage | Cloud integration with the object protocols used by AWS S3 and OpenStack Swift; block storage integrated with OpenStack, Linux®, and the Kernel-based Virtual Machine (KVM) hypervisor; CephFS, a highly available, scale-out shared filesystem for file storage, with support for Network File System (NFS) v4 and the native Ceph protocol via kernel and user-space (FUSE) drivers |
| REST management API | Manage all cluster and object storage functions programmatically for automation and consistency, eliminating manual provisioning |
| Multiprotocol with NFS, iSCSI, and AWS S3 support | Build a common storage platform for multiple workloads and applications based on industry-standard storage protocols |
| New Ceph filesystem capabilities | New access options through NFS, enhanced monitoring tools, disaster recovery support, and data reduction with erasure coding |
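The REST management API can be driven from any HTTP client. The sketch below, using only the Python standard library, shows the token-then-Bearer pattern; the endpoint paths, payload shape, and `Accept` header are assumptions modeled on the Ceph Dashboard REST API and may need adjusting for your deployment.

```python
import json
import urllib.request

# Sketch of driving a Ceph management REST API programmatically.
# Endpoint paths and payload shape are assumptions modeled on the
# Ceph Dashboard REST API; adjust them to your deployment.

def auth_request(base_url: str, username: str, password: str) -> urllib.request.Request:
    """Build the POST request that exchanges credentials for a token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{base_url}/api/auth",
        data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "application/vnd.ceph.api.v1.0+json"},
        method="POST",
    )

def health_request(base_url: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET for cluster health."""
    return urllib.request.Request(
        f"{base_url}/api/health/minimal",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.ceph.api.v1.0+json"},
    )

# Requests are only constructed here; sending them requires a live cluster.
req = auth_request("https://mgr.example.com:8443", "admin", "secret")
print(req.get_method(), req.full_url)
```

Because requests are plain data, the same functions slot into automation tooling or test harnesses without a live cluster.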
**Ease of management**

| Component | Capabilities |
| --- | --- |
| New manageability features | Integrated (cephadm) control plane, installation user interface, stable management API, failed-drive replacement workflows, staggered upgrade policies, and an object multisite monitoring dashboard |
| Automation | Integrated Ceph-aware control plane, based on cephadm and the Ceph Manager orchestration module, covering Day 1 and Day 2 operations, including simplified device replacement and cluster expansion; cluster definition files capture the entire configuration in a single exported file, the REST management API offers further automation possibilities, and the cephadm-ansible wrapper enables management with Ansible |
| Management and monitoring | Advanced Ceph monitoring and diagnostic information integrated into the built-in monitoring dashboard, with graphical visualization of the entire cluster, including cluster-wide and per-node usage and performance statistics; operator-friendly shell interfaces for management and monitoring, including top-style in-terminal visualization |
**Security**

| Component | Capabilities |
| --- | --- |
| Authentication and authorization | Integration with Microsoft Active Directory, Lightweight Directory Access Protocol (LDAP), AWS Auth v4, and Keystone v3 |
| Policies | Limit access at the pool, user, bucket, or data level; orchestration of secure role-based access control (RBAC) policies |
| WORM governance | AWS S3 object lock with read-only capability, storing objects using a write-once-read-many (WORM) model that prevents objects from being deleted or overwritten |
| FIPS 140-2 support | Validated cryptographic modules when running on certified Red Hat Enterprise Linux versions (currently 8.2) |
| External key manager integration | Key management service integration with HashiCorp Vault, IBM Security Guardium Key Lifecycle Manager (SGKLM), and OpenStack Barbican, with OpenID Connect (OIDC) identity support; compatible with any KMIP-compliant key management infrastructure |
| Encryption | Cluster-wide, at-rest, or user-managed inline object encryption; both operator-managed and user-managed encryption keys are supported |
| Red Hat Enterprise Linux | Mature operating system recognized for its strong security and backed by a robust open source community; Red Hat Enterprise Linux subscriptions are included at no extra charge |
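For user-managed encryption keys over the S3 protocol, a client supplies its own 256-bit key on each request via SSE-C headers. The header names below come from the S3 protocol specification; the key value is a throwaway example, and this is a sketch of header construction only, not a full client.

```python
import base64
import hashlib

# Sketch: building AWS S3 SSE-C headers for a user-managed encryption key.
# With SSE-C the client supplies the 256-bit AES key on every request and
# the object store encrypts/decrypts transparently. The key below is a
# throwaway example value, never use a constant key in practice.

def sse_c_headers(key: bytes) -> dict:
    if len(key) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) key")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(key).digest()).decode(),
    }

headers = sse_c_headers(b"\x01" * 32)
print(headers["x-amz-server-side-encryption-customer-algorithm"])  # AES256
```

The MD5 header lets the server verify the key arrived intact before using it; the key itself is never stored server-side.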
**Reliability and availability**

| Component | Capabilities |
| --- | --- |
| Highly available and highly resilient | Highly available and resilient out of the box, with default configurations able to withstand the loss of multiple nodes (or racks) without compromising service availability or data safety |
| Striping, erasure coding, or replication across nodes | Full range of data protection and reduction options, including 2x replication (replica 2), 3x replication (replica 3), erasure coding for object, block, and file, inline object compression, and backend compression |
| Dynamic volume sizing | Expand Ceph block devices with no downtime |
| Storage policies | Configurable data placement policies that reflect service-level agreements (SLAs), performance requirements, and failure domains using the Controlled Replication Under Scalable Hashing (CRUSH) algorithm |
| Snapshots | Snapshots of individual block devices with no downtime or performance impact |
| Copy-on-write cloning | Instant provisioning of tens or hundreds of virtual machine instances from the same image with zero wait time |
| Support services | SLA-backed technical support with streamlined product defect resolution and hot-fix patch access; consulting, service, and training options |
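The key idea behind CRUSH-style placement is that replica targets are computed from a hash of the object name and the cluster topology, so any client can locate data without a central lookup table, and replicas land in distinct failure domains. The sketch below illustrates that idea only; the topology and the straw-draw scoring are illustrative assumptions, not the actual CRUSH straw2 algorithm.

```python
import hashlib

# Minimal sketch of CRUSH-style placement: a deterministic, hash-driven
# choice of replica targets across failure domains (hosts), with no
# central lookup table. Topology and scoring are illustrative assumptions.

TOPOLOGY = {            # hypothetical failure domain -> OSDs it contains
    "host-a": ["osd.0", "osd.1"],
    "host-b": ["osd.2", "osd.3"],
    "host-c": ["osd.4", "osd.5"],
}

def draw(item: str, obj: str) -> int:
    """Deterministic pseudo-random 'straw length' for (item, object)."""
    return int.from_bytes(hashlib.sha256(f"{item}/{obj}".encode()).digest()[:8], "big")

def place(obj: str, replicas: int = 3) -> list[str]:
    """Pick one OSD in each of `replicas` distinct hosts; highest straw wins."""
    hosts = sorted(TOPOLOGY, key=lambda h: draw(h, obj), reverse=True)[:replicas]
    return [max(TOPOLOGY[h], key=lambda o: draw(o, obj)) for h in hosts]

print(place("rbd_data.1234"))   # the same object always maps to the same OSDs
```

Because placement is a pure function of the name and the topology, adding or removing hardware changes the map everywhere at once, which is what makes automatic rebalancing possible.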
**Performance**

| Component | Capabilities |
| --- | --- |
| Increased virtual machine performance | Better performance for virtual machines, with faster block performance than previous releases, librbd data path optimization, and CephFS ephemeral pinning |
| Updated cache architecture | New read-only large-object cache offloads object reads from the cluster, with an improved in-memory write-around cache; optional Intel Optane low-latency write cache (tech preview) |
| Improved performance | Random object read performance approaching 80 GiB/s sustained throughput with hard disk drives (HDDs); better block performance with a shortened client input/output (I/O) path |
| Client-cluster data path | Clients share their I/O load across the entire cluster |
| In-memory client-side caching | Enhanced client I/O using a hypervisor cache |
| Write-back cache | Persistent, fault-tolerant write-back cache targeting Intel Optane persistent memory and SSD devices greatly reduces latency and improves performance at low I/O queue depths |
| Server-side journaling | Accelerated data write performance with serialized writes |
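The latency benefit of a write-back cache comes from acknowledging writes once they land in a fast local tier and sending them to the slower backing store later, in larger batches. The purely illustrative in-memory model below shows that effect; it is a sketch of the general technique, not Ceph's cache implementation.

```python
# Illustrative sketch of write-back caching: writes are acknowledged at
# fast-tier latency and reach the slow backing store in batched flushes.
# Plain in-memory model; not the Ceph persistent write-back cache itself.

class WriteBackCache:
    def __init__(self, backend: dict, flush_at: int = 4):
        self.backend = backend          # stands in for the slow cluster tier
        self.dirty: dict = {}           # stands in for the fast cache device
        self.flush_at = flush_at
        self.backend_writes = 0         # count of slow-tier write batches

    def write(self, key, value):
        self.dirty[key] = value         # acknowledged here: fast-tier latency
        if len(self.dirty) >= self.flush_at:
            self.flush()

    def flush(self):
        if self.dirty:
            self.backend.update(self.dirty)   # one batched slow-tier write
            self.backend_writes += 1
            self.dirty.clear()

store: dict = {}
cache = WriteBackCache(store)
for i in range(8):
    cache.write(f"blk{i}", i)
cache.flush()
print(cache.backend_writes)   # 2 batched flushes instead of 8 round trips
```

This is also why the benefit is largest at low queue depths: each individual write avoids a full round trip to the slow tier.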
**Geo replication support and disaster recovery**

| Component | Capabilities |
| --- | --- |
| Global clusters | Global namespace for object users, with read and write affinity to local clusters, reflecting the zone and region topology of AWS S3 |
| Multisite | Support for dynamic bucket resharding and mirroring in multisite operations, delivering consistent data and bucket synchronization |
| Disaster recovery | Object multisite replication suitable for disaster recovery, data distribution, or archiving; block and file snapshot replication across multiple clusters for disaster recovery; streaming block replication for zero recovery point objective (RPO=0) configurations |
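Multisite synchronization, at its core, converges per-object state across zones by comparing version markers and copying whichever side is newer. The toy model below illustrates that convergence only; the data model is an illustrative assumption and bears no relation to the actual RGW multisite sync protocol.

```python
# Toy sketch of multisite bucket synchronization: each zone tracks
# (version, data) per object, and a sync pass converges both zones on
# the newest version. Illustrative only; not the RGW sync protocol.

def sync(zone_a: dict, zone_b: dict) -> None:
    """Bidirectionally converge two zones; entries are (version, data)."""
    for key in set(zone_a) | set(zone_b):
        va = zone_a.get(key, (0, None))
        vb = zone_b.get(key, (0, None))
        newest = max(va, vb)            # higher version wins
        zone_a[key] = zone_b[key] = newest

us = {"obj1": (3, "v3-data"), "obj2": (1, "old")}
eu = {"obj2": (2, "new"), "obj3": (1, "eu-only")}
sync(us, eu)
print(us == eu)   # True: both zones hold the same, newest objects
```

Version-based convergence is what lets either zone accept writes while still reaching a consistent state after synchronization.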
**Efficiency and cost-effectiveness**

| Component | Capabilities |
| --- | --- |
| Containerized storage daemons | Reliable performance, better utilization of cluster resources, and a decreased hardware footprint, with the ability to colocate Ceph daemons on the same machine, significantly improving total cost of ownership for small clusters |
| Industry-standard hardware | Optimized servers and storage technologies from Red Hat's hardware partners, tailored to each customer's needs and diverse workloads |
| Improved resource consumption for small objects | Backend allocation size reduced four-fold for solid-state drives (SSDs) and sixteen-fold for hard disk drives (HDDs), significantly reducing overhead for small files under 64 KB |
| Faster erasure coding recovery | Erasure coding recovery with K shards (rather than the K+1 shards required previously) improves data resiliency when recovering erasure-coded pools after a hardware failure |
| Thin provisioning | Sparse block images support over-provisioning of storage and immediate virtual machine or container instance launch |
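The efficiency argument for erasure coding over replication can be seen in the smallest possible example: with k=2 data shards and a single XOR parity shard (m=1), any one lost shard is rebuilt from the two survivors, so a device failure is survivable while storing 1.5x the payload instead of 3x for replica 3. This is a minimal sketch of the principle, not Ceph's erasure coding plugins, which use more general codes.

```python
# Minimal sketch of erasure-coded recovery with one XOR parity shard
# (k=2, m=1): any single lost shard is rebuilt by XOR-ing the survivors.
# Illustrative only; Ceph's erasure code plugins use more general codes.

def encode(data: bytes) -> list:
    """Split data into two shards and append an XOR parity shard."""
    half = (len(data) + 1) // 2
    k1, k2 = data[:half], data[half:].ljust(half, b"\0")
    parity = bytes(a ^ b for a, b in zip(k1, k2))
    return [k1, k2, parity]

def recover(shards: list, lost: int) -> bytes:
    """Rebuild the shard at index `lost` by XOR-ing the two survivors."""
    a, b = (s for i, s in enumerate(shards) if i != lost)
    return bytes(x ^ y for x, y in zip(a, b))

shards = encode(b"hello ceph!!")
assert recover(shards, 1) == shards[1]   # data shard rebuilt from k1 + parity
```

Recovery reads only the surviving shards it actually needs, which is the same property behind the faster K-shard recovery noted above.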