Datasheet

Red Hat Ceph Storage: Unified storage for demanding workloads

Product overview

Organizations increasingly recognize the insights and opportunities that effective data management can bring to their businesses. Beyond simply accommodating the growing need for storage, data now offers an opportunity to disrupt existing competitive business models by enabling continuous innovation.

Red Hat® Ceph® Storage provides a robust and compelling data storage solution that can support your data, no matter the format or origin. As a self-healing, self-managing platform with no single point of failure, Red Hat Ceph Storage significantly lowers the cost of storing enterprise data and helps companies manage exponential data growth in an automated fashion. Red Hat Ceph Storage is optimized for large installations—efficiently scaling to multiple petabytes or greater. Unlike traditional network-attached storage (NAS) and storage area network (SAN) approaches, it does not become dramatically more expensive as a cluster grows. Red Hat Ceph Storage also supports increasingly popular containerized environments such as Red Hat OpenShift® Container Platform.

Red Hat Ceph Storage is suitable for a wide range of storage workloads, including:

  • Data analytics and artificial intelligence/machine learning (AI/ML). As a data lake, Red Hat Ceph Storage uses object storage to deliver massive scalability and high availability to support demanding multitenant analytics and AI/ML workloads.
  • Object storage-as-a-service. Red Hat Ceph Storage is ideal for implementing an object storage service, with proven scalability and performance for both small and large object storage.
  • Hybrid cloud applications. With support for the Amazon Web Services (AWS) Simple Storage Service (S3) interface, applications can access their storage with the same application programming interface (API) in public, private, or hybrid clouds, as shown in the sketch after this list.
  • OpenStack® applications. Red Hat Ceph Storage offers scalability for OpenStack deployments, including Red Hat OpenStack Platform.
  • Backups. A growing list of software vendors have certified their backup applications with Red Hat Ceph Storage, making it easy to use a single storage technology to serve a wide variety of performance-optimized workloads.
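
For illustration, the following minimal sketch shows an application talking to Red Hat Ceph Storage through the standard AWS S3 API via the Ceph Object Gateway. It assumes the boto3 Python library; the endpoint URL, credentials, and bucket name are placeholders for your own deployment.

    import boto3

    # Connect to a Ceph Object Gateway (RADOS Gateway) endpoint using the
    # same S3 API an application would use against AWS itself.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # hypothetical RGW endpoint
        aws_access_key_id="ACCESS_KEY",              # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )

    # Create a bucket, store an object, and read it back.
    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"stored on Ceph")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())

Because the API is the same, this code runs unmodified against AWS S3 or an on-premises Ceph cluster; only the endpoint and credentials change.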

Red Hat Ceph Storage features and benefits

Exabyte scalability
  • Scale-out architecture: Grow a cluster to thousands of nodes without forklift upgrades or data migration projects
  • Automatic rebalancing: Peer-to-peer architecture that seamlessly handles failures and ensures data distribution throughout the cluster
  • Rolling software upgrades: Clusters are upgraded in phases with minimal or no downtime

API and protocol support
  • Object, block, and file storage: Seamless cloud integration with the object protocols used by AWS S3 and OpenStack Swift; block storage integrated with OpenStack, Linux®, and the Kernel-based Virtual Machine (KVM) hypervisor; CephFS, a highly available, scale-out shared filesystem for file storage; support for Network File System (NFS) v4 and native API protocols
  • RESTful APIs: Manage all cluster and object storage functions programmatically, gaining independence and speed by eliminating manual storage provisioning (see the sketch following this section)
  • Multiprotocol support with NFS, iSCSI, and object interfaces: Build a common storage platform for multiple workloads and applications
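
As a rough illustration of programmatic management, the sketch below lists and creates storage pools over HTTPS. It assumes a cluster with the ceph-mgr restful module enabled; the base URL, credentials, endpoint paths, and payload fields are illustrative assumptions rather than a documented contract, so consult your cluster's API reference for the exact interface.

    import requests

    # Hypothetical management endpoint and API key; paths and payload
    # fields below are illustrative, not authoritative.
    BASE = "https://mgr.example.com:8003"
    AUTH = ("admin", "API_KEY")  # placeholder credentials

    # List existing pools programmatically instead of provisioning by hand.
    resp = requests.get(f"{BASE}/pool", auth=AUTH, verify=False)
    print(resp.json())

    # Create a new pool for an application, again without manual steps.
    requests.post(f"{BASE}/pool", auth=AUTH, verify=False,
                  json={"name": "analytics", "pg_num": 64})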

Management and security
  • Automation: Red Hat Ansible® Automation Platform-based deployment
  • Management and monitoring: Advanced Ceph monitoring and diagnostic information through an integrated on-premises dashboard, with graphical visualization of the entire cluster or of single components, including cluster-wide and per-node usage and performance statistics
  • Authentication and authorization: Integration with Microsoft Active Directory, Lightweight Directory Access Protocol (LDAP), AWS Auth v4, and Keystone v3
  • Policies: Limit access at the pool, user, bucket, or data level
  • Encryption: Cluster-wide, at-rest, or user-managed inline object encryption
  • Red Hat Enterprise Linux: Deployment on a mature operating system recognized for its high security and backed by a collaborative open source community

Reliability and availability
  • Striping, erasure coding, or replication across nodes: Data durability, high availability, and high performance, with support for multisite deployments and disaster recovery
  • Dynamic block sizing: Expand or shrink Ceph block devices with no downtime (see the sketch following this section)
  • Storage policies: Configurable data placement that reflects service-level agreements (SLAs), performance requirements, and failure domains, using the Controlled Replication Under Scalable Hashing (CRUSH) algorithm
  • Snapshots: Snapshots of an entire pool or of individual block devices (also shown in the sketch below)
  • Support services: SLA-backed technical support with streamlined product and hot-fix patch access, plus consulting, service, and training options
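
To make dynamic block sizing and snapshots concrete, here is a minimal sketch using the librbd Python bindings (the rados and rbd modules shipped with Ceph). The pool name, image name, and sizes are placeholders; it assumes a reachable cluster and a readable /etc/ceph/ceph.conf.

    import rados
    import rbd

    # Connect to the cluster and open an I/O context on a pool.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")  # placeholder pool name

    # Create a thin-provisioned (sparse) 4 GiB block image.
    rbd.RBD().create(ioctx, "myimage", 4 * 1024**3)

    with rbd.Image(ioctx, "myimage") as image:
        # Grow the device online to 8 GiB; no downtime required.
        image.resize(8 * 1024**3)
        # Take a point-in-time snapshot of the block device.
        image.create_snap("before-upgrade")

    ioctx.close()
    cluster.shutdown()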

Performance
  • BlueStore backend: Up to 2x performance improvement over the traditional FileStore backend
  • Client-cluster data path: Clients share their input/output (I/O) load across the entire cluster
  • Copy-on-write cloning: Instant provisioning of tens or hundreds of virtual machine instances from the same image (see the sketch following this section)
  • In-memory client-side caching: Enhanced client I/O using a hypervisor cache
  • Server-side journaling: Accelerated data write performance through serialized writes
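
As an illustration of copy-on-write cloning, the sketch below provisions several instance images from one protected snapshot using the librbd Python bindings. Pool, image, and snapshot names are placeholders, and the parent image (for example, the "myimage" created in the earlier sketch) is assumed to exist.

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")  # placeholder pool name

    # Snapshot the golden image and protect the snapshot so it can be cloned.
    with rbd.Image(ioctx, "myimage") as image:
        image.create_snap("golden")
        image.protect_snap("golden")

    # Each clone is a copy-on-write child of the snapshot: creation is
    # near-instant and consumes no extra space until the clone diverges.
    for i in range(3):
        rbd.RBD().clone(ioctx, "myimage", "golden", ioctx, f"vm-{i:02d}")

    ioctx.close()
    cluster.shutdown()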

Geo replication support and disaster recovery
  • Zones and regions: Support for the zone and region object storage topologies used by AWS S3
  • Global clusters: A global namespace for object users, with read and write affinity to local clusters
  • Disaster recovery: Multisite replication for disaster recovery, data distribution, or archiving

Cost-effectiveness
  • Containerized storage daemons: Reliable performance, better utilization of cluster hardware, and a decreased configuration footprint, with the ability to co-locate daemons on the same machine
  • Industry-standard hardware: An optimal price and performance mix of servers and disks, tailored to each workload
  • Thin provisioning: Sparse block images enable over-provisioning of the cluster and immediate instance creation (demonstrated in the block device sketch earlier)
  • Heterogeneity: No need to replace older hardware as newer nodes are added
  • Striped erasure coding: A cost-effective data durability option

Technical requirements

  • Host operating system: Red Hat Enterprise Linux 7.5 or later, or Ubuntu 16.04
  • Hardware: Minimum 2-core 64-bit x86 processor per host; minimum 2GB of RAM per OSD process; 16GB of RAM per monitor host; minimum of 3 storage hosts, with 10 recommended