Red Hat Ceph Storage 3.2 is now available! The big news with this release is full support for the BlueStore Ceph backend, offering significantly increased performance for both object and block applications.
BlueStore was first available as a Technology Preview in Red Hat Ceph Storage 3.1, and Red Hat has since conducted extensive performance tuning and testing to verify that it is now ready for use in production environments. With the 3.2 release, Red Hat Ceph Storage has attributes that make it suitable for a wide range of use cases and workloads, including:
- Data analytics: As a data lake, Red Hat Ceph Storage uses object storage to deliver massive scalability and high availability to support demanding multitenant analytics workloads. Disparate analytics clusters can be consolidated to reduce total cost of ownership, lower the administrative burden, and increase service levels. BlueStore helps improve performance, while support for erasure coding helps lower the storage cost of data protection compared with simple replication (for example, a 4+2 erasure-coded pool carries 1.5x raw-capacity overhead versus 3x for three-way replication).
- Hybrid cloud applications: Red Hat Ceph Storage is ideal for on-premises storage clouds. Because Red Hat Ceph Storage supports the Amazon Web Services (AWS) Simple Storage Service (S3) interface, applications can access their storage through the same API whether they run in public or private clouds (see the sketch after this list).
- OpenStack applications: Red Hat Ceph Storage is a very popular storage choice for OpenStack, and version 3.2 can offer improved performance for OpenStack deployments, including Red Hat OpenStack Platform. Erasure coding for RADOS Block Device (RBD) is available as a Technology Preview in this release.
- Backup target: A growing number of software vendors have certified their backup applications with Red Hat Ceph Storage as a backup storage target:
- Veritas NetBackup for Symantec OpenStorage (OST) cloud backup - versions 7.7 and 8.0
- Rubrik Cloud Data Management (CDM) - versions 3.2 and later
- NetApp AltaVault - versions 4.3.2 and 4.4
- Trilio TrilioVault - version 3.0
- Veeam Backup & Replication - version 9.x
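The S3 compatibility called out above can be exercised with any standard S3 client. Here is a minimal sketch using Python and boto3 against a RADOS Gateway endpoint; the endpoint URL, credentials, and bucket name are placeholders, not values tied to this release.

```python
import boto3

# Placeholder RADOS Gateway endpoint and credentials; substitute your own.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",  # Ceph RGW instead of AWS
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same calls work unchanged against AWS S3 or a Ceph cluster.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt",
              Body=b"stored on Ceph via the S3 API")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```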
BlueStore performance
BlueStore is all about performance. For hard disk drive (HDD) based clusters, BlueStore architecturally removes the double-write penalty incurred by the traditional FileStore backend. Additionally, BlueStore provides significant performance enhancements in configurations that use all solid-state drives (SSDs) or Non-Volatile Memory Express (NVMe) drives.
The architectural shift to a BlueStore backend has already shown performance improvements on community Ceph distributions. Testing by Micron in 2018 demonstrated up to 2x increases in performance with BlueStore over the traditional FileStore backend.
Micron conducted BlueStore vs. FileStore object testing and reported significant performance improvements in terms of both improved throughput and reduced latency.
4MB objects
- 100% writes: 88% increase in throughput, 47% decrease in average latency
- 70%/30% reads/writes: 64% increase in throughput, 40% decrease in average latency
Micron also conducted BlueStore vs. FileStore block testing and reported higher IOPS and lower latency.
4K random blocks
- 100% writes: 18% higher I/O operations per second (IOPS), 5% lower average latency, and as much as 70%+ lower 99.999% tail latency
- 70%/30% reads/writes: 14% higher IOPS, 80%+ lower read tail latency, and 70%+ lower write tail latency
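To get a rough feel for object throughput on your own cluster, the standard rados bench tool can be driven from a short script. The sketch below is not Micron's test methodology; the pool name, runtime, and concurrency are placeholder values, and the rados CLI plus cluster credentials are assumed to be available.

```python
import subprocess

POOL = "benchpool"  # placeholder pool name

# Write 4 MB objects for 60 seconds with 16 concurrent writers,
# keeping the objects so a read pass can follow.
subprocess.run(
    ["rados", "-p", POOL, "bench", "60", "write",
     "-b", "4194304", "-t", "16", "--no-cleanup"],
    check=True,
)

# Sequential read pass over the objects written above.
subprocess.run(
    ["rados", "-p", POOL, "bench", "60", "seq", "-t", "16"],
    check=True,
)
```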
Upgrades and new installs
Importantly, both the BlueStore and FileStore backends coexist in Red Hat Ceph Storage 3.2. Existing Red Hat Ceph Storage 2.5 and 3.1 clusters retain the FileStore backend when upgrading to version 3.2. Newly created Red Hat Ceph Storage clusters default to the BlueStore backend. Those wishing to upgrade existing clusters to the BlueStore backend should contact Red Hat Support.
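After an upgrade, it can be useful to confirm which backend each OSD is actually running. A minimal sketch, assuming the ceph CLI and an admin keyring are available on the host:

```python
import json
import subprocess

# Query metadata for every OSD and report its object store backend
# (filestore or bluestore).
raw = subprocess.check_output(["ceph", "osd", "metadata", "--format", "json"])
for osd in json.loads(raw):
    print(f"osd.{osd['id']}: {osd.get('osd_objectstore', 'unknown')}")
```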
For more information on how Red Hat Ceph Storage can tackle your toughest data storage challenges, please visit our Ceph product page.