
By Douglas Fuller, Red Hat Ceph Storage Engineering

If you missed last week’s huge announcement about Red Hat Ceph Storage 3, you can find details here. To quickly get you up to speed, though, the big news in this release is support for a wide variety of storage needs in OpenStack, easier migration from legacy storage platforms, and deployment of enterprise storage in Linux containers.

CephFS is here!

One of the highlights of the Red Hat Ceph Storage 3 announcement was production support for CephFS. This delivers a POSIX-compliant shared file system layered on top of massively scalable object storage. Client support is available in the Red Hat Enterprise Linux 7.4 kernel and via FUSE. CephFS leverages Ceph’s RADOS object store for data scalability as well as a natively clustered metadata server (MDS) for metadata scalability, high availability, and performance.
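To give a feel for the POSIX interface from an application’s point of view, here is a minimal sketch using the libcephfs Python bindings (the python-cephfs package). It assumes a reachable cluster, the default /etc/ceph/ceph.conf, and a client keyring with CephFS capabilities; exact binding signatures can vary slightly between releases.

```python
import cephfs  # libcephfs Python bindings (python-cephfs package)

# Connect using the cluster configuration and the default client credentials.
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()  # attach to the root of the file system

# Ordinary POSIX-style operations: file data lands in RADOS objects,
# while the MDS cluster handles directory and inode metadata.
fs.mkdir('/demo', 0o755)
fd = fs.open('/demo/hello.txt', 'w', 0o644)   # 'w': create/truncate for writing
fs.write(fd, b'hello from CephFS\n', 0)
fs.close(fd)

fs.unmount()
fs.shutdown()
```

The same directory tree is what kernel and FUSE clients see when they mount the file system on Red Hat Enterprise Linux.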

One cluster to do it all

Red Hat Ceph Storage uses the CRUSH structured data distribution scheme, enabling users to deploy a highly scalable and reliable file system using industry-standard, commodity hardware. Expensive, custom-engineered RAID controllers are no longer necessary. Expanding a CephFS deployment is as easy as expanding a Ceph cluster—CRUSH smoothly manages cluster changes, including expansions with new or different hardware.

Have a hybrid storage cluster with SSDs, HDDs, and NVMe devices? CRUSH can divide your storage workload across any and all devices for maximum performance where you need it and maximum capacity at commodity cost where you don’t. This allows disparate workloads—such as scratch, home, or archive data—to coexist in the same cluster using different or overlapping hardware as needed.
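To illustrate steering one workload onto a specific device class, here is a hedged sketch that issues monitor commands through the librados Python bindings. The pool and rule names are invented for the example, and the commands assume a Luminous-based release such as Red Hat Ceph Storage 3.

```python
import json
import rados  # librados Python bindings (python-rados package)

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon_cmd(**kwargs):
    """Send a JSON-formatted command to the monitors and raise on failure."""
    ret, out, errs = cluster.mon_command(json.dumps(kwargs), b'')
    if ret != 0:
        raise RuntimeError(errs)
    return out

# A CRUSH rule that keeps replicas on SSD-class devices, one copy per host.
mon_cmd(prefix='osd crush rule create-replicated',
        name='fast-ssd', root='default', type='host', **{'class': 'ssd'})

# Point a latency-sensitive pool (here, a hypothetical 'scratch' pool) at that
# rule; bulk or archive pools can keep using HDD-backed rules in the same cluster.
mon_cmd(prefix='osd pool set', pool='scratch', var='crush_rule', val='fast-ssd')

cluster.shutdown()
```

The same operations are available from the ceph CLI; the point is that placement policy is a property of the pool, not of separate storage silos.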

In addition, CephFS’s MDS may be dynamically provisioned and resized online to maximize performance and scalability. For metadata-intensive workloads, the Ceph MDS cluster can repartition its workload, either statically or dynamically, online in response to demand. It’s also fault-tolerant by design, with no need for passive standby or expensive and complex “Shoot the Other Node in the Head” (STONITH) configurations to maintain constant availability.
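To make “resized online” concrete, growing the number of active MDS ranks is a single monitor command. This sketch reuses the mon_cmd helper from the previous example; the file system name cephfs is an assumption, and on Luminous-based releases multiple active daemons must first be allowed explicitly.

```python
# Reusing the mon_cmd helper and cluster connection from the sketch above.

# Luminous-era releases require opting in to multiple active MDS daemons.
mon_cmd(prefix='fs set', fs_name='cephfs', var='allow_multimds', val='true')

# Grow the active MDS cluster from one rank to two; standby daemons take over
# automatically if an active one fails, with no STONITH configuration involved.
mon_cmd(prefix='fs set', fs_name='cephfs', var='max_mds', val='2')
```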

Take the “cluster” out of cluster management

Red Hat Ceph Storage 3 deploys with Red Hat Ansible Automation, integrating smoothly into existing cluster management environments. Now you can deploy and manage both compute and storage using Ansible playbooks.

New in Red Hat Ceph Storage 3 is a REST API for cluster data and management tools. Monitoring tools are available out of the box to provide detailed health and performance data across your Ceph cluster.
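As a rough sketch of consuming that REST API from a script: upstream, the manager’s restful module serves HTTPS (port 8003 by default, typically with a self-signed certificate) and authenticates with API keys created with “ceph restful create-key”. The host name, key, and endpoint below are placeholders and may differ in your deployment, so treat them as assumptions rather than documentation.

```python
import requests

MGR_URL = 'https://mgr-host.example.com:8003'   # placeholder manager address
API_USER, API_KEY = 'monitoring', 'REPLACE_ME'  # key created with: ceph restful create-key monitoring

session = requests.Session()
session.auth = (API_USER, API_KEY)
session.verify = False  # self-signed certificate in this sketch

# '/osd' is one of the module's read-only endpoints; a live deployment
# lists the full set under '/doc'.
osds = session.get(MGR_URL + '/osd').json()
up = sum(1 for osd in osds if osd.get('up'))
print('%d/%d OSDs up' % (up, len(osds)))
```

Because it is plain HTTPS and JSON, the same data can feed existing dashboards or alerting pipelines without Ceph-specific tooling.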

A million uses and counting

Red Hat Ceph Storage offers great flexibility to customers. It can be deployed across a wide variety of storage applications, allowing enterprises to manage one unified system supporting block, file, and object interfaces. With the added flexibility of iSCSI support, users from heterogeneous environments—such as VMware and Windows—can leverage the power of the storage platform.

This flexibility is extremely attractive to organizations such as academic research institutions, many of which are participating in the SuperComputing17 conference in Denver this week. Their IT departments have the onerous task of supporting complicated workflows, often on shoestring budgets.

To learn more, check out this additional blog post, and join us at the Red Hat SC17 booth (1763) for presentations, swag, and more.

