By Douglas Fuller, Red Hat Ceph Storage Engineering

If you missed last week’s huge announcement about Red Hat Ceph Storage 3, you can find details here. To quickly get you up to speed, though, the big news in this release centers on supporting a wide variety of storage needs in OpenStack, easing migration from legacy storage platforms, and deploying enterprise storage in Linux containers.

CephFS is here!

One of the highlights of the Red Hat Ceph Storage 3 announcement was production support for CephFS. This delivers a POSIX-compliant shared file system layered on top of massively scalable object storage. Client support is available in the Red Hat Enterprise Linux 7.4 kernel and via FUSE. CephFS leverages Ceph’s RADOS object store for data scalability as well as a natively clustered metadata server (MDS) for metadata scalability, high availability, and performance.
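
To make that concrete, here is a minimal sketch using the python-cephfs bindings that ship with Ceph; in production you would more typically mount the file system with the RHEL 7.4 kernel client (mount -t ceph) or ceph-fuse. The config path and file name below are illustrative assumptions.

```python
# Minimal sketch: talk to CephFS through the python-cephfs bindings.
# The config path and file name are illustrative assumptions.
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')  # load cluster config
fs.mount()                                             # connect to the MDS cluster

fd = fs.open('/hello.txt', 'w', 0o644)     # create a file at the FS root
fs.write(fd, b'hello from CephFS\n', 0)    # write at offset 0
fs.close(fd)

fs.unmount()
fs.shutdown()
```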

One cluster to do it all

Red Hat Ceph Storage uses the CRUSH (Controlled Replication Under Scalable Hashing) data distribution algorithm, enabling users to deploy a highly scalable and reliable file system using industry-standard, commodity hardware. Expensive, custom-engineered RAID controllers are no longer necessary. Expanding a CephFS deployment is as easy as expanding a Ceph cluster—CRUSH smoothly manages cluster changes, including expansions with new or different hardware.
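
To illustrate the idea (this is a toy, not Ceph’s actual code), here is a weighted rendezvous-hashing placement function, a close relative of the hashing CRUSH performs inside its straw2 buckets. The point: placement is computed rather than stored in a lookup table, and adding a device remaps only a proportional share of objects.

```python
# Toy sketch of weighted rendezvous hashing, illustrating the CRUSH
# idea: placement is computed, not stored, and expansion moves only
# a proportional share of data. Device names and weights are made up.
import hashlib
import math

def place(obj, devices):
    """Return the device with the highest weighted score for obj."""
    def score(dev, weight):
        digest = hashlib.sha1(f'{obj}:{dev}'.encode()).digest()
        h = (int.from_bytes(digest[:8], 'big') + 0.5) / 2**64  # h in (0, 1)
        return -weight / math.log(h)  # standard weighted rendezvous score
    return max(devices, key=lambda d: score(*d))[0]

devices = [('osd.0', 1.0), ('osd.1', 1.0), ('osd.2', 1.0)]
before = {o: place(o, devices) for o in range(10000)}

devices.append(('osd.3', 1.0))  # expand the cluster with one more device
after = {o: place(o, devices) for o in range(10000)}

moved = sum(before[o] != after[o] for o in before)
print(f'{moved / len(before):.1%} of objects moved')  # ~25%, the fair share
```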

Have a hybrid storage cluster with SSDs, HDDs, and NVMe devices? CRUSH can divide your storage workload across any and all devices for maximum performance where you need it and maximum capacity at commodity cost where you don’t. This allows disparate workloads—such as scratch, home, or archive data—to coexist in the same cluster using different or overlapping hardware as needed.
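
For example, with the device classes introduced in this release, you can steer a pool at SSDs only. The sketch below drives the monitors through the python-rados bindings’ mon_command interface; the rule and pool names ('fast', 'hotpool') are hypothetical, and the equivalent CLI is `ceph osd crush rule create-replicated fast default host ssd`.

```python
# Sketch: create an SSD-only CRUSH rule, then point a pool at it.
# Rule and pool names are hypothetical.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Create a replicated rule that only selects OSDs of device class "ssd".
cmd = json.dumps({'prefix': 'osd crush rule create-replicated',
                  'name': 'fast', 'root': 'default',
                  'type': 'host', 'class': 'ssd'})
ret, outbuf, outs = cluster.mon_command(cmd, b'')

# Point a pool backing hot data at the SSD-only rule.
cmd = json.dumps({'prefix': 'osd pool set', 'pool': 'hotpool',
                  'var': 'crush_rule', 'val': 'fast'})
ret, outbuf, outs = cluster.mon_command(cmd, b'')

cluster.shutdown()
```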

In addition, CephFS’s MDS may be dynamically provisioned and resized online to maximize performance and scalability. For metadata-intensive workloads, the Ceph MDS cluster can repartition its workload, either statically or dynamically, online in response to demand. It’s also fault-tolerant by design, with no need for passive standby or expensive and complex “Shoot the Other Node in the Head” (STONITH) configurations to maintain constant availability.
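
As a sketch of both modes: you can grow the active MDS cluster online, and statically pin a metadata-hot subtree to a given rank via the ceph.dir.pin extended attribute. The filesystem name and mount path below are illustrative, and the client must already have the file system mounted.

```python
# Sketch: scale out the MDS cluster, then pin one subtree to a rank.
# 'cephfs' and '/mnt/cephfs/home' are illustrative names.
import os
import subprocess

# Request two active MDS daemons; a standby is promoted to rank 1.
# (Some releases first require: ceph fs set cephfs allow_multimds true)
subprocess.run(['ceph', 'fs', 'set', 'cephfs', 'max_mds', '2'], check=True)

# Statically pin a hot directory tree to rank 1; all other metadata
# stays under the dynamic balancer.
os.setxattr('/mnt/cephfs/home', 'ceph.dir.pin', b'1')
```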

Take the “cluster” out of cluster management

Red Hat Ceph Storage 3 deploys with Red Hat Ansible Automation, integrating smoothly into existing cluster management environments. Now you can deploy and manage both compute and storage using Ansible playbooks.

New in Red Hat Ceph Storage 3 is a REST API for cluster data and management tools. Monitoring tools are available out of the box to provide detailed health and performance data across your Ceph cluster.
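
For instance, with the manager’s restful module enabled and an API key created via `ceph restful create-key`, a monitoring script can pull cluster state over HTTPS. The host, port, endpoint, and credentials below are placeholder assumptions, and payload field names vary by release, so inspect the returned JSON.

```python
# Sketch of a read-only call against the new REST API. Endpoint,
# host, port, and the API key are placeholder assumptions.
import requests

BASE = 'https://mgr-host:8003'   # default restful-module port
AUTH = ('admin', '<api-key>')    # from: ceph restful create-key admin

resp = requests.get(f'{BASE}/osd', auth=AUTH, verify=False)  # self-signed cert
resp.raise_for_status()
print(resp.json())  # per-OSD state; field names vary by release
```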

A million uses and counting

Red Hat Ceph Storage offers great flexibility to customers. It can be deployed across a wide variety of storage applications, allowing enterprises to manage one unified system supporting block, file, and object interfaces. With the added flexibility of iSCSI support, users from heterogeneous environments—such as VMware and Windows—can leverage the power of the storage platform.

This flexibility is extremely attractive to organizations such as academic research institutions, many of which are participating in the Supercomputing 2017 (SC17) conference in Denver this week. Their IT departments have the onerous task of supporting complicated workflows, often on shoestring budgets.

To learn more, check out this additional blog post, and join us at the Red Hat SC17 booth (1763) for presentations, swag, and more.

