The tenth OpenStack release, Juno, added ten new storage backends and improved testing of third-party storage systems. The Cinder block storage project continues to mature each cycle, exposing more and more enterprise cloud storage infrastructure functionality.
Here is a quick overview of some of these key features.
Simplifying OpenStack Disaster Recovery with Volume Replication
The Icehouse release introduced a new Cinder Backup API that can export and import backup service metadata, enabling "electronic tape shipping" style backup-export and backup-import workflows for recovering OpenStack cloud deployments. The next step for Disaster Recovery enablement in OpenStack is laying the foundation for volume replication support at the block level.
Starting with the OpenStack Juno release, Cinder has initial support for volume replication, which makes Cinder aware of replicas and allows the cloud administrator to define storage policies that enable replication.
With this new feature, Cinder storage backend drivers can expose different replication capabilities via volume-type convention policies, enabling replication operations such as failover and failback, as well as reversing the replication direction.
Using the new API, a volume is created with a replication extra spec so that it is allocated on a backend that supports replication.
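As a minimal sketch of how an administrator could expose such a policy with python-cinderclient: the credentials below are placeholders, and the extra-spec key is only illustrative, since the exact key a driver honours is backend-specific.

```python
# Minimal sketch (python-cinderclient, v2 API): create a volume type whose
# extra spec marks it as replicated, then create a volume of that type.
from cinderclient import client

cinder = client.Client('2', 'admin', 'secret', 'demo',
                       'http://controller:5000/v2.0')  # placeholder credentials

# Define a volume type that carries a replication policy via extra specs.
replicated_type = cinder.volume_types.create('replicated')
replicated_type.set_keys({'capabilities:replication': '<is> True'})  # assumed, driver-specific key

# Volumes created with this type are scheduled onto replication-capable backends.
volume = cinder.volumes.create(10, name='db-volume', volume_type='replicated')
```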
Data Protection Enablement
Consistency Groups support was added to group volumes together for the purpose of application data protection (with a focus on snapshots of consistency groups for disaster recovery). The grouping of volumes is based on the volume type; future work remains to integrate this functionality with Cinder backups and volume replication.
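A rough sketch of the workflow, assuming the Juno-era python-cinderclient exposes consistency group managers as described here (the manager names and argument order are assumptions):

```python
# Rough sketch (python-cinderclient); the consistencygroups/cgsnapshots
# manager names and call signatures are assumptions, not confirmed API.
from cinderclient import client

cinder = client.Client('2', 'admin', 'secret', 'demo',
                       'http://controller:5000/v2.0')  # placeholder credentials

# Group volumes of a given volume type, then snapshot the whole group at once.
cg = cinder.consistencygroups.create('replicated', name='app-cg')   # assumed signature
snap = cinder.cgsnapshots.create(cg.id, name='app-cg-snap')         # assumed signature
```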
Another important aspect is maintaining consistency at the application and filesystem level. This is similar to the AWS ec2-consistent-snapshot feature, which produces consistent data in the snapshot by flushing and freezing the filesystem, and by flushing and locking the database where applicable. Similar functionality can be achieved in OpenStack with the QEMU guest agent during image snapshotting of KVM instances: the nova-compute libvirt driver can ask the QEMU guest agent to freeze the filesystems (and applications, if an fsfreeze-hook is installed) for the duration of the image snapshot. QEMU guest agent support is currently planned for the next release, to help automate daily or weekly backups of instances with consistency.
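A minimal sketch of that freeze/snapshot/thaw pattern, driving the QEMU guest agent directly through libvirt-python (the instance name and snapshot XML are placeholders, and the guest agent must be running inside the VM):

```python
# Minimal sketch (libvirt-python): quiesce guest filesystems via the QEMU
# guest agent, take a snapshot, then thaw. Domain name and XML are placeholders.
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # hypothetical instance name

snapshot_xml = "<domainsnapshot><name>consistent-snap</name></domainsnapshot>"

dom.fsFreeze(None)                               # None = freeze all mounted filesystems
try:
    dom.snapshotCreateXML(snapshot_xml, 0)       # take the snapshot while frozen
finally:
    dom.fsThaw(None)                             # always thaw, even if the snapshot fails
```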
Storage Management enhancements at the Dashboard level
The following Cinder API features were also added to the Horizon dashboard in the Juno cycle:
- Using Swift to store volume backups, and restoring volumes from these backups.
- Resetting the state of a snapshot.
- Resetting the state of a volume.
- Uploading a volume to the Image service (upload-to-image).
- Volume retype.
- QoS (quality of service) support.
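Behind the dashboard panels these map onto plain Cinder API calls; a minimal sketch of the backup and restore pair with python-cinderclient (credentials and the volume ID are placeholders):

```python
# Minimal sketch (python-cinderclient): back a volume up to the object store,
# then restore it. 'VOLUME_ID' and the credentials are placeholders.
from cinderclient import client

cinder = client.Client('2', 'admin', 'secret', 'demo',
                       'http://controller:5000/v2.0')

backup = cinder.backups.create('VOLUME_ID', name='nightly-backup')  # stored via the backup driver (e.g. Swift)
restore = cinder.restores.restore(backup.id)                        # restores into a new volume
```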
Support for Volume Pools via a new Pool-aware Scheduler
Until the Juno release, Cinder saw each volume backend as a whole, even if the backend consisted of several smaller pools with very different capabilities and capacities. This gap could cause situations where a backend appeared to have enough capacity to create a copy of a volume but in fact failed to do so. Extending Cinder to support storage pools within a volume backend has also improved the Cinder scheduler's decision making: it is now aware of storage pools within a backend and uses them as the finest granularity for resource placement.
Another Cinder scheduling gap was addressed with the new Volume Number Weigher, which lets the scheduler weigh a volume backend not only by free_capacity and allocated_capacity but also by the number of volumes it already hosts, improving volume I/O balancing across backends.
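A conceptual sketch of what pool-aware scheduling combined with a volume-number weigher does; this is not Cinder's internal scheduler API, just an illustration of the selection logic under those assumptions:

```python
# Conceptual sketch (not Cinder's internal API): each backend reports its pools,
# and the scheduler picks a pool with enough free space that hosts the fewest volumes.
from dataclasses import dataclass

@dataclass
class Pool:
    backend: str
    name: str
    free_capacity_gb: float
    volume_count: int

def pick_pool(pools, requested_gb):
    """Filter pools by free capacity, then weigh by volume count (fewest wins)."""
    candidates = [p for p in pools if p.free_capacity_gb >= requested_gb]
    return min(candidates, key=lambda p: p.volume_count) if candidates else None

pools = [
    Pool('backend-a', 'pool1', free_capacity_gb=500, volume_count=42),
    Pool('backend-a', 'pool2', free_capacity_gb=80,  volume_count=3),
    Pool('backend-b', 'pool1', free_capacity_gb=900, volume_count=17),
]
print(pick_pool(pools, requested_gb=100))  # backend-b/pool1: enough space, fewest volumes
```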
Glance Generic Catalog
The Glance Image Service introduced artifacts during Juno as a broader definition of images, expanding the scope of the image repository toward a generic catalog of various data assets. It is now possible to manage a catalog of metadata definitions, where users can register definitions to be applied to various resource types, including images, aggregates, and flavors. Support for viewing and editing the assignment of these metadata tags is included in Horizon; a small example of browsing the catalog follows the list below. Other key new features include asynchronous processing and image download improvements such as:
- Restart of partial downloads (solves a problem where downloads of very large images could be interrupted before completion due to dropped connections).
- A new image download restriction policy, allowing operators to restrict which users can download an image.
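As promised above, a minimal sketch of browsing the metadata definitions catalog with python-glanceclient; the endpoint and token are placeholders, and the metadefs_namespace manager name reflects my reading of the v2 client rather than a confirmed example from this release:

```python
# Minimal sketch (python-glanceclient v2): list the registered metadata
# definition namespaces. Endpoint and token are placeholders.
from glanceclient import Client

glance = Client('2', endpoint='http://controller:9292', token='AUTH_TOKEN')

# Each namespace groups metadata definitions that can be associated with
# resource types such as images, host aggregates, and flavors.
for ns in glance.metadefs_namespace.list():
    print(ns['namespace'])
```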
Introducing Swift Storage Policies
This long-awaited feature was finally released in the Juno cycle of the OpenStack Object Storage project, giving users more control over cost and performance by choosing how data is replicated and accessed across different backends and geographical regions.
Storage Policies allow the cluster to be segmented for various purposes through the creation of multiple object rings. Once policies are configured, users can create a container with a particular policy, as shown in the sketch after the list below.
Storage Policies can be set for:
- Different storage implementations: a different DiskFile (e.g. GlusterFS, Kinetic) for a group of nodes
- Different levels of replication
- Different performance profiles (e.g. SSD-only)
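A minimal sketch with python-swiftclient of creating a container bound to a named policy via the X-Storage-Policy header; the auth values and the policy name 'gold' are placeholders for whatever the operator has configured:

```python
# Minimal sketch (python-swiftclient): create a container under a specific
# storage policy, then upload an object into it. Auth values are placeholders.
import swiftclient

conn = swiftclient.Connection(
    authurl='http://controller:5000/v2.0',
    user='demo', key='secret', tenant_name='demo', auth_version='2')

# The X-Storage-Policy header selects the policy (and therefore the ring)
# that the container's objects will be stored under.
conn.put_container('backups', headers={'X-Storage-Policy': 'gold'})
conn.put_object('backups', 'hello.txt', contents=b'hello from a gold container')
```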
Other notable new Swift features include multi-ring awareness, with support for:
- Object replication - now aware of the different on-disk locations introduced by storage policies
- Large objects - refactoring work to support storage policies
- Object auditing - now aware of the different on-disk locations for objects in different storage policies
- Improved partition placement - allows for better capacity adjustments, especially when adding a new region to existing clusters
The swift-ring-builder has also been updated to prefer device weight first, and then use failure domains to break ties.
The progress on multi-ring support and storage policies is the foundation for the Swift erasure coding work that is ongoing in the Kilo release cycle. Erasure coding will be a storage policy with its own ring and configurable set of parameters, designed to reduce the storage costs associated with massive amounts of data (both operating and capital costs) by maintaining the same, or better, level of durability while using much less disk space. This is especially attractive for "warm" storage use cases, such as volume backups to a Swift object store, where backups are typically large compressed objects that are rarely read once they have been written.
To learn more about the new OpenStack storage features, see the OpenStack 2014.2 (Juno) release notes.