By Alok Srivastava, Senior Product Manager, Red Hat Gluster Storage and Data, Red Hat
Container-native storage, faster self-healing, sharding, and more
It’s a great time to be a storage aficionado! Last week, we announced Red Hat Ceph Storage 2. Today, we’re thrilled to announce the general availability of Red Hat Gluster Storage 3.1.3.
Building on momentum
Red Hat Gluster Storage has enjoyed strong momentum in terms of customer success and community growth. We’ve added a number of enterprise-class features over the past 3 to 4 years that have significantly enhanced performance, reliability, durability, and security.
Software-defined storage offers the best of both worlds—the flexibility to grow storage incrementally and reuse existing industry-standard hardware, while also taking advantage of the latest innovation in storage controller software and hardware components. For more on Red Hat Gluster Storage 3.1.3 features, check out the following video.
The 3.1.3 release of Red Hat Gluster Storage includes a number of feature enhancements that enable greater performance, reliability, and faster self-healing, including deep integration of Red Hat Gluster Storage with other Red Hat products.
Persistent storage for containers
You may have already seen our blog post from earlier today on container-native storage for OpenShift Container Platform. Earlier this year, we announced a containerized image of Red Hat Gluster Storage. This release moves a step further and enables converged storage containers that can co-reside with application containers on the same host. Sharing resources between application and storage containers helps reduce overall TCO. Containers are deployed and provisioned using an enhanced Heketi module.
Container-native storage provides storage services to the application containers by pooling and exposing storage from either local hosts or direct-attached storage.
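As a rough illustration, provisioning through Heketi might look like the following sketch (the topology file, volume size, and durability settings are hypothetical; check the Heketi documentation for the exact flags in your release):

```shell
# Describe the storage nodes and their raw devices to Heketi
heketi-cli topology load --json=topology.json

# Ask Heketi to carve a 10 GB three-way replicated volume out of the pool
heketi-cli volume create --size=10 --durability=replicate --replica=3
```

Heketi then handles brick placement and volume creation across the cluster, so the application platform only has to request capacity.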
Multi-threaded self-heal
All at once or one at a time? While the debate between single- and multi-threaded approaches is never-ending, Red Hat Gluster Storage self-heal certainly performs better for some workloads when it is parallelized. This release of Red Hat Gluster Storage allows you to perform self-heal in parallel. Multi-threaded self-heal is most useful with a large number of small files (e.g., sharded VM images). Facebook is the primary contributor to multi-threaded self-heal in the Gluster community.
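For example, the number of self-heal daemon threads can be raised with a volume option (the volume name and thread count below are illustrative; verify the option names against the documentation for your release):

```shell
# Raise the number of parallel self-heal daemon threads for the volume
gluster volume set myvol cluster.shd-max-threads 4

# Optionally deepen the queue of heal candidates each thread draws from
gluster volume set myvol cluster.shd-wait-qlength 1024
```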
Sharding
Sharding refers to breaking a large file into smaller chunks. In Red Hat Gluster Storage, sharding splits large virtual machine (VM) image files into small blocks of configurable size. This results in faster self-healing with lower CPU usage, which benefits the hyperconvergence of Red Hat Gluster Storage with Red Hat Enterprise Virtualization and the live VM use case.
The geo-replication feature of Red Hat Gluster Storage is also sharding-aware for these two use cases, so only the required shards are replicated.
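Sharding is enabled per volume; a minimal sketch, with an illustrative volume name and block size, looks like this (enable it before VM images are written, since existing files are not retroactively sharded):

```shell
# Turn on sharding for the volume
gluster volume set myvol features.shard on

# Choose the shard block size; larger blocks mean fewer shards per image
gluster volume set myvol features.shard-block-size 64MB
```

Because a heal after a brick outage only needs to copy the shards that changed, rather than the whole multi-gigabyte image, both heal time and CPU usage drop.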
Integration with VSS
We heard you! If you are a Windows user, you no longer need to call your storage administrator to browse through previous versions of a file or folder. Red Hat Gluster Storage is now integrated with the Volume Shadow Copy Service (VSS) of Microsoft Windows and supports viewing and accessing snapshots directly from Windows clients.
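A typical setup exposes Gluster snapshots to clients via user-serviceable snapshots; the commands below are an illustrative sketch with a hypothetical volume name (consult the administration guide for your release):

```shell
# Make snapshots browsable by clients through the .snaps directory
gluster volume set myvol features.uss enable
gluster volume set myvol features.show-snapshot-directory on
```

On the Samba side, the share is then typically configured with a shadow-copy VFS module so that the "Previous Versions" tab in Windows Explorer can list and restore snapshots.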
SMB Multichannel
Have more network adapters? Get better SMB performance! SMB Multichannel is a feature of the SMB 3.0 protocol that increases the network performance and availability of Red Hat Gluster Storage servers. It allows multiple network connections to be used simultaneously, providing increased throughput along with network fault tolerance. SMB Multichannel is provided as a technology preview feature with Red Hat Gluster Storage 3.1.3, and we intend to fully support it soon.
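On the Samba side, multichannel is switched on with a single global option; a minimal sketch of the relevant smb.conf fragment:

```ini
# /etc/samba/smb.conf -- illustrative fragment
[global]
        # Advertise and use multiple NICs/TCP connections per SMB 3.0 session
        server multi channel support = yes
```

Clients that speak SMB 3.0 will then discover the additional interfaces and spread traffic across them automatically.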
Easy installation of hyperconverged setup
We've ensured that installing the hyperconverged setup of Red Hat Gluster Storage and Red Hat Enterprise Virtualization is straightforward. The Ansible-based gdeploy tool has been enhanced to automate the installation of hyperconverged setups.
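gdeploy drives the whole setup from a declarative configuration file; the sketch below is illustrative only (hostnames, device names, and volume settings are hypothetical, and the exact section keys may differ by release):

```ini
# gdeploy.conf -- illustrative sketch of a two-node replicated setup
[hosts]
server1.example.com
server2.example.com

[backend-setup]
devices=/dev/sdb
mountpoints=/rhgs/brick1

[volume]
action=create
volname=vmstore
replica=yes
replica_count=2
```

Running `gdeploy -c gdeploy.conf` then prepares the bricks and creates the volume across both hosts in one pass.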
Kilo refresh for Gluster Swift
We've refreshed Gluster Swift to support OpenStack Kilo for RHEL 7-based Red Hat Gluster Storage. RHEL 6-based Red Hat Gluster Storage continues to support OpenStack Icehouse.
Scheduling of geo-replication
Periodic scheduling of geo-replication allows administrators to synchronize data between clusters during off-peak hours. Detailed performance and sizing guides, with prescriptive guidance for tuning the right price/performance mix for your workloads, will be available later this year.
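One simple way to confine synchronization to off-peak hours is to start and stop the geo-replication session on a schedule, for example from cron (the master volume, slave host, and times below are hypothetical):

```shell
# /etc/crontab entries -- start replication at 10 PM, stop it at 6 AM
0 22 * * * root gluster volume geo-replication mastervol slavehost::slavevol start
0 6  * * * root gluster volume geo-replication mastervol slavehost::slavevol stop
```

Because geo-replication tracks changes while stopped, the next scheduled run picks up where the previous one left off.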
Find us at Red Hat Summit
Red Hat Storage has an impressive presence at this year’s conference, with key announcements around object storage with Red Hat Ceph Storage and container-native storage with Red Hat Gluster Storage. Stop by Pods 31 and 32 of Booth 508 on the expo floor, speak with storage experts, or attend one of our sessions. You could even win a wicked-cool Amazon Echo (as seen in the Baldwin ads)!