by Veda Shankar, Product Marketing @ Red Hat
Red Hat Storage (RHS) delivers a different approach to the storage requirements of today’s data center: open, software-defined storage (SDS). This approach differs from the traditional, hardware-centric data storage model, which limits a business’s flexibility by tying it to proprietary vendor solutions. RHS, by contrast, is designed to solve storage challenges in software at petabyte scale.
What kinds of challenges are being solved? Data protection is chief among the challenges of storing data at petabyte scale, and most IT organizations are currently grappling with it in some form. While every business has unique data storage requirements, RHS is particularly valuable for organizations that require backup and long-term archival. With its ability to scale capacity and performance linearly, RHS is an ideal candidate for backup and disaster recovery. It gives users the ability to co-locate their data by geo-replicating it between data centers, both on-premises and in the public cloud.
What is geo-replication? Geo-replication is an important component of RHS, used primarily for disaster recovery. A typical deployment consists of two or more nodes, or servers, at a primary site. Data written at the primary site is asynchronously copied to a remote data center, which also runs RHS on two or more servers. These servers host a replicated volume, which is to say a copy of the volume at the original site. Because the replica is hosted at a remote location, replication can’t happen synchronously, but it does happen continuously. If a business’s primary site goes down, its remote site becomes active and clients can access their data there, making geo-replication an integral component of maintaining business continuity. RHS currently supports multiple large customers with hundreds of servers running critical applications that rely on this capability.
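For readers curious what this looks like in practice, here is a minimal sketch of setting up a geo-replication session using the GlusterFS command line (the open source project underlying RHS). The volume name `datavol` and the remote names `remotehost` and `remotevol` are placeholders, and the exact steps can vary by release, so treat this as illustrative rather than a complete procedure:

```shell
# Assumes: 'datavol' is an existing volume at the primary site, and the
# remote site already has a volume 'remotevol' hosted on 'remotehost'.
# All names below are hypothetical.

# Create the geo-replication session and push the required SSH keys
# to the remote site.
gluster volume geo-replication datavol remotehost::remotevol create push-pem

# Start asynchronous replication from the primary volume to the remote one.
gluster volume geo-replication datavol remotehost::remotevol start

# Verify that the session is active and data is being synced.
gluster volume geo-replication datavol remotehost::remotevol status
```

Once the session is started, changes written to the primary volume are picked up and synced to the remote volume in the background, without interrupting client access.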
Why does RHS prioritize geo-replication? Replication is a multi-fold challenge for scale-out storage architectures, which is why RHS has prioritized improvements in this area. Our latest release, Red Hat Storage 2.1, features improved geo-replication that can traverse very large storage deployments and detect data changes within the scale-out file system very quickly. This is the result of improvements in how file changes are detected across a distributed file system.
What’s next? Improvements in geo-replication efficiency are evident in very large-scale deployments that can now serve millions of customers simultaneously. Further improvements are expected early next year, when customers should gain the ability to run active/active geo-replication across both their primary and remote sites. This will replace the existing master/slave model, a change that benefits applications with active data centers at multiple locations that constantly sync data with each other.