Please note: Red Hat Storage Server has a new name: Red Hat Gluster Storage. Learn more about it, and Red Hat Ceph Storage, here: http://red.ht/1Hw7gYb
Just in case you missed it, we've got a rundown of the questions and, of course, answers that came up during the virtual event announcing the launch of Red Hat Storage Server 3 on October 2.
Our host for the panel was Irshad Raihan, product marketing manager for Red Hat. Answering questions for Red Hat was Ranga Rangachari, VP & GM of Storage and Big Data. Joining them was special guest Malik Sayed, manager of systems engineering at Verizon and a Red Hat Storage Server customer. Check it out after the jump!
How are new workloads driving the open software defined storage market?
If you look at the common threads today, they are punctuated by unstructured data and storage. Compute and storage need to be co-resident to get maximum efficiency, so these three workloads are essentially leading indicators of the applications generating unstructured data that needs to be managed without compromising on cost and control.
Is it possible to centralize data storage with RHSS when a connected client does not use Red Hat products (e.g., Windows)?
Instead of giving the vendor's view on this, it might be a good question for Malik Sayed at Verizon.
Thanks, Ranga, and thanks to the Red Hat team for having me here. I'm pleased to be part of this event and to share our success story with Red Hat customers and the open source community. Our environment requires us to store massive amounts of human and machine data: we serve thousands of images and videos in real time in a greater-than-1-petabyte clustered environment. We worked closely with Red Hat engineering teams to develop an HTTP-based Swift interface to help transfer data in and out of the storage footprint, increasing interoperability with our Windows environment. This has enabled us to scale up and out in a single namespace and take advantage of economies of scale by adding multiple frontends to the cloud storage backend provided by Red Hat.
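To make the Swift interface concrete, here is a minimal sketch of what an object upload over a Swift-style HTTP API looks like. The endpoint, account, container, and token below are hypothetical, not details from Verizon's deployment; the request shape (a `PUT` with an `X-Auth-Token` header) follows the standard OpenStack Swift object API.

```python
# Sketch only: endpoint, account, and token are hypothetical placeholders.
from urllib import request

SWIFT_ENDPOINT = "https://storage.example.com/v1/AUTH_demo"  # hypothetical

def build_put(container: str, name: str, data: bytes, token: str) -> request.Request:
    """Build (but do not send) a Swift-style object-upload request."""
    url = f"{SWIFT_ENDPOINT}/{container}/{name}"
    req = request.Request(url, data=data, method="PUT")
    req.add_header("X-Auth-Token", token)              # Swift auth token header
    req.add_header("Content-Type", "application/octet-stream")
    return req

req = build_put("images", "photo001.jpg", b"...", "hypothetical-token")
```

Because the interface is plain HTTP, any client platform, including Windows, can read and write objects without Red Hat software installed on the client side.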
How does software defined storage (SDS) fix problems with cloud storage?
Cloud means a lot of different things to different people. We view cloud in the context of an open hybrid cloud: on-prem you build a scale-out storage cloud, or you might want to move workloads to a public cloud. The traditional way of running storage is "tin-wrapped software" that is tightly entangled with hardware, and you can't just lift that software and move it to the cloud. The fundamental requirement for any storage application to run on the cloud, be it private, public, or hybrid, is that it has to be software defined. The open angle matters too, so you can move your workload across multiple clouds and aren't locked into one cloud provider.
Red Hat has been a proponent of the open hybrid cloud. How can Red Hat Storage enable customers on their journey to the open hybrid cloud?
Our belief around the open hybrid cloud is that you should be able to move workloads across physical, virtual, private, and public cloud environments, and storage is one key part that needs to be portable. One of the things we do with Red Hat Storage is support both on-prem deployments and public clouds like AWS. And the neat thing about our solution is that, because it has POSIX compatibility, a program written to run on-prem doesn't have to be rewritten to run on the public cloud. It gives you seamless portability regardless of whether you're running your app on the public or private cloud, and it gives customers the flexibility to start on the public cloud and then bring workloads in-house as they grow, for example.
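The POSIX-compatibility point can be illustrated with a short sketch. Because a GlusterFS volume mounts like any other filesystem, ordinary file I/O works unchanged whether the mount backs an on-prem cluster or one running in AWS; the mount path below is a hypothetical example, and the demo substitutes a temp directory so the snippet runs anywhere.

```python
# Sketch: /mnt/gluster is a hypothetical FUSE mount of a Gluster volume.
# The same plain POSIX calls work on-prem or in a public cloud deployment.
import os
import tempfile

def write_record(mount_point: str, name: str, payload: str) -> str:
    path = os.path.join(mount_point, name)
    with open(path, "w") as f:   # ordinary open/write, no special storage SDK
        f.write(payload)
    return path

# For demonstration, use a temp dir in place of the real mount:
demo_mount = tempfile.mkdtemp()
p = write_record(demo_mount, "record.txt", "hello")
```

The design point is that portability comes from the filesystem interface itself: the application never links against a cloud- or vendor-specific API, so nothing has to change when the volume moves.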
How does this differ from companies like EMC, what advantages does it offer over these big storage players?
The way we view the market, software defined is a huge differentiator, and with today's Red Hat Storage Server 3 announcement we've broadened the aperture. We had support for 60-70 x86 server platforms; with this release we support over 300 server hardware platforms. The ability to give customers the choice to run the software on industry-standard x86 hardware is where they really love our solution: they don't want to be locked into proprietary hardware or anything that would hinder their ability to grow.
What is Red Hat’s definition of software defined storage, and how are the security mechanisms improved and differentiated compared to previous releases?
There are different views on SDS. Whenever we talk to our customers, they view SDS as fundamentally two pieces. One is the ability to leverage the hardware advancements going on in x86 servers. The other is that the intelligence has to continue to move to the software side. Our view is that the third element, in addition to SDS, is community-driven innovation; that's what separates us from the rest of the software storage approaches.
What innovations are your engineers working on to bring a smoother journey to a software defined data center?
Lots of things! Just in the context of Red Hat Storage Server, there are cool things going on upstream. Any new innovation or technology happens upstream, out in the open in the community. If you look at the Gluster.org community you can see what's being worked on today: things like bit-rot detection, erasure coding, and NFSv4 support. These are things that really help customers move toward a software defined data center; we take advantage of hardware innovation like flash arrays, but more and more of the intelligence moves to the software side of things.
What proof points do we have for the storage total cost of ownership compared to legacy methods or competitors?
There is a lot of data from work with our partners; it's not unfathomable that, for different hardware configurations at petabyte scale, you can get cost advantages of almost 33-44%. As disk densities continue to increase and servers support more disk capacity, customers can leverage that capacity in the software and hardware they have under management. And as innovation in the community continues, our customers gain a true advantage not just from a technology standpoint but from an opportunity standpoint.
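As a back-of-the-envelope illustration of how a per-terabyte cost advantage in that range is computed, here is a sketch with hypothetical dollar figures (the numbers are assumptions for arithmetic only, not figures from the event or from Red Hat pricing):

```python
# Hypothetical numbers illustrating a per-TB cost comparison between a
# proprietary appliance and a software-defined deployment on x86 servers.
def cost_per_tb(total_cost: float, usable_tb: float) -> float:
    return total_cost / usable_tb

appliance = cost_per_tb(total_cost=900_000, usable_tb=1000)  # $900/TB (assumed)
sds       = cost_per_tb(total_cost=540_000, usable_tb=1000)  # $540/TB (assumed)
advantage = 1 - sds / appliance                              # fractional savings
print(f"cost advantage: {advantage:.0%}")                    # prints "cost advantage: 40%"
```

With these assumed inputs the advantage lands at 40%, inside the 33-44% range quoted above; real results depend entirely on the hardware configuration and scale.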
What is the future of the Red Hat Storage portfolio? Will there be a conscious effort to combine the Ceph and Gluster communities?
The Gluster and Ceph communities continue to thrive independently and very well. Over the last nine months we've seen almost 2,000,000 downloads of the Gluster and Ceph projects, and the innovation going on in both projects will continue unabated. With respect to the product, now that Red Hat Storage Server 3 is out the door and we've released Inktank Ceph Enterprise 1.2, we have the baseline today to go back to customers and partners with a consolidated vision of where this journey is going. If I look at the assets we have, the Ceph/Inktank acquisition really rounded out our capabilities in object and block storage.
Beyond capacity, how do you see workloads changing performance requirements of storage, and how does SDS evolve to address key metrics?
Capacity is absolutely one way to measure scale. The other side is performance: how do you scale linearly without compromising on performance? There are many ways to talk about this, but one data point is that innovation on the hardware side, like SSDs and flash arrays, gives us the platform to leverage not just the scale aspect but also performance. We have lots of benchmarks available on our web site, but between RHSS 2 and today's version, in internal testing we've literally doubled the number of storage nodes, from 64 to 128. From a usable-capacity standpoint, that means customers can build and manage a storage cloud of up to 19 PB. That's a lot of data; one way to bring it to the forefront is that it equates to 129 billion photos on Facebook. That's the level of scale we're talking about, and as we go along the scale journey we've made sure not to compromise on performance.
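As a quick sanity check on the scale figures quoted above (using decimal petabytes; the per-node and per-photo derivations are my arithmetic, not numbers stated in the event):

```python
# Back-of-the-envelope check of the quoted scale figures.
PB = 10**15                  # one decimal petabyte, in bytes

nodes = 128                  # storage nodes after the doubling from 64
usable = 19 * PB             # 19 PB usable capacity
per_node = usable / nodes    # ~148 TB of usable capacity per storage node

photos = 129 * 10**9         # 129 billion photos
per_photo = usable / photos  # ~147 KB per photo, a plausible average size
```

The per-photo figure of roughly 147 KB is consistent with typical compressed photo sizes, which is what makes the 129-billion-photos comparison hang together.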