[rhos-list] RHOS and Ceph

Steven Ellis sellis at redhat.com
Fri Apr 19 21:46:41 UTC 2013


Wow, some great discussion.

I'm with Paul. Let's look at some real SAN hardware for big I/O at the
moment. A lot of customers already have that for their existing VMware /
RHEV backends.

Then RHS (Gluster) is a great fit for object and other lower-I/O use cases.

At Linux.conf.au back in January I ran into a widespread perception that
Ceph is the default, or even a requirement, for OpenStack, and that
perception can be quite a struggle to overcome once it takes hold.

I'm open to other suggestions for positioning RHOS on different storage
backends.
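
To make that concrete: with the range of Cinder drivers in Grizzly that Paul
mentions below, plus Grizzly's multi-backend support, a site can expose
Gluster and a SAN driver side by side and steer workloads between them with
volume types. Here's a minimal, untested sketch using python-cinderclient --
the backend names, credentials and endpoint are purely illustrative and
assume matching volume_backend_name entries in cinder.conf:

    # Untested sketch: steer Cinder volumes to different backends via
    # volume types. "GLUSTER" / "SAN" are assumed volume_backend_name
    # values configured under enabled_backends in cinder.conf.
    from cinderclient.v1 import client

    cinder = client.Client('admin', 'secret', 'admin',
                           'http://keystone.example.com:5000/v2.0')

    # One volume type per class of storage: Gluster for general use,
    # a SAN driver for the high-I/O block workloads.
    gluster = cinder.volume_types.create('gluster')
    gluster.set_keys({'volume_backend_name': 'GLUSTER'})

    san = cinder.volume_types.create('san-high-io')
    san.set_keys({'volume_backend_name': 'SAN'})

    # Users (or Nova) then pick the storage class per volume.
    cinder.volumes.create(100, display_name='db-data',
                          volume_type='san-high-io')

(There's also a rough rbd/KVM sanity check at the bottom of this mail, below
the quoted thread, picking up Joey's point.)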

Steve

On 04/20/2013 06:16 AM, Paul Robert Marino wrote:
> Um hum
> If you want high block-level I/O performance, why not use one of the many
> SAN or NAS drivers? Grizzly has quite a few of them, and honestly
> that's the only way you will get any real I/O performance.
>
>
>
> -- Sent from my HP Pre3
>
> ------------------------------------------------------------------------
> On Apr 19, 2013 1:11 PM, Joey McDonald <joey at scare.org> wrote:
>
> Simply enabling support for it is not the same as supporting it. Ceph
> can already be used via the FUSE-based CephFS file system. I think
> the concepts are similar.
>
> Two things are needed: the rbd kernel module and the Ceph hooks in KVM.
> Then, let the Ceph community offer 'support'.
>
> Is this not what was done for Gluster before they were acquired? It is
> Linux after all... kumbaya.
>
>
>
>     On Fri, Apr 19, 2013 at 10:36 AM, Pete Zaitcev <zaitcev at redhat.com> wrote:
>
>     On Fri, 19 Apr 2013 18:03:12 +1200
>     Steven Ellis <sellis at redhat.com> wrote:
>
>     > One of their key questions is when (note when, not if) Red Hat will
>     > be shipping Ceph as part of their enterprise-supported OpenStack
>     > environment. From their perspective RHS isn't a suitable scalable
>     > backend for all their OpenStack use cases, in particular
>     > high-performance block I/O.
>
>     Okay, since you ask, here's my take, as an engineer.
>
>     Firstly, I would be interested in hearing more. If someone has made
>     up their mind in such terms, there's no dissuading them. But if they
>     have a rational basis for saying that "high performance I/O block" in
>     Gluster is somehow deficient, it would be very interesting to learn
>     the details.
>
>     My sense of this is that we're quite unlikely to offer support for
>     Ceph any time soon. First, nobody so far has presented a credible
>     case for it, as far as I know, and second, we don't have the
>     expertise.
>
>     I have seen cases like this before, in the sense that customers come
>     to us thinking they have all the answers and that we had better do as
>     we're told. This is difficult because on the one hand the customer is
>     always right, but on the other hand we always stand behind our
>     supported product. It happened with reiserfs and XFS: we refused to
>     support reiserfs, while we do support XFS. The key difference is that
>     reiserfs was junk, and XFS is not.
>
>     That said, XFS took a very long time to establish itself -- years.
>     We had to hire Dave Chinner to take care of it. Even if the case for
>     Ceph gains strength, it takes time to establish in-house expertise
>     that we can offer as a valuable service to customers. Until that
>     time, selling Ceph would be irresponsible.
>
>     The door is certainly open to it. Make a rational argument, be
>     patient, and see what comes out.
>
>     Note that a mere benchmark for "high performance I/O block" isn't
>     going to cut it. Reiser was beating our preferred solution, ext3. But
>     in the end we could not recommend a filesystem that ate customer
>     data, and stuck with ext3 despite the lower performance. I'm not
>     saying Ceph is junk at all, but you need a better argument against
>     GlusterFS.
>
>     -- Pete
>
>
>
>
>
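
Picking up Joey's point about the rbd kernel module and the Ceph hooks in
KVM: here's a rough, untested sketch of how one might sanity-check a host
for both pieces. The commands assume a typical RHEL/Fedora box; this is
illustrative only, not a supported procedure:

    # Untested sketch: check for the rbd kernel module and for RBD
    # support in the local qemu-img build.
    import subprocess

    def kernel_has_rbd():
        # 'modprobe -n' is a dry run; it fails if the module is unavailable.
        return subprocess.call(['modprobe', '-n', 'rbd']) == 0

    def qemu_has_rbd():
        # qemu-img lists its supported formats in its help text; the exit
        # status varies between builds, so just scan the output.
        proc = subprocess.Popen(['qemu-img', '--help'],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        return b'rbd' in out

    if __name__ == '__main__':
        print('rbd kernel module available: %s' % kernel_has_rbd())
        print('qemu-img built with rbd support: %s' % qemu_has_rbd())

Neither check says anything about supportability, of course -- that's Pete's
point about expertise.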


-- 
Steven Ellis
Solution Architect - Red Hat New Zealand <http://www.redhat.co.nz/>
T: +64 9 927 8856
M: +64 21 321 673
E: sellis at redhat.com

