
Re: [Linux-cluster] CLVM and AoE

Hey Bowie,

Wow.. That's perfect.  Thanks for the response.

I have a question about whether GFS is a requirement.  Since each LV is a separate partition mounted by a single Xen guest, does GFS make sense, or can we use ext3/XFS/etc.?


Bowie Bailey wrote:
Aaron Stewart wrote:
I'm currently in the process of setting up a Coraid ATA-over-Ethernet
device as backend storage for multiple systems that export
individual partitions to Xen virtual servers.  In our discussions
with Coraid, they suggested looking into CLVM to handle this.

Obviously, I have some questions.. :)

- Has anyone used this kind of setup?  I have very little experience
with Red Hat's cluster management, but a fairly high level of
expertise overall in this arena.

I don't know anything about Xen, but I am using this same basic setup
on my systems.

- How does management of LVM logical volumes occur?  Do we need to
maintain one server that administers the volume group?

The management is distributed.  You can manage the cluster and volume
groups from any node.
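As a sketch of what that looks like in practice: with clvmd running on every node, a clustered volume group can be created and managed from any one of them (the device path and volume names below are hypothetical; AoE LUNs appear as /dev/etherd/e<shelf>.<slot>):

```sh
# Run on any node; clvmd must be running cluster-wide.
pvcreate /dev/etherd/e0.0                    # initialize the AoE LUN as a PV
vgcreate -c y VolGroup00 /dev/etherd/e0.0    # -c y marks the VG as clustered
lvcreate -L 10G -n xenvm01 VolGroup00        # LV becomes visible on all nodes
lvs                                          # run on another node to confirm
```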

- What kind of pitfalls should we be aware of?

Some people have complained about throughput issues with GFS.  Our
application doesn't require high throughput, so I can't comment on
this.  I haven't found any issues in my testing so far.

Can anyone point to any experience or HOWTOs that discuss setting
something like this up?

There are a few documents, but most of the ones that I've seen are out
of date.  If you have specific questions, you can ask here.

If you don't have it already, here is the yum config with the current
cluster RPMs for CentOS.  Just drop it in a file in /etc/yum.repos.d/.
Note that the current cluster RPMs are for the new 2.6.9-34.EL kernel.

name=CentOS-4 - CSGFS

The only thing you need to build from source is the AoE driver from
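A typical out-of-tree module build looks roughly like this (exact make targets may vary between driver releases, so treat this as a sketch):

```sh
# Build against the running kernel's headers, then load the module.
tar xzf aoe6-*.tar.gz
cd aoe6-*
make
make install        # installs aoe.ko under /lib/modules/$(uname -r)/
modprobe aoe        # exported LUNs then appear as /dev/etherd/e<shelf>.<slot>
```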

Here's the setup:

1. Coraid SR1520 configured as one lblade, exported via AoE on a
dedicated storage network as one LUN
2. CentOS 4.2 on all cluster nodes
3. Logical volumes are masked when passed into Xen, so on the
Dom0 controller a volume looks like /dev/VolGroup00/{xenvmID} (which
shows up in the guest as /dev/sda1)
4. Only one host needs access to a given logical volume at any given
time.  If a migration needs to occur, the volume should be unmounted and
remounted on another physical system.

This can be done, but the cluster will not do it for you.  Each
logical volume can be accessed by as many nodes as you need.  Note
that you need one GFS journal per node that needs simultaneous access.
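For example, a GFS filesystem sized for three nodes mounting it simultaneously might be created like this (the cluster name, filesystem name, and LV path are hypothetical):

```sh
# One journal per node that will mount the filesystem at the same time (-j 3).
gfs_mkfs -p lock_dlm -t mycluster:xenvol -j 3 /dev/VolGroup00/xenvm01
# Journals can be added later with gfs_jadd if more nodes are needed.
```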

5. Because AoE runs directly over Ethernet (it is a layer-2, non-IP
protocol), it can coexist with IP on the same network interface, so we
can transport cluster metadata over the same interface.  Barring that,
there is a second (public) interface on each box.
6. We want to avoid a single point of failure (such as a second AoE
server that exports LUNs from LVM LVs).

Now that DLM is the recommended locking manager, everything is
distributed.  Your only single point of failure is the Coraid box.
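On the shared-interface question above, the aoetools utilities can confirm that AoE targets are visible alongside IP traffic (assuming aoetools is installed; the interface name is an example):

```sh
aoe-interfaces eth1   # optionally restrict AoE to the dedicated storage NIC
aoe-discover          # probe the network for AoE targets
aoe-stat              # list discovered shelves/slots and their sizes
```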

Aaron Stewart
Director of Engineering
FireBright, Inc.
Suite 120-112, 4460 Natomas Blvd., Sacramento, CA 95835, US
aaron firebright com
