
Re: [Linux-cluster] RAIDing a CLVM?

Patton, Matthew F, CTR, OSD-PA&E wrote:

Classification: UNCLASSIFIED

Call this a screwy idea, but I can't seem to find a relevant thread.

My cluster is made up of machines, each with a SINGLE hard drive. I want to use half of each disk, pool them all together, and make a RAID 10 or RAID 5 set. Then I want to be able to access said volume read-write from any node, and should a node die, the filesystem keeps running.

I can't think of a way to combine (C)LVM, GFS, GNBD, and MD (software RAID) and make it work, unless just one of the nodes becomes the MD master and exports the result via NFS. Can it be done? Do commercial options exist to pull off this trick?
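For what it's worth, the single-master layout described above would look roughly like the following sketch. This is only an illustration of the idea, not a tested recipe: the hostnames, device paths, export names, and partition layout are all made up, and the GNBD userland flag spellings may differ between versions.

```shell
# On each storage node: export the spare half-disk partition over GNBD.
# "-e" names the export, "-d" is the local block device (illustrative).
gnbd_export -v -e node1_half -d /dev/sda2

# On the one node acting as MD master: import every other node's export...
gnbd_import -v -i node1
gnbd_import -v -i node2
gnbd_import -v -i node3
gnbd_import -v -i node4

# ...assemble the imports into a single software RAID 10 set...
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/gnbd/node1_half /dev/gnbd/node2_half \
    /dev/gnbd/node3_half /dev/gnbd/node4_half

# ...and serve the result to the rest of the cluster over NFS.
mkfs.ext3 /dev/md0
mount /dev/md0 /export
echo '/export *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra
```

The obvious catch, as the original question implies, is that the MD master becomes a single point of failure: lose that node and the whole array goes away, which defeats the point of mirroring across nodes in the first place.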

You _can_ use lvcreate -m to create a CLVM'd mirror and run GFS across it. It's still rather young, though; I'm not sure I'd do anything but play around with it at the moment. It requires at least three PVs, and you don't really get to specify where things are laid out.
   # lvcreate -m 1 -n mirror1 --alloc anywhere -L 4G vg
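Once that mirrored LV exists, putting GFS on it and mounting it cluster-wide would look something like this. The cluster name, journal count, and mount point below are illustrative; the third PV is needed because LVM keeps the mirror log on a separate device unless you use --corelog.

```shell
# Make a GFS filesystem on the mirrored LV: lock_dlm gives
# cluster-wide locking; "-t" is clustername:fsname; "-j" is one
# journal per node that will mount it (4 here, illustrative).
gfs_mkfs -p lock_dlm -t mycluster:mirror1 -j 4 /dev/vg/mirror1

# Mount it on each node in the cluster.
mount -t gfs /dev/vg/mirror1 /mnt/gfs
```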

I blogged a bit about it:


Does anyone know what happened to dd-raid? Daniel Phillips at Red Hat had a really neat RAID 3.5 thing going that seemed a _lot_ like ZFS in many respects, with the benefit of being cluster-aware.

My backup plan is to define a shared SAN volume intended for high write volume, partition it, and have each node that needs to scribble manage its own partition and filesystem. It's then up to me to make sure no two nodes try to own the same piece.
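That backup plan might be carved up along these lines (device path, partition names, and sizes are all illustrative). Note there is nothing technical enforcing the ownership split: the "no two nodes own the same piece" rule is purely administrative, which is exactly the weakness described above.

```shell
# Carve the shared SAN LUN into one partition per node.
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart node1 0% 25%
parted /dev/sdb mkpart node2 25% 50%
parted /dev/sdb mkpart node3 50% 75%
parted /dev/sdb mkpart node4 75% 100%

# Each node then formats and mounts ONLY its own partition with an
# ordinary local (non-cluster) filesystem, e.g. on node1:
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /scratch
```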

I too would like to see a commodity solution for network mirrored storage used in a cluster configuration without some form of shared media between the cluster nodes.

My experiments found that:
1) AoE doesn't like it when a target disappears. Restarting vblade isn't enough, you get to reload the aoe driver on the other node.
2) Xen 3.0 has some serious UDP checksum corruption issues.
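For point 1, the recovery dance on the initiator side amounts to something like the following. This assumes the aoetools package is installed; the mount point is illustrative and exact steps may vary by driver version.

```shell
# After the vblade target has come back on the server:
# restarting vblade alone isn't enough -- the aoe module on the
# client has to be reloaded before it will see the target again.
umount /mnt/aoe              # illustrative mount point
rmmod aoe
modprobe aoe
aoe-discover                 # from aoetools: rescan for AoE targets
mount /dev/etherd/e0.0 /mnt/aoe
```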

Hope this helps.

- Ian C. Blenke <ian blenke com> http://ian.blenke.com/
