
Re: [Linux-cluster] Quorum disk over RAID software device



Thomas Meller wrote:
> You're right, I am unclear.
>
> Some years ago, we tried two versions: storage-based
> mirroring and host-based mirroring. As the processes were
> too complicated in our company, we decided to mirror the
> disks host-based. So currently there is a /dev/md0
> (simplified) consisting of sda (in Bern) and sdb (in Zurich),
> and each node has its own root-fs exclusively.

Since an MD RAID array cannot safely be assembled and used from more than one node at the same time, I can only assume that you have a fail-over rather than an active-active solution.
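For illustration, a two-way mirror of the sort you describe is typically put together along these lines (the device names are just taken from your description, the rest is assumed):

    # mirror the two SAN LUNs, one from each site
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    # record the array so the initrd can assemble it at boot
    mdadm --detail --scan >> /etc/mdadm.conf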

> This cannot work with a shared GFS: several machines are
> doing updates on the FS, and no central instance always
> knows the current state of the device's contents, so
> host-based mirroring is not possible there.

I thought you had an MD device doing the mirroring...

> You are talking about storage-based mirrors. In case of a
> failure, we would have to tell the storage system to use
> the second mirror as primary and direct our nodes to write
> to sdb instead of sda.

Right - so you are using it as active-passive, then.

> That will involve controlling the storage from our machines
> (our storage people will love the idea) and installing the
> storage-specific software on them.

Or you can have DRBD do the mirroring and fail-over handling for you on whatever device(s) you have exposed to the servers.
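To give a rough idea, something along these lines in drbd.conf would do it (the hostnames, addresses and backing devices below are made up, so treat it as a sketch rather than a working config):

    resource r0 {
        protocol C;                    # synchronous replication between the sites
        net { allow-two-primaries; }   # only needed if you want GFS active-active on top
        on node-bern {
            device    /dev/drbd0;
            disk      /dev/sda;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on node-zurich {
            device    /dev/drbd0;
            disk      /dev/sdb;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

DRBD then takes care of resynchronisation after a site failure, so none of it depends on the storage vendor's mirroring features.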

> If the hardware in use changes, we need to re-engineer this
> solution and adapt to the new storage manufacturer's
> philosophy, if at all possible.

Well, you'll always need to at least make sure you have a suitable SCSI driver available - unless you use something nice and open like iSCSI SANs.
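With open-iscsi, for example, pointing a node at a different array is just a discovery and a login (the portal address and target IQN below are placeholders):

    iscsiadm -m discovery -t sendtargets -p 10.0.0.10
    iscsiadm -m node -T iqn.2009-01.example.com:storage.lun1 -p 10.0.0.10 --login

so there is no vendor-specific driver to re-engineer when the storage changes.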

> I still have a third option. I can use QLogic's driver-based
> multipathing and keep using host-based mirroring instead of
> using dm-multipath, which currently prevents me from setting
> up RAID devices as root-fs.

I'm still not sure how all these relate in your setup. Are you saying that you are using the qlogic multi-path driver pointing at two different SANs while the SANs themselves are sorting out the synchronous real-time mirroring between them?

Well, that will work, but is somewhat ugly.
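For reference, the dm-multipath side of things is normally just a small stanza in /etc/multipath.conf (the WWID and alias below are placeholders), so the configuration itself is not the hard part:

    defaults {
        user_friendly_names yes
    }
    multipaths {
        multipath {
            wwid  360000000000000000e00000000010001
            alias mirror-leg-bern
        }
    }

The root-fs limitation you mention is a separate issue, of course.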

> So far, I have only had a brief glimpse at OSR. I think I
> will need to dive deeper.

It sounds like you'll need to add support for the proprietary qlogic multi-path stuff to OSR before it'll do exactly what you want, but other than that, the idea behind it is to enable you to have a shared rootfs on a suitable cluster file system (GFS, OCFS, GlusterFS, etc.). It's generally useful when you need a big fat initrd (although there has been a significant effort to make the initrd go on a serious diet over time) to bootstrap things such as RHCS components, block device drivers (e.g. DRBD), or file systems that need a fuller environment to start up than a normal initrd provides (e.g. GlusterFS, things that need glibc, etc.).
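To make that concrete, the shared root itself is nothing exotic: it is just a cluster file system that every node mounts as its root. Roughly (the cluster name, fs name and underlying shared block device below are made up):

    # create the shared root on whatever shared block device you end up with
    gfs_mkfs -p lock_dlm -t mycluster:sharedroot -j 2 /dev/drbd0
    # the OSR initrd then brings up the cluster stack on each node and
    # effectively does the equivalent of:
    mount -t gfs /dev/drbd0 /sysroot

The initrd is big precisely because it has to carry enough of the cluster stack (cman, DLM, the block device drivers, etc.) to get that mount done before the real root exists.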

Gordan

