[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Linux clustering (one-node), GFS, iSCSI, clvmd (lock problem)

On Tue, 16 Oct 2007, Paul Risenhoover wrote:

> > I admit I don't know much about clustering, but from the evidence I
> > see, the problem appears to be isolated to clvmd and iSCSI, if only
> > for the fact that my direct-attached clustered volumes don't exhibit
> > the symptoms.
>
> Will let LVM folks comment on rest of the issues. However, if you
> intend to use this as a single-node case, are you aware that both GFS
> and GFS2 support a "lock_nolock" protocol that doesn't require CLVMD?
> It can be run on a plain storage device (say /dev/sda1) and doesn't
> have any locking overhead. Do a "man gfs_mkfs" and search for
> "LockProtoName". A sample mkfs-mount command looks like the following:
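The sample command itself didn't survive in the quote; as a hedged sketch of what a single-node lock_nolock setup looks like (the device and mount point here are placeholders, not from the original mail):

```shell
# Make a GFS filesystem with no cluster locking (single node only).
# -p lock_nolock : no DLM and no CLVMD required
# -j 1           : one journal is enough for a single mounter
gfs_mkfs -p lock_nolock -j 1 /dev/sda1

# Mount it like any local filesystem.
mkdir -p /mnt/gfs
mount -t gfs /dev/sda1 /mnt/gfs
```

A lock_nolock filesystem must never be mounted from more than one node at a time, since nothing coordinates access.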

> > Err - possibly a misunderstanding, but GFS/GFS2 doesn't require
> > LVM/CLVM. You can run on a raw device without volume management.
>
> Ouch. Good to know. If I use raw devices can I grow and shrink
> volumes?

Sure - assuming your underlying volume provisioning supports it (e.g. if your iSCSI SAN allows you to grow volumes - it's a pretty lame SAN if it doesn't support something that is so trivial to implement).

The procedure would be to grow the block device on the SAN, and then use the file system growing utility (as you would do if you enlarged the volume using LVM) to make the FS expand onto the new, enlarged block device.
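As a sketch of that procedure (the device name and mount point are assumptions; gfs_grow operates on a mounted filesystem, and the SAN-side step is vendor-specific):

```shell
# 1. Grow the LUN on the SAN itself (vendor-specific, not shown).

# 2. Make the initiator notice the iSCSI device's new size.
echo 1 > /sys/block/sda/device/rescan

# 3. Grow the mounted GFS filesystem into the new space, online.
gfs_grow /mnt/gfs
```

On GFS2 the equivalent tool is gfs2_grow.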

LVM is, IMNSHO, a solution for a non-problem now that we have online RAID growing capability and SANs can grow volumes at the press of a button.

> The specific need is to be able to take a physical device out of
> service (ie, one of my iSCSI devices) so that I can restripe it or
> replace it.

LVM won't help you with that.

What you really need to do is have two mirrored fail-over SANs, so you can down one, do whatever you need to do, re-mirror it, then repeat on the other one.

> Here's another scenario: I've got two existing physical devices of
> ~3TB each, both are members of the nasvg_00 volume group (using
> clvmd), plus a third physical device that I'm trying to bring online.
> Is there a migration path that allows me to format the new physical
> device with gfs/raw, join it to the existing gfs file system, and then
> migrate the other physical devices (one by one) to a gfs/raw format?

You cannot "merge" two existing partitions into one. The file system won't be a single file system that spans both. You'd still have to put your data somewhere else, merge the devices into a single volume, and then grow the FS onto the additional space.

In practice, however, a decent SAN solution (one you could build with COTS hardware and OSS Linux tools) will let you grow a SAN pretty much indefinitely, in terms of space. Bandwidth capacity of your ethernet will become an issue before the space becomes an issue.

Note that on RHEL4, there is a partitioning issue - for some reason, fdisk won't let you make partitions bigger than about 1TB. This may have been fixed in RHEL5, I don't know, I haven't tried it. But this isn't necessarily a problem because you can just use raw block devices, and most file systems can cope nowadays.
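For instance, a hedged sketch of skipping the partition table entirely (the cluster name, filesystem name, journal count and device below are placeholders):

```shell
# Confirm the kernel sees the full device size (in bytes).
blockdev --getsize64 /dev/sdb

# Put GFS straight on the whole device - no fdisk, so no partition
# size limit. Clustered here, hence lock_dlm and a lock table name
# of the form clustername:fsname, one journal per node.
gfs_mkfs -p lock_dlm -t mycluster:nasfs -j 3 /dev/sdb
```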

