
Re: [Linux-cluster] CLVMD without GFS



Hi,
 
From my understanding, any change request (lv, vg, pv, ...) should be blocked as long as a lock is held by another live node in the cluster.
 
I believe the "exclusive" flag was originally introduced to address exactly this.
 
I think there is a sort of misconception around LVM2, perhaps due to the fact that many people assume a cluster is necessarily a "share-everything" infrastructure (à la VMS).
 
This is the right approach for clustered file servers (NFS, CIFS), web servers, etc., where one wants to take advantage of load balancing user sessions across multiple nodes.
 
The other need for clustering is to run "single-instance" databases, which by design (Oracle RAC excepted) are not meant to run across multiple nodes.
 
In this case, only one node holds the database instance with all of its storage, and putting it on a "share-everything" cluster based on a cluster filesystem (GFS) would imply:
 
     - Performance penalty: every storage IO has to go through the FS lock manager before it can be executed.
         --> Managing the locks at a lower level (VG and LV) would not imply this: once the VG is exclusively activated, no other node can remove the lock, and every IO is done on a regular FS (ext3, for instance) without any lock management.
     - Security issue: to bypass this lock mechanism, one could start the clustered FS without locks, but this will almost certainly lead to FS corruption.
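To make the intended usage concrete, the exclusive-activation workflow argued for above could look like this (an illustration only: the VG name, LV name and mount point are invented, and the exact failover steps depend on the cluster manager in use):

```shell
# Mark the VG as clustered so clvmd coordinates its activation (VGX,
# lvoly and /data are made-up example names):
vgchange -c y VGX

# On the node that owns the service, activate the VG exclusively; while
# this node is alive, other nodes should be refused activation:
vgchange -a ey VGX
mount /dev/VGX/lvoly /data     # a regular ext3 FS, no cluster FS locking

# On failover (the owning node has crashed and been fenced), the
# takeover node acquires the exclusive activation and remounts:
vgchange -a ey VGX
fsck -p /dev/VGX/lvoly && mount /dev/VGX/lvoly /data
```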
 
Brem



2009/7/28, Xinwei Hu <hxinwei gmail com>:
Hi Brem,

I guess the cause of the problem is the use of 'lvchange -ay'.

Clvmd actually does lock conversion underneath. So when you tried
'lvchange -ay', the exclusive lock was converted to a non-exclusive
lock, and that means the second try will succeed anyway.
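If that is what happens, the behaviour Brem describes could be reproduced along these lines (a sketch only: the VG/LV names are invented, and the comments describe the suspected lock conversion rather than confirmed clvmd internals):

```shell
# node1: activate the VG exclusively, taking the exclusive (EX) lock
vgchange -a ey VGX

# node2: first attempt is refused, as expected
lvchange -a y /dev/VGX/lvoly   # reports the volume as locked elsewhere

# node2: second attempt succeeds, presumably because the first attempt
# already converted the exclusive lock into a shared one
lvchange -a y /dev/VGX/lvoly
```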

2009/7/28 brem belguebli <brem belguebli gmail com>:
> Hi Rafael,
> On RHEL 5.3, locks through DLM aren't reliable all the time: I've been able to
> activate a VG on one node of the cluster even though it was already activated
> exclusively on another node.
> Also, I've been able to activate an LV on a node not holding the exclusive
> lock on the VG by running lvchange -a y /dev/VGX/lvoly twice.
> On the first try lvchange tells you there is a lock; the second try activates it.
> Checking the DLM debug info (mount -t debugfs debug /debug), /debug/dlm/clvmd_locks
> shows a lock for nodeid 0 ....
> Brem
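The debugfs check mentioned above boils down to the following (paths as given in the message; how to interpret the nodeid column is exactly what is in question here):

```shell
# Expose the kernel debug filesystem, then dump clvmd's DLM lock table:
mount -t debugfs debug /debug
cat /debug/dlm/clvmd_locks     # lists lock IDs, grant modes and node IDs
```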
>
> 2009/7/27 Rafael Micó Miranda <rmicmirregs gmail com>
>>
>> Hi Brem
>>
>> So, does it work successfully? I did some testing before I submitted it
>> to the list and AFAIK I found no errors.
>>
>> What do you mean exactly with "some CLVM strange behaviours"? Could you
>> be more specific?
>>
>> I'm not subscribed to linux-lvm, please keep us informed through this
>> list.
>>
>> Thanks in advance. Cheers,
>>
>> Rafael
>>
>> El lun, 27-07-2009 a las 21:02 +0200, brem belguebli escribió:
>> > Hi Rafael,
>> >
>> > It works fine, well, at least when not hitting some strange CLVM
>> > behaviours, which I'm able to replay by hand, so your script is
>> > all right.
>> >
>> > I'll post to linux-lvm what I could see.
>> >
>> > Brem
>> >
>> >
>> > 2009/7/21, brem belguebli <brem belguebli gmail com>:
>> >         Hola Rafael,
>> >
>> >         Thanks a lot, that'll save me starting from scratch.
>> >
>> >         I'll have a look at them and keep you updated.
>> >
>> >         Brem
>> >
>> >
>> >
>> >         2009/7/21, Rafael Micó Miranda <rmicmirregs gmail com>:
>> >                 Hi Brem,
>> >
>> >                 El mar, 21-07-2009 a las 16:40 +0200, brem belguebli
>> >                 escribió:
>> >                 > Hi,
>> >                 >
>> >                 > That's what I 'm trying to do.
>> >                 >
>> >                 > If you mean lvm.sh, well, I've been playing with it,
>> >                 but it does some
>> >                 "sanity" checks that are weird:
>> >                 >      1. It expects HA LVM to be set up (why such
>> >                 a check if we want to
>> >                 >         use CLVM?).
>> >                 >      2. It exits if it finds a CLVM VG (kind of
>> >                 funny!)
>> >                 >      3. It exits if the lvm.conf is newer
>> >                 than /boot/*.img (about this
>> >                 >         one, we tend to prevent the cluster from
>> >                 automatically
>> >                 >         starting ...)
>> >                 > I was looking for some doc on how to write my
>> >                 own resources, i.e. a
>> >                 > CLVM resource that checks if the vg is clustered, if
>> >                 so by which node
>> >                 > is it exclusively held, and if the node is down to
>> >                 activate
>> >                 > exclusively the VG.
>> >                 >
>> >                 > If you have some good links to provide me, that'll
>> >                 be great.
>> >                 >
>> >                 > Thanks
>> >                 >
>> >                 >
>> >                 > 2009/7/21, Christine Caulfield
>> >                 <ccaulfie redhat com>:
>> >                 >         On 07/21/2009 01:11 PM, brem belguebli
>> >                 wrote:
>> >                 >                 Hi,
>> >                 >                 When creating the VG by default
>> >                 clustered, you
>> >                 implicitly assume that
>> >                 >                 it will be used with a clustered FS
>> >                 on top of it (gfs,
>> >                 >                 ocfs, etc...)
>> >                 >                 that will handle the active/active
>> >                 mode.
>> >                 >                 As I do not intend to use GFS in
>> >                 this particular case,
>> >                 >                 but ext3 and raw
>> >                 >                 devices, I need to make sure the vg
>> >                 is exclusively
>> >                 >                 activated on one
>> >                 >                 node, preventing the other nodes from
>> >                 accessing it unless
>> >                 >                 it is the failover
>> >                 >                 procedure (node holding the VG
>> >                 crashed) and then re
>> >                 >                 activate it
>> >                 >                 exclusively on the failover node.
>> >                 >                 Thanks
>> >                 >
>> >                 >
>> >                 >         In that case you probably ought to be using
>> >                 rgmanager to do
>> >                 >         the failover for you. It has a script for
>> >                 doing exactly
>> >                 >         this :-)
>> >                 >
>> >                 >         Chrissie
>> >                 >
>> >                 >
>> >                 >
>> >                 >
>> >
>> >                 Please, check this link:
>> >
>> >
>> > https://www.redhat.com/archives/cluster-devel/2009-June/msg00020.html
>> >
>> >                 I found exactly the same problem as you, and I
>> >                 developed the
>> >                 "lvm-cluster.sh" script to solve the needs I had. You
>> >                 can find the
>> >                 script on the last message of the thread.
>> >
>> >                 I submitted it to make it part of the main project,
>> >                 but I have no news
>> >                 about that yet.
>> >
>> >                 I hope this helps.
>> >
>> >                 Cheers,
>> >
>> >                 Rafael
>> >
>> >                 --
>> >                 Rafael Micó Miranda
>> >
>> >
>> >
>> --
>> Rafael Micó Miranda
>>
>
>
>

--
Linux-cluster mailing list
Linux-cluster redhat com
https://www.redhat.com/mailman/listinfo/linux-cluster

