[Linux-cluster] How to take down a CS/GFS setup with minimum downtime

Sævaldur Arnar Gunnarsson addi at hugsmidjan.is
Wed Nov 7 11:13:44 UTC 2007


Thanks for this, Lon. I'm down to the last two node members, and
according to cman_tool status I have two nodes, two votes and a
quorum of two.
--
Nodes: 2
Expected_votes: 5
Total_votes: 2
Quorum: 2   
--

One of those nodes has the GFS filesystems mounted.
If I issue cman_tool leave remove on the other node, will I run into
any problems (for example, with quorum) on the node where GFS is
mounted?
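
For reference, this is the sequence I have in mind; the status check
afterwards is just my assumption of how to verify the result:
--
# on the node that does NOT have GFS mounted:
cman_tool leave remove

# then, on the surviving node, check that expected votes and quorum
# were recalculated:
cman_tool status
--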



On Mon, 2007-10-29 at 10:56 -0400, Lon Hohberger wrote:

> That should do it, yes.  Leave remove is supposed to decrement the
> expected votes count (and with it the quorum), meaning you can go
> from 5 nodes down to 1 if done correctly.  You can verify that the
> expected votes count decreases with each removal using 'cman_tool
> status'.
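
(For reference, one removal round would presumably look like the
sketch below, assuming GFS and any other cluster services are already
stopped on the departing node; the mount point is only a placeholder:)
--
# on the departing node:
umount /mnt/gfs              # only if GFS is mounted here
cman_tool leave remove       # leave and decrement expected votes

# on any remaining node, after each removal:
cman_tool status             # Expected_votes should drop by one
--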
> 
> 
> If for some reason the above doesn't work, the alternative looks
> something like this:
>   * unmount the GFS volume + stop cluster on all nodes
>   * use gfs_tool to alter the lock proto to nolock
>   * mount on node 1.  copy out data.  unmount!
>   * mount on node 2.  copy out data.  unmount!
>   * ...
>   * mount on node 5.  copy out data.  unmount!
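
(If it comes to that, the lock-protocol switch would presumably look
something like this; the device and mount point below are only
placeholders:)
--
# with the filesystem unmounted on ALL nodes:
gfs_tool sb /dev/vg0/gfslv proto lock_nolock

# then, one node at a time:
mount -t gfs /dev/vg0/gfslv /mnt/gfs
cp -a /mnt/gfs/. /backup/$(hostname)/   # copy out the data
umount /mnt/gfs
--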
> 
> -- Lon
> 