On Thu, Jan 13, 2005 at 04:40:23PM +0300, Sergey wrote:
> >> I have 2 nodes, hp1 and hp2. Each node has Integrated Lights-Out
> >> with ROM Version: 1.55 - 04/16/2004.
> >
> > The nodes in the servers config line for gulm form a mini-cluster of
> > sorts. There must be quorum (51%) of nodes present in this
> > mini-cluster for things to continue.
>
> > You must have two of the three servers up and running so that the
> > mini-cluster has quorum, which then will allow the other nodes to
> > connect.
>
> I have only 2 nodes and I can't get quorum. Should I use Single Lock
> Manager (SLM), where one node is master and the other is slave?
>
> But in that case, if the master goes down the slave loses access to
> the common file system, and its system log looks like this:

Correct. That is the behavior of gulm in SLM mode.

[snip]

> If the master boots up after some time, nothing happens - the slave
> does not try to reconnect.

Again correct. In SLM mode the lock state was lost, so there is nothing
for the slave to reconnect to.

For gulm, you need at least three nodes to get RLM mode. The third gulm
node does not need to run anything but gulm, and can be configured from
a file using an option to ccsd. You just need to make sure the configs
are the same on all three nodes.

> What should happen next, and in what order?
>
> > You really should test that fencing works by running
> > fence_node <node name> for each node in your cluster before running
> > lock_gulmd. This makes sure that fencing is set up and working
> > correctly.
>
> > Do that, and once you've verified that fencing is correct (without
> > lock_gulmd running) try things again with lock_gulmd.
>
> The result of the command
> fence_node NODENAME
> is a reboot of NODENAME. Is that right?

If you are using a fencing agent that power cycles the node. (So,
sometimes yes. fence_ilo will reboot the node.)

-- 
Michael Conrad Tadpol Tilstra
IIss llooccaall eecchhoo oonn??
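
P.S. For reference, a three-server gulm stanza might look something like
the sketch below. This assumes the GFS 6.0-era cluster.ccs syntax; "hp3"
is a hypothetical third, lock-server-only node added alongside the hp1
and hp2 mentioned in the thread, and the cluster name is made up. Check
the GFS docs for the exact format on your version; the same file must be
identical on all three nodes.

    cluster {
        name = "alpha"            # hypothetical cluster name
        lock_gulm {
            # Three servers so the gulm mini-cluster can keep quorum
            # (2 of 3) when any one node is down, enabling RLM mode.
            servers = ["hp1", "hp2", "hp3"]
        }
    }
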