[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Linux-cluster] Resource Groups



On Wed, 2005-11-30 at 10:46 +0000, Ben Yarwood wrote:
> I can't find any documentation on adding resource groups to the cluster.conf
> file.
> Can anyone point me in the right direction please or give me an example of
> an NFS service.
> 
> Regards
> Ben Yarwood

Hi Ben,

Give me a day or three and I will write a real howto for rgmanager.  It
is simple, but not very intuitive (if that makes any sense...).

Basically, a typical NFS service looks like this:

  <service ... >
    <fs ... >
      <nfsexport>  <!-- no attributes -->
        <nfsclient target="*" name="World" options="ro"/>
      </nfsexport>
    </fs>
    <ip ... />
  </service>
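For reference, a filled-in version of the skeleton above might look like
the following.  The attribute values here (service name, device,
mountpoint, IP address, and so on) are purely illustrative -- check the
resource agent documentation shipped with your version of rgmanager for
the exact attributes it supports:

  <service name="nfs_svc" autostart="1">
    <fs name="nfsdata" device="/dev/sdb1" mountpoint="/mnt/nfsdata"
        fstype="ext3" force_unmount="1">
      <nfsexport name="exports">  <!-- no attributes needed beyond name -->
        <nfsclient target="*" name="World" options="ro"/>
      </nfsexport>
    </fs>
    <ip address="10.0.0.10" monitor_link="1"/>
  </service>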

The goal of this design is full active-active NFS - where you can have
as many NFS services as you want moving around the cluster at any time,
completely independently of one another.  Due to some kernel bugs, this
does not currently work correctly in all cases, unfortunately (hard
ones... believe me, we are working on them).

Tangent.  You can ignore this next part/example if you do not want to
experiment with largely untested resources...

Contrast this with the single-NFS service (nfsserver) in the head branch
of CVS - which generally works (it even issues SM_NOTIFY on the correct
IPs...), but has the limitation of allowing only *one* NFS server in the
entire cluster (*ouch*).

Example nfsserver implementation:

  <service ... >
    <fs ... >
      <nfsserver ... >
        <nfsclient ... />
        <nfsclient ... />
        <ip ... />
      </nfsserver>
    </fs>
  </service>

Ok, on to some general hints...

There is a pre-arranged start/stop ordering for certain resource types
when they appear as children of another resource.  For children of a
service, the order is:

start:
  fs         <!-- mount ext2, ext3, etc. -->
  clusterfs  <!-- mount gfs -->
  netfs      <!-- mount an outside NFS export -->
  ip         <!-- bring up an IP -->
  script     <!-- User-scripts -->

stop:
  ip         <!-- bring down an IP -->
  script     <!-- User-scripts -->
  netfs      <!-- umount an outside NFS export -->
  clusterfs  <!-- umount gfs (no-op unless force_unmount) -->
  fs         <!-- umount ext2, ext3, etc. -->
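Putting that together: in a flat service like the sketch below (attribute
values illustrative, not from a real config), the <fs> is mounted before
the <ip> is raised and the <script> is run last, regardless of the order
in which the resources appear in cluster.conf:

  <service name="example">
    <script name="app" file="/etc/init.d/myapp"/>  <!-- started third -->
    <ip address="10.0.0.11"/>                      <!-- started second -->
    <fs name="data" device="/dev/sdc1"
        mountpoint="/data" fstype="ext3"/>         <!-- mounted first -->
  </service>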

There is no guaranteed ordering within a resource type, even when that
type has a defined start/stop order (so five <fs> direct descendants of
<service> may start/stop in any order relative to each other), and there
are no ordering guarantees at all among resource types that have no
defined start/stop order.

Instead, if you need ordering beyond the <service> child guarantees, it
is better to nest resources as children.  Children of a resource are
always started before the next resource at the same level of the tree.
A common example of this is a sub-mount point: mount /a first, then
mount /a/b and /a/c.

Wrong:

  <service ... >
    <fs mountpoint="/a" ... />
    <fs mountpoint="/a/b" ... />
    <fs mountpoint="/a/c" ... />
  </service>

Correct:

  <service ... >
    <fs mountpoint="/a" ... >
      <fs mountpoint="/a/b" ... />
      <fs mountpoint="/a/c" ... />
    </fs>
  </service>

Similarly, some user applications *require* that the IP address be up at
the time the service starts and torn down *after* the application has
exited.

Wrong for this case (but correct for *most* applications!):

  <service ... >
    <fs mountpoint="/a" ... />
    <ip ... />
    <script ... />
  </service>

Correct:

  <service ... >
    <fs mountpoint="/a" ... />
    <ip ... >
      <script ... />
    </ip>
  </service>

This is kind of a "top-down" dependency for start, and a "bottom-up" for
stop.  In the above example, you may not stop the <ip> until the
<script> resource has successfully stopped, and you may not start the
<script> until after the <ip> has successfully started.

-- Lon

