[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

[Linux-cluster] RHEL5 CLVMD hang



Hi,

We have a 20-node cluster running clvmd. It was working fine, but now I
can't interact with clvmd at all. The problem apparently started after a node
reboot; I've since rebooted that node and two others, but the problem remains.

During boot the affected machines hang at the clvmd init script, and running
ps fax shows that they are stuck in vgscan.

Machines xen1, xen3 and xen18 are all hung. xen18 was the first one
rebooted, and the problem started right after that.

[root xen2 ~]# cman_tool services
type             level name     id       state
fence            0     default  00010001 none
[2 4 5 6 7 8 9 10 11 12 13 14 15 17 18 19 20]
dlm              1     clvmd    00010002 none
[2 4 5 6 7 8 9 10 11 12 13 14 15 17 18 19 20]
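Incidentally, the node IDs missing from both the fence and dlm member lists
above are 1, 3 and 16, which match the three offline machines in the clustat
output further down. A quick sketch to cross-check (IDs copied verbatim from
the output above):

```python
# Cross-check: which configured node IDs are absent from the cman groups?
all_ids = set(range(1, 21))  # 20 nodes, IDs 1..20
group = {2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20}

missing = sorted(all_ids - group)
print(missing)  # -> [1, 3, 16]  (xen1, xen3, xen18 per the clustat IDs)
```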

cluster.conf:
attached

uname:
Linux xen9.dc.test.pt 2.6.18-8.1.3.el5xen #1 SMP Mon Apr 30 20:26:10 EDT 2007 
x86_64 x86_64 x86_64 GNU/Linux

cman-2.0.60-1.el5
lvm2-cluster-2.02.16-3.el5

lvm.conf:
attached

[root xen9 ~]# clustat
Member Status: Quorate

  Member Name                        ID   Status
  ------ ----                        ---- ------
  xen1.dc.test.pt                      1 Offline
  xen2.dc.test.pt                      2 Online
  xen3.dc.test.pt                      3 Offline
  xen4.dc.test.pt                      4 Online
  xen5.dc.test.pt                      5 Online
  xen6.dc.test.pt                      6 Online
  xen7.dc.test.pt                      7 Online
  xen8.dc.test.pt                      8 Online
  xen9.dc.test.pt                      9 Online, Local
  xen10.dc.test.pt                    10 Online
  xen11.dc.test.pt                    11 Online
  xen12.dc.test.pt                    12 Online
  xen13.dc.test.pt                    13 Online
  xen14.dc.test.pt                    14 Online
  xen17.dc.test.pt                    15 Online
  xen18.dc.test.pt                    16 Offline
  xen19.dc.test.pt                    17 Online
  xen20.dc.test.pt                    18 Online
  xen21.dc.test.pt                    19 Online
  xen22.dc.test.pt                    20 Online

I've also attached the output of ps fax on xen2.

Any ideas?
Thanks
Nuno Fernandes
<?xml version="1.0"?>
<cluster config_version="3" name="clt_test">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="xen1.dc.test.pt" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen2.dc.test.pt" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen3.dc.test.pt" nodeid="3" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen3"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen4.dc.test.pt" nodeid="4" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen4"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen5.dc.test.pt" nodeid="5" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen5"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen6.dc.test.pt" nodeid="6" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen6"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen7.dc.test.pt" nodeid="7" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen7"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen8.dc.test.pt" nodeid="8" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen8"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen9.dc.test.pt" nodeid="9" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen9"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen10.dc.test.pt" nodeid="10" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen10"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen11.dc.test.pt" nodeid="11" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen11"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen12.dc.test.pt" nodeid="12" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen12"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen13.dc.test.pt" nodeid="13" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen13"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen14.dc.test.pt" nodeid="14" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen14"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen17.dc.test.pt" nodeid="15" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen17"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen18.dc.test.pt" nodeid="16" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen18"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen19.dc.test.pt" nodeid="17" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen19"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen20.dc.test.pt" nodeid="18" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen20"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen21.dc.test.pt" nodeid="19" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen21"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen22.dc.test.pt" nodeid="20" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="xen22"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman/>
        <fencedevices>
                <fencedevice agent="fence_ilo" hostname="xen1_ilo" login="fenceuser" name="xen1" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen2_ilo" login="fenceuser" name="xen2" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen3_ilo" login="fenceuser" name="xen3" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen4_ilo" login="fenceuser" name="xen4" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen5_ilo" login="fenceuser" name="xen5" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen6_ilo" login="fenceuser" name="xen6" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen7_ilo" login="fenceuser" name="xen7" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen8_ilo" login="fenceuser" name="xen8" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen9_ilo" login="fenceuser" name="xen9" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen10_ilo" login="fenceuser" name="xen10" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen11_ilo" login="fenceuser" name="xen11" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen12_ilo" login="fenceuser" name="xen12" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen13_ilo" login="fenceuser" name="xen13" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen14_ilo" login="fenceuser" name="xen14" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen17_ilo" login="fenceuser" name="xen17" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen18_ilo" login="fenceuser" name="xen18" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen19_ilo" login="fenceuser" name="xen19" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen20_ilo" login="fenceuser" name="xen20" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen21_ilo" login="fenceuser" name="xen21" passwd="fencepass"/>
                <fencedevice agent="fence_ilo" hostname="xen22_ilo" login="fenceuser" name="xen22" passwd="fencepass"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>
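As a structural sanity check on a cluster.conf like the one above, every
fence <device name=.../> referenced under a node should have a matching
<fencedevice name=.../> entry. A minimal sketch, trimmed to two nodes for
brevity (the parsing is identical for all twenty):

```python
import xml.etree.ElementTree as ET

# Trimmed two-node sample with the same structure as the attached cluster.conf.
CLUSTER_CONF = """<?xml version="1.0"?>
<cluster config_version="3" name="clt_test">
  <clusternodes>
    <clusternode name="xen1.dc.test.pt" nodeid="1" votes="1">
      <fence><method name="1"><device name="xen1"/></method></fence>
    </clusternode>
    <clusternode name="xen2.dc.test.pt" nodeid="2" votes="1">
      <fence><method name="1"><device name="xen2"/></method></fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ilo" hostname="xen1_ilo" name="xen1"/>
    <fencedevice agent="fence_ilo" hostname="xen2_ilo" name="xen2"/>
  </fencedevices>
</cluster>"""

def unresolved_fence_refs(conf_xml):
    """Return fence device names referenced by nodes but never declared."""
    root = ET.fromstring(conf_xml)
    declared = {fd.get("name") for fd in root.iter("fencedevice")}
    referenced = {dev.get("name")
                  for node in root.iter("clusternode")
                  for dev in node.iter("device")}
    return referenced - declared

print(unresolved_fence_refs(CLUSTER_CONF))  # -> set() (all refs resolve)
```

This only validates the file's internal consistency, not whether the iLO
interfaces are actually reachable.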
# This is an example configuration file for the LVM2 system.
# It contains the default settings that would be used if there was no
# /etc/lvm/lvm.conf file.
#
# Refer to 'man lvm.conf' for further information including the file layout.
#
# To put this file in a different directory and override /etc/lvm set
# the environment variable LVM_SYSTEM_DIR before running the tools.


# This section allows you to configure which block devices should
# be used by the LVM system.
devices {

    # Where do you want your volume groups to appear ?
    dir = "/dev"

    # An array of directories that contain the device nodes you wish
    # to use with LVM2.
    scan = [ "/dev/mapper" ]

    # A filter that tells LVM2 to only use a restricted set of devices.
    # The filter consists of an array of regular expressions.  These
    # expressions can be delimited by a character of your choice, and
    # prefixed with either an 'a' (for accept) or 'r' (for reject).
    # The first expression found to match a device name determines if
    # the device will be accepted or rejected (ignored).  Devices that
    # don't match any patterns are accepted.

    # Be careful if there are symbolic links or multiple filesystem
    # entries for the same device as each name is checked separately against
    # the list of patterns.  The effect is that if any name matches any 'a'
    # pattern, the device is accepted; otherwise if any name matches any 'r'
    # pattern it is rejected; otherwise it is accepted.

    # Don't have more than one filter line active at once: only one gets used.

    # Run vgscan after you change this parameter to ensure that
    # the cache file gets regenerated (see below).
    # If it doesn't do what you expect, check the output of 'vgscan -vvvv'.


    # By default we accept every block device:
    filter = [ "r/sd.*/", "r/disk/", "a/.*/" ]

    # Exclude the cdrom drive
    # filter = [ "r|/dev/cdrom|" ]

    # When testing I like to work with just loopback devices:
    # filter = [ "r/sd.*/", "a/.*/" ]

    # Or maybe all loops and ide drives except hdc:
    # filter =[ "a|loop|", "r|/dev/hdc|", "a|/dev/ide|", "r|.*|" ]

    # Use anchors if you want to be really specific
    # filter = [ "a|^/dev/hda8$|", "r/.*/" ]

    # The results of the filtering are cached on disk to avoid
    # rescanning dud devices (which can take a very long time).  By
    # default this cache file is hidden in the /etc/lvm directory.
    # It is safe to delete this file: the tools regenerate it.
    cache = "/etc/lvm/.cache"

    # You can turn off writing this cache file by setting this to 0.
    write_cache_state = 1

    # Advanced settings.

    # List of pairs of additional acceptable block device types found
    # in /proc/devices with maximum (non-zero) number of partitions.
    # types = [ "fd", 16 ]

    # If sysfs is mounted (2.6 kernels) restrict device scanning to
    # the block devices it believes are valid.
    # 1 enables; 0 disables.
    sysfs_scan = 1

    # By default, LVM2 will ignore devices used as components of
    # software RAID (md) devices by looking for md superblocks.
    # 1 enables; 0 disables.
    md_component_detection = 0
}
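The first-match filter behaviour described in the comments above can be
sketched like this, using the active filter line from this file (re.search
is an approximation of LVM's unanchored matching, per the "use anchors"
note):

```python
import re

# The active filter line from the devices{} section above.
FILTER = [("r", "sd.*"), ("r", "disk"), ("a", ".*")]

def device_accepted(name, patterns=FILTER):
    """First pattern that matches decides; no match at all means accept."""
    for action, pat in patterns:
        if re.search(pat, name):
            return action == "a"
    return True

print(device_accepted("/dev/sda"))            # -> False (rejected by r/sd.*/)
print(device_accepted("/dev/mapper/mpath0"))  # -> True  (falls through to a/.*/)
```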

# This section allows you to configure the nature of the information
# that LVM2 reports.
log {

    # Controls the messages sent to stdout or stderr.
    # There are three levels of verbosity, 3 being the most verbose.
    verbose = 0

    # Should we send log messages through syslog?
    # 1 is yes; 0 is no.
    syslog = 1

    # Should we log error and debug messages to a file?
    # By default there is no log file.
    #file = "/var/log/lvm2.log"

    # Should we overwrite the log file each time the program is run?
    # By default we append.
    overwrite = 0

    # What level of log messages should we send to the log file and/or syslog?
    # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.
    # 7 is the most verbose (LOG_DEBUG).
    level = 0

    # Format of output messages
    # Whether or not (1 or 0) to indent messages according to their severity
    indent = 1

    # Whether or not (1 or 0) to display the command name on each line output
    command_names = 0

    # A prefix to use before the message text (but after the command name,
    # if selected).  Default is two spaces, so you can see/grep the severity
    # of each message.
    prefix = "  "

    # To make the messages look similar to the original LVM tools use:
    #   indent = 0
    #   command_names = 1
    #   prefix = " -- "

    # Set this if you want log messages during activation.
    # Don't use this in low memory situations (can deadlock).
    # activation = 0
}

# Configuration of metadata backups and archiving.  In LVM2 when we
# talk about a 'backup' we mean making a copy of the metadata for the
# *current* system.  The 'archive' contains old metadata configurations.
# Backups are stored in a human readable text format.
backup {

    # Should we maintain a backup of the current metadata configuration ?
    # Use 1 for Yes; 0 for No.
    # Think very hard before turning this off!
    backup = 1

    # Where shall we keep it ?
    # Remember to back up this directory regularly!
    backup_dir = "/etc/lvm/backup"

    # Should we maintain an archive of old metadata configurations.
    # Use 1 for Yes; 0 for No.
    # On by default.  Think very hard before turning this off.
    archive = 1

    # Where should archived files go ?
    # Remember to back up this directory regularly!
    archive_dir = "/etc/lvm/archive"

    # What is the minimum number of archive files you wish to keep ?
    retain_min = 10

    # What is the minimum time you wish to keep an archive file for ?
    retain_days = 30
}

# Settings for running LVM2 in shell (readline) mode.
shell {

    # Number of lines of history to store in ~/.lvm_history
    history_size = 100
}


# Miscellaneous global LVM2 settings
global {
    library_dir = "/usr/lib64"

    # The file creation mask for any files and directories created.
    # Interpreted as octal if the first digit is zero.
    umask = 077

    # Allow other users to read the files
    #umask = 022

    # Enabling test mode means that no changes to the on disk metadata
    # will be made.  Equivalent to having the -t option on every
    # command.  Defaults to off.
    test = 0

    # Whether or not to communicate with the kernel device-mapper.
    # Set to 0 if you want to use the tools to manipulate LVM metadata
    # without activating any logical volumes.
    # If the device-mapper kernel driver is not present in your kernel
    # setting this to 0 should suppress the error messages.
    activation = 1

    # If we can't communicate with device-mapper, should we try running
    # the LVM1 tools?
    # This option only applies to 2.4 kernels and is provided to help you
    # switch between device-mapper kernels and LVM1 kernels.
    # The LVM1 tools need to be installed with .lvm1 suffices
    # e.g. vgscan.lvm1 and they will stop working after you start using
    # the new lvm2 on-disk metadata format.
    # The default value is set when the tools are built.
    # fallback_to_lvm1 = 0

    # The default metadata format that commands should use - "lvm1" or "lvm2".
    # The command line override is -M1 or -M2.
    # Defaults to "lvm1" if compiled in, else "lvm2".
    # format = "lvm1"

    # Location of proc filesystem
    proc = "/proc"

    # Type of locking to use. Defaults to local file-based locking (1).
    # Turn locking off by setting to 0 (dangerous: risks metadata corruption
    # if LVM2 commands get run concurrently).
    # Type 2 uses the external shared library locking_library.
    # Type 3 uses built-in clustered locking.
    locking_type = 3

    # If using external locking (type 2) and initialisation fails,
    # with this set to 1 an attempt will be made to use the built-in
    # clustered locking.
    # If you are using a customised locking_library you should set this to 0.
    fallback_to_clustered_locking = 1

    # If an attempt to initialise type 2 or type 3 locking failed, perhaps
    # because cluster components such as clvmd are not running, with this set
    # to 1 an attempt will be made to use local file-based locking (type 1).
    # If this succeeds, only commands against local volume groups will proceed.
    # Volume Groups marked as clustered will be ignored.
    fallback_to_local_locking = 1

    # Local non-LV directory that holds file-based locks while commands are
    # in progress.  A directory like /tmp that may get wiped on reboot is OK.
    locking_dir = "/var/lock/lvm"

    # Other entries can go here to allow you to load shared libraries
    # e.g. if support for LVM1 metadata was compiled as a shared library use
    #   format_libraries = "liblvm2format1.so"
    # Full pathnames can be given.

    # Search this directory first for shared libraries.
    #   library_dir = "/lib"

    # The external locking library to load if locking_type is set to 2.
    #   locking_library = "liblvm2clusterlock.so"
}

activation {
    # Device used in place of missing stripes if activating incomplete volume.
    # For now, you need to set this up yourself first (e.g. with 'dmsetup')
    # For example, you could make it return I/O errors using the 'error'
    # target or make it return zeros.
    missing_stripe_filler = "/dev/ioerror"

    # How much stack (in KB) to reserve for use while devices suspended
    reserved_stack = 256

    # How much memory (in KB) to reserve for use while devices suspended
    reserved_memory = 8192

    # Nice value used while devices suspended
    process_priority = -18

    # If volume_list is defined, each LV is only activated if there is a
    # match against the list.
    #   "vgname" and "vgname/lvname" are matched exactly.
    #   "@tag" matches any tag set in the LV or VG.
    #   "@*" matches if any tag defined on the host is also set in the LV or VG
    #
    # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]

    # Size (in KB) of each copy operation when mirroring
    mirror_region_size = 512

    # 'mirror_image_fault_policy' and 'mirror_log_fault_policy' define
    # how a device failure affecting a mirror is handled.
    # A mirror is composed of mirror images (copies) and a log.
    # A disk log ensures that a mirror does not need to be re-synced
    # (all copies made the same) every time a machine reboots or crashes.
    #
    # In the event of a failure, the specified policy will be used to
    # determine what happens:
    #
    # "remove" - Simply remove the faulty device and run without it.  If
    #            the log device fails, the mirror would convert to using
    #            an in-memory log.  This means the mirror will not
    #            remember its sync status across crashes/reboots and
    #            the entire mirror will be re-synced.  If a
    #            mirror image fails, the mirror will convert to a
    #            non-mirrored device if there is only one remaining good
    #            copy.
    #
    # "allocate" - Remove the faulty device and try to allocate space on
    #            a new device to be a replacement for the failed device.
    #            Using this policy for the log is fast and maintains the
    #            ability to remember sync state through crashes/reboots.
    #            Using this policy for a mirror device is slow, as it
    #            requires the mirror to resynchronize the devices, but it
    #            will preserve the mirror characteristic of the device.
    #            This policy acts like "remove" if no suitable device and
    #            space can be allocated for the replacement.
    #            Currently this is not implemented properly and behaves
    #            similarly to:
    #
    # "allocate_anywhere" - Operates like "allocate", but it does not
    #            require that the new space being allocated be on a
    #            device that is not part of the mirror.  For a log device
    #            failure, this could mean that the log is allocated on
    #            the same device as a mirror device.  For a mirror
    #            device, this could mean that the mirror device is
    #            allocated on the same device as another mirror device.
    #            This policy would not be wise for mirror devices
    #            because it would break the redundant nature of the
    #            mirror.  This policy acts like "remove" if no suitable
    #            device and space can be allocated for the replacement.

    mirror_log_fault_policy = "allocate"
    mirror_device_fault_policy = "remove"
}


####################
# Advanced section #
####################

# Metadata settings
#
# metadata {
    # Default number of copies of metadata to hold on each PV.  0, 1 or 2.
    # You might want to override it from the command line with 0
    # when running pvcreate on new PVs which are to be added to large VGs.

    # pvmetadatacopies = 1

    # Approximate default size of on-disk metadata areas in sectors.
    # You should increase this if you have large volume groups or
    # you want to retain a large on-disk history of your metadata changes.

    # pvmetadatasize = 255

    # List of directories holding live copies of text format metadata.
    # These directories must not be on logical volumes!
    # It's possible to use LVM2 with a couple of directories here,
    # preferably on different (non-LV) filesystems, and with no other
    # on-disk metadata (pvmetadatacopies = 0). Or this can be in
    # addition to on-disk metadata areas.
    # The feature was originally added to simplify testing and is not
    # supported under low memory situations - the machine could lock up.
    #
    # Never edit any files in these directories by hand unless you
    # are absolutely sure you know what you are doing! Use
    # the supplied toolset to make changes (e.g. vgcfgrestore).

    # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
#}

# Event daemon
#
# dmeventd {
    # mirror_library is the library used when monitoring a mirror device.
    #
    # "libdevmapper-event-lvm2mirror.so" attempts to recover from failures.
    # It removes failed devices from a volume group and reconfigures a
    # mirror as necessary.
    #
    # mirror_library = "libdevmapper-event-lvm2mirror.so"
#}
[root xen2 ~]# ps fax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:02 init [3]
    2 ?        S      0:46 [migration/0]
    3 ?        SN     0:01 [ksoftirqd/0]
    4 ?        S      0:00 [watchdog/0]
    5 ?        S      1:37 [migration/1]
    6 ?        SN     0:03 [ksoftirqd/1]
    7 ?        S      0:00 [watchdog/1]
    8 ?        S      1:27 [migration/2]
    9 ?        SN     3:45 [ksoftirqd/2]
   10 ?        S      0:00 [watchdog/2]
   11 ?        S      1:24 [migration/3]
   12 ?        SN     0:19 [ksoftirqd/3]
   13 ?        S      0:00 [watchdog/3]
   14 ?        S<     0:04 [events/0]
   15 ?        S<     0:05 [events/1]
   16 ?        S<     0:05 [events/2]
   17 ?        S<     0:10 [events/3]
   18 ?        S<     0:00 [khelper]
   19 ?        S<     0:00 [kthread]
   21 ?        S<     0:00  \_ [xenwatch]
   22 ?        S<     0:00  \_ [xenbus]
   27 ?        S<     0:00  \_ [kblockd/0]
   28 ?        S<     0:00  \_ [kblockd/1]
   29 ?        S<     0:00  \_ [kblockd/2]
   30 ?        S<     0:00  \_ [kblockd/3]
   31 ?        S<     0:00  \_ [kacpid]
  165 ?        S<     0:00  \_ [cqueue/0]
  166 ?        S<     0:00  \_ [cqueue/1]
  167 ?        S<     0:00  \_ [cqueue/2]
  168 ?        S<     0:00  \_ [cqueue/3]
  172 ?        S<     0:00  \_ [khubd]
  174 ?        S<     0:00  \_ [kseriod]
  251 ?        S<    21:24  \_ [kswapd0]
  252 ?        S<     0:00  \_ [aio/0]
  253 ?        S<     0:00  \_ [aio/1]
  254 ?        S<     0:00  \_ [aio/2]
  255 ?        S<     0:00  \_ [aio/3]
  389 ?        S<     0:00  \_ [kpsmoused]
  445 ?        S<     0:00  \_ [kmirrord]
  456 ?        S<     0:00  \_ [ksnapd]
  457 ?        D<     4:54  \_ [kjournald]
  484 ?        S<     0:00  \_ [kauditd]
  743 ?        S<     0:00  \_ [scsi_eh_0]
  858 ?        S<     0:02  \_ [kedac]
 1401 ?        S<     0:00  \_ [qla2xxx_0_dpc]
 1402 ?        S<     0:00  \_ [scsi_wq_0]
 1403 ?        S<     0:00  \_ [fc_wq_0]
 1404 ?        S<     0:00  \_ [fc_dl_0]
 1405 ?        S<     0:00  \_ [scsi_eh_1]
 1414 ?        S<     0:00  \_ [qla2xxx_1_dpc]
 1415 ?        S<     0:00  \_ [scsi_wq_1]
 1416 ?        S<     0:00  \_ [fc_wq_1]
 1417 ?        S<     0:00  \_ [fc_dl_1]
 3005 ?        S<     0:00  \_ [kmpathd/0]
 3006 ?        S<     0:00  \_ [kmpathd/1]
 3007 ?        S<     0:00  \_ [kmpathd/2]
 3008 ?        S<     0:00  \_ [kmpathd/3]
 3327 ?        S<     0:00  \_ [kjournald]
13683 ?        S<     3:44  \_ [xvd 12 07:00]
21866 ?        S<     0:10  \_ [xvd 13 fd:59]
21867 ?        S<     0:00  \_ [xvd 13 fd:5a]
21868 ?        S<     0:57  \_ [xvd 13 fd:58]
18314 ?        S<     1:46  \_ [xvd 12 fd:76]
28672 ?        S<     0:00  \_ [user_dlm]
28682 ?        S<     0:32  \_ [o2net]
28900 ?        S<     0:04  \_ [o2hb-E299E3B2E8]
28924 ?        S<     0:00  \_ [ocfs2_wq]
28925 ?        S<     0:00  \_ [ocfs2vote-0]
28926 ?        S<     0:00  \_ [dlm_thread]
28927 ?        S<     0:00  \_ [dlm_reco_thread]
28928 ?        S<     0:00  \_ [dlm_wq]
28929 ?        S<     0:00  \_ [kjournald]
28930 ?        S<     0:00  \_ [ocfs2cmt-0]
31798 ?        S<     0:00  \_ [dlm_astd]
31799 ?        S<     0:00  \_ [dlm_scand]
31800 ?        S<     0:00  \_ [dlm_recvd]
31801 ?        S<     0:00  \_ [dlm_sendd]
31802 ?        S<     0:00  \_ [dlm_recoverd]
26600 ?        S<     0:07  \_ [xvd 17 fd:7b]
26601 ?        S<     0:17  \_ [xvd 17 fd:7f]
26886 ?        S<     0:00  \_ [xvd 17 fd:7c]
17408 ?        S      0:00  \_ [pdflush]
19041 ?        S      0:00  \_ [pdflush]
 4456 ?        S<sl   0:02 auditd
 4458 ?        S<s    0:02  \_ python /sbin/audispd
 4617 ?        Ds     0:26 syslogd -m 0
 4687 ?        Ss     0:00 klogd -x
 4753 ?        Ss     0:59 irqbalance
 4839 ?        Ss     0:00 portmap
 4897 ?        Ss     0:00 rpc.statd
 5050 ?        Ss     0:01 rpc.idmapd
 5090 ?        Ss     0:00 /usr/sbin/sshd
32483 ?        Ss     0:00  \_ sshd: root pts/3
32485 pts/3    Ss     0:00      \_ -bash
 1852 pts/3    R+     0:00          \_ ps fax
 5176 ?        Ssl   19:11 /sbin/ccsd
 5182 ?        SLl   12:13 aisexec
 5190 ?        Ss     0:00 /sbin/groupd
 5198 ?        Ss     0:00 /sbin/fenced
 5204 ?        Ss     0:00 /sbin/dlm_controld
 5210 ?        Ss     0:00 /sbin/gfs_controld
 5957 ?        Ss     0:00 dbus-daemon --system
 6539 ?        Ssl    1:15 pcscd
 6565 ?        Ssl    0:08 automount
 6647 ?        Ss     0:03 sendmail: accepting connections
 6655 ?        Ss     0:00 sendmail: Queue runner 01:00:00 for /var/spool/clientmqueue
 6670 ?        Ss     0:00 gpm -m /dev/input/mice -t exps2
 6713 ?        Ss     0:00 xfs -droppriv -daemon
 6742 ?        Ss     0:00 /usr/sbin/atd
 7015 ?        S    1119:33 xenstored --pid-file /var/run/xenstore.pid
 7020 ?        S      0:00 python /usr/sbin/xend start
 7021 ?        SLl  393:12  \_ python /usr/sbin/xend start
 7023 ?        Sl     0:00 xenconsoled
 7025 ?        Ssl    0:04 blktapctrl
 7072 ?        S<sl 549:17 modclusterd
 7167 ?        Ss     0:00 /usr/sbin/oddjobd -p /var/run/oddjobd.pid -t 300
 7218 ?        Ss     0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam
 7219 ?        S      0:00  \_ /usr/sbin/saslauthd -m /var/run/saslauthd -a pam
 7220 ?        S      0:00  \_ /usr/sbin/saslauthd -m /var/run/saslauthd -a pam
 7221 ?        S      0:00  \_ /usr/sbin/saslauthd -m /var/run/saslauthd -a pam
 7222 ?        S      0:00  \_ /usr/sbin/saslauthd -m /var/run/saslauthd -a pam
 7230 ?        S<s    0:06 ricci -u 100
 7234 tty1     Ss+    0:00 /sbin/mingetty tty1
 7235 tty2     Ss+    0:00 /sbin/mingetty tty2
 7236 tty3     Ss+    0:00 /sbin/mingetty tty3
 7239 tty4     Ss+    0:00 /sbin/mingetty tty4
 7244 tty5     Ss+    0:00 /sbin/mingetty tty5
 7270 tty6     Ss+    0:00 /sbin/mingetty tty6
 7271 ?        Ss     0:00 /bin/sh /usr/local/bin/svscanboot
 7307 ?        S      0:03  \_ svscan /service
 7309 ?        S      0:00      \_ supervise nrpe
 7317 ?        S      0:09      |   \_ tcpserver -l0 -v -H -R -x /etc/tcprules.d/nrpe.cdb -u nagios -g nagios 0 5666 /usr/local/bin/setuidgid nagios /etc/nrp
 7311 ?        S      0:00      \_ supervise log
 7314 ?        S      0:00      |   \_ cat -
20735 ?        S      0:00      \_ supervise log
20736 ?        S      0:00      |   \_ cat -
29308 ?        S      0:00      \_ supervise xvm
 8629 ?        S      0:00      |   \_ /bin/sh ./run
 8630 ?        S    740:18      |       \_ /usr/bin/perl /usr/sbin/xvmd --auto-start -D ou=XenHosts,dc=dc,dc=test,dc=pt -w testing
22320 ?        S      0:00      \_ supervise xmlpulse
22322 ?        S      0:00      |   \_ tcpserver -l0 -v -H -R -x /etc/tcprules.d/nrpe.cdb 0 5667 /usr/local/sbin/xm-xmlpulse
22321 ?        S      0:00      \_ supervise log
22323 ?        S      0:00          \_ cat -
11018 ?        Ss     0:02 crond
24722 ?        S<     0:00 [krfcommd]
 4684 ?        S<s    0:00 /sbin/udevd -d
13602 ?        S<    25:45 [loop0]
19354 ?        SLl   46:35 /sbin/multipathd
31797 ?        Ssl    0:00 clvmd -T40
24499 ?        SLs    0:01 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
27117 ?        S      0:03 /usr/sbin/snmpd -Lsd -Lf /dev/null -p /var/run/snmpd -a
  492 pts/3    S      0:00 vgdisplay -C

