The recent change to multipath/main.c:main() to include an inner loop which "calls both devt2devname() and devinfo() for each sysfs discovered path" inside an outer loop which runs once for each currently configured multipath mapped device in the kernel has turned multipath(8) into both a CPU and memory monster. top(1) shows that a simple "multipath -v0 -l" command has been running for over 20 minutes of CPU time over a 25-minute span and it has not yet finished!

This is only after fixing a bug in devt2devname() where the sysfs directory wasn't being closed before returning early once the specified dev_t had been matched with a sysfs entry. Simply adding a call to sysfs_close_directory() before the early return prevented the huge memory consumption that was keeping anything else from happening on the machine (first sketch below). I found another, much smaller memory leak in multipath/devinfo.c:apply_format() and in the calls to apply_format() from multipath/devinfo.c:devinfo() (second sketch below). I've enclosed a patch for devinfo.c with both of these leaks fixed.

<<devinfo.c.patch>>

It looks like this change is meant to let multipath retrieve very recent path health status before comparing it with the path status from the kernel dm map. This comes at quite a cost, though. My host has 128 LUs with 2 paths each and 32 LUs with 4 paths each, for an aggregate of 160 LUs and 384 paths (128 * 2 + 32 * 4). Each call to devinfo() does 2 fork/exec pairs, so the command performs 160 * 384 * 2 = 122,880 fork/exec pairs before it completes. The numbers become astronomical for a host with 4K LUs of 4 paths each: 4096 * 16384 * 2 = 134,217,728 fork/exec pairs. I suspect that the fork/exec overhead is the major cause of the time delay, since time(1) for the command invocation shows slightly more system time than user time. Conditionally compiling out the inner loop (third sketch below) enables the "multipath -v0 -l" command to complete in 2.3 seconds.
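For reference, here is a minimal sketch of the devt2devname() leak pattern and the fix, assuming the libsysfs 1.x directory API (sysfs_open_directory(), sysfs_read_directory(), sysfs_close_directory()). devt_matches() is a hypothetical helper standing in for the real dev_t comparison, and the surrounding logic is simplified:

#include <string.h>
#include <sysfs/libsysfs.h>
#include <sysfs/dlist.h>

/* Hypothetical helper, standing in for the real attribute lookup. */
static int devt_matches(struct sysfs_directory *dev, const char *devt);

/*
 * Sketch only, not the actual multipath source.  The shape of the bug:
 * sysfs_read_directory() allocates the whole /sys/block tree, and the
 * early return on a match skipped sysfs_close_directory(), leaking the
 * tree on every successful lookup.
 */
static int devt2devname(char *devname, const char *devt)
{
	struct sysfs_directory *sdir;
	struct sysfs_directory *dev;

	sdir = sysfs_open_directory("/sys/block");
	if (!sdir)
		return 1;

	if (sysfs_read_directory(sdir)) {
		sysfs_close_directory(sdir);
		return 1;
	}

	dlist_for_each_data(sdir->subdirs, dev, struct sysfs_directory) {
		if (devt_matches(dev, devt)) {
			strcpy(devname, dev->name);
			sysfs_close_directory(sdir);	/* the fix */
			return 0;
		}
	}

	sysfs_close_directory(sdir);
	return 1;
}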
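The apply_format() problem is of the more ordinary kind. Roughly speaking (the names and signatures below are illustrative assumptions, not the actual devinfo.c code), the string that apply_format() allocates has to be freed on every path out of the caller, including the error path:

#include <stdlib.h>

/* Assumed signatures, for illustration only. */
extern char *apply_format(const char *fmt, const char *devname);
extern int execute_program(char *cmd, char *value, int len);

static int run_callout(const char *fmt, const char *devname)
{
	char buf[256];
	char *cmd;

	cmd = apply_format(fmt, devname);	/* returns a malloc'ed string */
	if (!cmd)
		return 1;

	if (execute_program(cmd, buf, sizeof(buf))) {
		free(cmd);	/* previously leaked on this error path */
		return 1;
	}

	free(cmd);
	return 0;
}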
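To make the cost structure concrete, the nesting looks roughly like this (a schematic of the shape only, not the actual main.c; struct path, num_maps, num_paths, and the PER_MAP_PATH_CHECK switch are stand-ins):

struct path {			/* stand-in for the real struct path */
	char dev[64];
	char devt[16];
};

extern int devt2devname(char *devname, const char *devt);
extern int devinfo(struct path *pp);

/*
 * With M maps and P sysfs paths, the inner loop body runs M * P times,
 * and each devinfo() call does 2 fork/exec pairs, for M * P * 2 total.
 * On my host that is 160 * 384 * 2 = 122,880.
 */
static void check_all_maps(struct path *paths, int num_paths, int num_maps)
{
	int i, j;

	for (i = 0; i < num_maps; i++) {	/* each kernel dm map */
#ifdef PER_MAP_PATH_CHECK			/* assumed switch */
		for (j = 0; j < num_paths; j++) {	/* each sysfs path */
			devt2devname(paths[j].dev, paths[j].devt);
			devinfo(&paths[j]);	/* 2 fork/execs per call */
		}
#endif
		/* ... compare the fresh path health with the map's
		 * kernel path status here ... */
	}
}

Since the path set does not change from one map to the next, hoisting the per-path devinfo() work out of the map loop, or caching its result keyed by dev_t, would presumably cut this to P * 2 fork/execs per invocation.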