[dm-devel] path priority group and path state

goggin, edward egoggin at emc.com
Thu Feb 10 16:48:03 UTC 2005


The multipath utility relies on at least one block read/write I/O
being serviced through a multipath mapped device before it will
show one of the path priority groups in an active state.  While I
can see the semantic correctness of this, since the priority group
is not yet initialized, is this what is intended?  Why show the
single priority group of an active-active storage system using the
multibus path grouping policy and the non-active priority group of
an active-passive storage system using the priority path grouping
policy both as "enabled", when the actual readiness of each differs
quite significantly?
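
For what it's worth, the kernel's own view of the group states can
be read straight out of the multipath target's status line rather
than inferred from the tool's output.  Something along these lines
(untested; the map name "mpath0" is only an example, and the exact
layout of the status string, including the per-group state
characters, depends on the kernel version):

#include <stdio.h>
#include <string.h>
#include <libdevmapper.h>

/* Print the raw status line(s) the multipath target reports for a
 * map, so the priority group states can be inspected directly. */
static int dump_map_status(const char *mapname)
{
        struct dm_task *dmt;
        void *next = NULL;
        uint64_t start, length;
        char *target_type = NULL, *params = NULL;
        int ret = 1;

        if (!(dmt = dm_task_create(DM_DEVICE_STATUS)))
                return 1;
        if (!dm_task_set_name(dmt, mapname))
                goto out;
        if (!dm_task_run(dmt))
                goto out;

        do {
                next = dm_get_next_target(dmt, next, &start, &length,
                                          &target_type, &params);
                if (target_type && !strcmp(target_type, "multipath"))
                        printf("%s: %s\n", mapname, params);
        } while (next);
        ret = 0;
out:
        dm_task_destroy(dmt);
        return ret;
}

int main(int argc, char **argv)
{
        return dump_map_status(argc > 1 ? argv[1] : "mpath0");
}

That at least makes it possible to see whether the target itself
distinguishes the two cases above, or whether the distinction is
lost in the reporting.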

Also, multipath will not set a path to a failed state until the
first block read/write I/O to that path fails.  This approach can
be misleading when monitoring path health via "multipath -l".  Why
not have multipath(8) proactively fail paths that are known to be
failing path testing?  Waiting instead for block I/O requests to
fail lessens the responsiveness of the product to path failures.
Furthermore, the failed paths of enabled but non-active path
priority groups may not have their path state updated for a very
long time, which seems very misleading.
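
As far as I can tell, the target message interface should already
allow userspace to do this as soon as the checker sees a path go
down.  Roughly (untested; the map name "mpath0" and the path
"8:32" are just placeholders):

#include <stdio.h>
#include <libdevmapper.h>

/* Send a "fail_path" message to a multipath map so the kernel
 * marks the path failed immediately, instead of waiting for a
 * block I/O on that path to fail. */
static int fail_path(const char *mapname, const char *path)
{
        struct dm_task *dmt;
        char msg[64];
        int ret = 1;

        snprintf(msg, sizeof(msg), "fail_path %s", path);

        if (!(dmt = dm_task_create(DM_DEVICE_TARGET_MSG)))
                return 1;
        if (!dm_task_set_name(dmt, mapname))
                goto out;
        if (!dm_task_set_sector(dmt, 0))
                goto out;
        if (!dm_task_set_message(dmt, msg))
                goto out;
        if (!dm_task_run(dmt))
                goto out;
        ret = 0;
out:
        dm_task_destroy(dmt);
        return ret;
}

int main(void)
{
        return fail_path("mpath0", "8:32");
}

If the daemon issued a fail_path message when its checker sees a
path go down (and reinstate_path when it comes back), the state
shown by "multipath -l" would track the checker rather than lag
behind the first failed block I/O.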
