> The multipath utility is relying on having at least one block
> read/write I/O be serviced through a multipath mapped device in
> order to show one of the path priority groups in an active state.
> While I can see the semantic correctness of this claim, since the
> priority group is not yet initialized, is this what is intended?

In fact, the multipath tool shares the same checker with the daemon.
> Why show both the single priority group of an active-active storage
> system using a multibus path grouping policy and the non-active
> priority group of an active-passive storage system using a priority
> path grouping policy as "enabled", when the actual readiness of each
> differs quite significantly?

We don't have many choices there: the device mapper declares only
three PG states: active, enabled, and disabled.
Maybe I'm overlooking something, but to my knowledge "multipath -l"
gets the path status from devinfo.c, which in turn calls
pp->checkfn() ... i.e. the same checker the daemon uses.

> Also, multipath will not set a path to a failed state until the
> first block read/write I/O to that path fails. This approach can be
> misleading when monitoring path health via "multipath -l". Why not
> have multipath(8) fail paths known to fail path testing? Waiting
> instead for block I/O requests to fail reduces the responsiveness of
> the product to path failures. Also, the failed paths of enabled but
> non-active path priority groups may not have their path state
> updated for a very long time -- and this seems very misleading.
--
dm-devel mailing list
dm-devel redhat com
https://www.redhat.com/mailman/listinfo/dm-devel