[dm-devel] multipath - AAArgh! How do I turn "features=1 queue_if_no_path" off?

Hannes Reinecke hare at suse.de
Thu Oct 1 09:44:24 UTC 2009


On Thu, Oct 01, 2009 at 10:55:12AM +0200, John Hughes wrote:
> Hannes Reinecke wrote:
>> malahal at us.ibm.com wrote:
>>> John Hughes [john at Calva.COM] wrote:
>>>> I want to turn queue_if_no_path off and use
>>>>
>>>>                polling_interval        5
>>>>                no_path_retry           5
>>>>
>>>> because I've had problems with things hanging when a lun "vanishes" (I 
>>>> deleted it from my external raid box).
>>>>
>>>> But whatever I put in /etc/multipath.conf when I do a "multipath -l" or 
>>>> "multipath-ll" it shows:
>>>> 360024e80005b3add000001b64ab05c87 dm-28 DELL    ,MD3000
>>>> [size=68G][features=1 queue_if_no_path][hwhandler=1 rdac]
>>>> \_ round-robin 0 [prio=3][active]
>>>>  \_ 3:0:1:13 sdad 65:208 [active][ready]
>>>> \_ round-robin 0 [prio=0][enabled]
>>>>  \_ 4:0:0:13 sdas 66:192 [active][ghost]
>>>>
>> Which is entirely correct. The 'queue_if_no_path' flag _has_ to
>> be set here, as we do want to retry failed paths, if only for
>> a limited number of retries.
>>
>> The in-kernel dm-multipath module should handle the situation correctly
>> and switch off the queue_if_no_path flag (i.e. pass I/O errors upwards)
>> once the number of retries is exhausted.
> As far as I can tell it retries forever (even with polling_interval 5
> and no_path_retry 5). The mdadm raid10 built on top of the multipath
> devices hangs; even reading /proc/mdstat hangs.
>
> You're saying that without queue_if_no_path multipath basically won't
> work - mdadm will see I/O errors on multipath devices if a path fails?
>
If _all_ paths fail. Note the 'no_path' bit :-)
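
For the record: no_path_retry 5 means the map keeps queueing while all
paths are down and multipathd retries for five checker intervals (one
check per polling_interval); after that the flag is dropped and I/O
fails upwards. The persistent setting goes in /etc/multipath.conf, e.g.
in the defaults section:

        defaults {
                polling_interval        5
                no_path_retry           5
        }

(re-run multipath or restart multipathd afterwards so the maps pick it
up). For a map that is already stuck queueing you can drop the flag by
hand via the device-mapper message interface; a minimal sketch, using
the map name from your output above:

        # fail queued I/O immediately instead of holding it forever
        dmsetup message 360024e80005b3add000001b64ab05c87 0 fail_if_no_path

The same interface accepts 'queue_if_no_path' to turn queueing back on.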

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		      zSeries & Storage
hare at suse.de			      +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)



