[dm-devel] [PATCH v2] dm mpath: maintain reference count for underlying devices

Mike Snitzer snitzer at redhat.com
Mon Oct 17 20:15:14 UTC 2011


On Tue, Sep 20 2011 at  1:29am -0400,
Jun'ichi Nomura <j-nomura at ce.jp.nec.com> wrote:

> Hi Mike,
> 
> On 09/19/11 23:34, Mike Snitzer wrote:
> > On Mon, Sep 19 2011 at  2:49am -0400,
> > Jun'ichi Nomura <j-nomura at ce.jp.nec.com> wrote:
> >> DM opens the underlying devices and that should be sufficient to
> >> keep the request_queue from being freed.
> > 
> > I welcome your review but please be more specific in the future.
> > 
> > Sure DM opens the underlying devices:
> > 
> > dm_get_device()
> >   -> open_dev()
> >      -> blkdev_get_by_dev()
> >         -> bdget()
> >         -> blkdev_get()
> > 
> > But DM only gets a reference on the associated block_device.
> 
> Point is, the above should be sufficient to keep the queue from being
> freed. Otherwise, every 'q->_something_' access could become an
> invalid pointer dereference once the queue is freed.
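
Just to make that concrete, the sort of 'q->_something_' use in question
looks roughly like this (a sketch loosely modelled on mpath's busy check,
not a verbatim copy of the dm-mpath code; 'path_is_busy' is a made-up
name):

    #include <linux/blkdev.h>

    /*
     * Sketch only: dereference the request_queue of a path's opened
     * block_device, the way request-based dm does when deciding where
     * to send I/O.
     */
    static int path_is_busy(struct block_device *bdev)
    {
            struct request_queue *q = bdev_get_queue(bdev);

            /*
             * q must still be valid here, even while the underlying
             * SCSI device is being torn down, or this dereference is a
             * use-after-free.
             */
            return blk_lld_busy(q);
    }
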
> 
> 
> Below are additional details replying to your comments:
> 
> > 
> > DM multipath makes use of the request_queue of each path's
> > block_device.  Having a reference on the block_device isn't the same
> > as having a reference on the request_queue.
> 
> Yes. But that does not necessarily mean we have to take an extra
> reference on the request_queue ourselves.
> 
> > 
> > Point is, blk_cleanup_queue() could easily be called by the SCSI
> > subsystem for a device that has been removed -- a request_queue
> > reference is taken by the underlying driver at blk_alloc_queue_node()
> > time.  So SCSI is free to drop the only reference in
> > blk_cleanup_queue(), which frees the request_queue (unless an upper
> > layer driver like mpath also takes a request_queue reference).
> 
> As for SCSI, it takes another reference and drops it in
> scsi_device_dev_release, so blk_cleanup_queue() is not dropping the
> last reference.
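
For reference, the lifetime Jun'ichi describes is the usual get/put
pairing on the queue.  A simplified timeline of that pattern
(illustrative only, not the actual SCSI call sites):

    q = blk_alloc_queue_node(GFP_KERNEL, node_id); /* allocation ref (count 1) */
    blk_get_queue(q);                              /* SCSI's own ref (count 2) */

    /* ... the device is later removed ... */
    blk_cleanup_queue(q);                          /* drops the allocation ref */

    /* ... final release of the scsi_device ... */
    blk_put_queue(q);                              /* scsi_device_dev_release: */
                                                   /* last ref, queue is freed */
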

FYI, this patch from Tejun should also fix the concern I had about the
request_queues of mpath's underlying devices:
https://lkml.org/lkml/2011/10/16/148
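
For the archives, the kind of change discussed in this thread (dm taking
an explicit reference on each path's queue, on top of the block_device
reference) would look roughly like the helpers below.  This is an
untested sketch: the function names are made up, and the real change
would sit in dm-table.c's open/close helpers.

    #include <linux/blkdev.h>

    /*
     * Untested sketch: pin the underlying request_queue for as long as
     * dm holds the device open, in addition to the bdev reference taken
     * by blkdev_get_by_dev().
     */
    static int dm_pin_path_queue(struct block_device *bdev)
    {
            /* blk_get_queue() returns non-zero if the queue is already
             * marked dead. */
            return blk_get_queue(bdev_get_queue(bdev)) ? -ENXIO : 0;
    }

    static void dm_unpin_path_queue(struct block_device *bdev)
    {
            blk_put_queue(bdev_get_queue(bdev));
    }

If Tejun's change goes in, this extra pinning likely isn't needed.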



