[dm-devel] Re: [PATCH 1/7] scsi_dh: Implement common device table handling
Hannes Reinecke
hare at suse.de
Fri May 23 11:14:56 UTC 2008
Hi Chandra,
On Thu, May 22, 2008 at 07:08:13PM -0700, Chandra Seetharaman wrote:
> Hi Hannes,
>
> One of my comments concerned creating a cache of vendor/product/scsi_dh
> triples for already-found device types. I will code that and send it.
>
> Looks good except for the minor comments noted below.
>
> On Tue, 2008-05-20 at 16:05 +0200, Hannes Reinecke wrote:
>
> <snip>
>
> > + spin_lock(&list_lock);
> > + list_for_each_entry(tmp, &scsi_dh_list, list) {
> > + for(i = 0; tmp->devlist[i].vendor; i++) {
> > + if (!strncmp(sdev->vendor, tmp->devlist[i].vendor,
> > + strlen(tmp->devlist[i].vendor)) &&
> > + !strncmp(sdev->model, tmp->devlist[i].model,
> > + strlen(tmp->devlist[i].model))) {
> > + devinfo = tmp;
> > + break;
> > + }
> > + }
> > + if (devinfo)
> > + break;
> > + }
> > + spin_unlock(&list_lock);
>
> The above block of code can be factored into a function, as it is used
> in more than one place.
>
Ok, will do.
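The duplicated lookup could be hoisted into one helper along these lines. This is a userspace sketch only: the struct names, the `dh_lookup` function, and the demo table entries are illustrative stand-ins, not the actual kernel API (in the kernel the walk would be over `scsi_dh_list` under `list_lock`).

```c
#include <stddef.h>
#include <string.h>

/* Illustrative stand-ins for the kernel structures. */
struct dh_devlist_entry {
	const char *vendor;	/* NULL vendor terminates the table */
	const char *model;
};

struct scsi_dh_sketch {
	const char *name;
	const struct dh_devlist_entry *devlist;
};

/* Prefix match, as in the patch: a table entry only needs to match the
 * leading characters of the device's vendor/model strings. */
static int dh_entry_matches(const struct dh_devlist_entry *e,
			    const char *vendor, const char *model)
{
	return !strncmp(vendor, e->vendor, strlen(e->vendor)) &&
	       !strncmp(model, e->model, strlen(e->model));
}

/* The duplicated block from the patch, hoisted into one helper that
 * returns the first handler whose device table matches. */
const struct scsi_dh_sketch *
dh_lookup(const struct scsi_dh_sketch *handlers, size_t n,
	  const char *vendor, const char *model)
{
	size_t h, i;

	for (h = 0; h < n; h++)
		for (i = 0; handlers[h].devlist[i].vendor; i++)
			if (dh_entry_matches(&handlers[h].devlist[i],
					     vendor, model))
				return &handlers[h];
	return NULL;
}

/* Tiny demo table (hypothetical entries, for illustration only). */
static const struct dh_devlist_entry demo_list[] = {
	{ "DGC", "RAID" },
	{ NULL, NULL },
};
static const struct scsi_dh_sketch demo_handlers[] = {
	{ "demo_dh", demo_list },
};
```

With a helper like this, both the notifier path and any other caller share one matching routine instead of open-coding the nested loop.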
> > +
> > + if (!devinfo)
> > + goto out;
> > +
> > + if (action == BUS_NOTIFY_ADD_DEVICE) {
> > + err = scsi_dh_handler_attach(sdev, devinfo);
> > + } else if (action == BUS_NOTIFY_DEL_DEVICE) {
> > + if (sdev->scsi_dh_data == NULL)
> > + goto out;
>
> The check above is already done in scsi_dh_handler_detach. Do you think
> it is needed here?
>
No, not really. Just forgot to take it out.
[ .. ]
> > +{
> > + struct scsi_device_handler *scsi_dh = data;
> > + struct scsi_device *sdev;
> > +
> > + if (!scsi_is_sdev_device(dev))
> > + return 0;
> > +
> > + sdev = to_scsi_device(dev);
> > +
> > + if (!sdev->scsi_dh_data || sdev->scsi_dh_data->scsi_dh != scsi_dh)
> > + return 0;
> > +
> > + scsi_dh_handler_detach(sdev);
> >
> > - scsi_dh->nb.notifier_call(&scsi_dh->nb, BUS_NOTIFY_ADD_DEVICE, dev);
> > return 0;
> > }
> notifier_add above does get_device() and put_device(). Don't we need
> them in notifier_remove?
Oops. Yes, we do.
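The point being agreed on is that the remove path must pin the device the same way the add path does. A minimal userspace sketch of that symmetry, assuming a kref-style counter (the `_sketch` names are illustrative, not the kernel API):

```c
/* Stand-in for a struct device with a reference count; 'attached'
 * stands in for sdev->scsi_dh_data being non-NULL. */
struct dev_sketch {
	int refcount;
	int attached;
};

static void get_device_sketch(struct dev_sketch *d) { d->refcount++; }
static void put_device_sketch(struct dev_sketch *d) { d->refcount--; }

/* Notifier-style add: pin the device while attaching the handler. */
void notifier_add_sketch(struct dev_sketch *d)
{
	get_device_sketch(d);
	d->attached = 1;
	put_device_sketch(d);
}

/* Remove must mirror the add path: pin, detach, unpin.  Omitting the
 * get/put pair here is the imbalance the comment points out. */
void notifier_remove_sketch(struct dev_sketch *d)
{
	get_device_sketch(d);
	d->attached = 0;
	put_device_sketch(d);
}
```

Both paths leave the reference count where they found it, which is the invariant the missing get_device()/put_device() pair would have broken.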
[ .. ]
> > - strlen(clariion_dev_list[i].model))) {
> > - found = 1;
> > - break;
> > - }
> > - }
> > - if (!found)
> > - goto out;
> > -
> > - scsi_dh_data = kzalloc(sizeof(struct scsi_device_handler *)
> > - + sizeof(*h) , GFP_KERNEL);
> > - if (!scsi_dh_data) {
> > - sdev_printk(KERN_ERR, sdev, "Attach failed %s.\n",
> > - CLARIION_NAME);
> > - goto out;
> > - }
> > + if (sdev->scsi_dh_data)
> > + return -EBUSY;
>
> This check has already been done at the scsi_dh level. Do we still need
> it here?
>
No, we don't.
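The reason the per-handler -EBUSY check can go is that the generic scsi_dh layer already refuses to attach twice, so handler attach callbacks may assume the slot is empty. A small sketch of that layering, with illustrative names and a locally defined errno value (not the kernel's headers):

```c
#include <stddef.h>

#define EBUSY_SKETCH 16	/* local stand-in for the kernel's EBUSY */

/* Stand-in for struct scsi_device: non-NULL once a handler is attached. */
struct sdev_sketch {
	void *scsi_dh_data;
};

/* A handler's attach callback: no "already attached?" re-check needed,
 * because the generic layer below has already performed it. */
int demo_attach(struct sdev_sketch *sdev)
{
	static int cookie;
	sdev->scsi_dh_data = &cookie;
	return 0;
}

/* Generic-layer gate: the single place the "already attached?" test
 * lives, so every handler is covered by it. */
int scsi_dh_attach_sketch(struct sdev_sketch *sdev,
			  int (*attach)(struct sdev_sketch *))
{
	if (sdev->scsi_dh_data)
		return -EBUSY_SKETCH;
	return attach(sdev);
}
```

Keeping the test in one place avoids the drift that duplicated checks in clariion, hp_sw, and the other handlers would invite.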
> >
> > - sdev_printk(KERN_NOTICE, sdev, "Dettached %s.\n",
> > - CLARIION_NAME);
> > +static void clariion_bus_detach(struct scsi_device *sdev)
> > +{
> > + struct scsi_dh_data *scsi_dh_data;
> > + unsigned long flags;
> >
> > - kfree(scsi_dh_data);
> > - module_put(THIS_MODULE);
> > - }
> > + if (sdev->scsi_dh_data == NULL ||
> > + sdev->scsi_dh_data->scsi_dh != &clariion_dh)
> > + return;
>
> The first check is already done at the scsi_dh level.
>
Fixed.
[ .. ]
> > +static void hp_sw_bus_detach( struct scsi_device *sdev )
> > +{
> > + struct scsi_dh_data *scsi_dh_data;
> > + unsigned long flags;
> >
> > - spin_lock_irqsave(sdev->request_queue->queue_lock, flags);
> > - scsi_dh_data = sdev->scsi_dh_data;
> > - sdev->scsi_dh_data = NULL;
> > - spin_unlock_irqrestore(sdev->request_queue->queue_lock, flags);
> > - module_put(THIS_MODULE);
> > + if (sdev->scsi_dh_data == NULL ||
> > + sdev->scsi_dh_data->scsi_dh != &hp_sw_dh)
> > + return;
> The first check is already done at the scsi_dh level.
>
Fixed.
Seeing that this will become quite a lengthy process, I've put my scsi_dh
patchset up at
git://git.kernel.org/pub/scm/linux/kernel/git/hare/scsi-ml.git
branch: scsi_dh
We should use this as a starting point and continue the discussion
(and the patches) offline.
Once we've agreed on a state we can send the patchset to the
mailing list; otherwise we'll just clog up bandwidth unnecessarily.
Cheers,
Hannes
--
Dr. Hannes Reinecke zSeries & Storage
hare at suse.de +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)