
Re: [dm-devel] how to send SCSI failover commands from dm



On Thu 12/08/2004 at 23:56, Patrick Mansfield wrote:
> On Thu, Aug 12, 2004 at 11:02:51AM -0700, Mike Christie wrote:
> > around through remapper drivers like we do with bios, or should we avoid 
> > the mess and goto userspace (with userspace there would have to be a way 
> > to properly traverse mappers too though)?
> 
> Are there issues with issuing an SG_IO from user space when we are out of
> memory? 
> 
> That is we have:
> 
> sg_io -> blk_rq_map_user -> if aligned bio_map_user, else bio_copy_user
> 
> Both bio_*_user do a bio_alloc GFP_KERNEL, and bio_copy_user also has an
> alloc_page GFP_KERNEL.
> 
> So when trying to reclaim memory or such, if we are writing to a
> multipathed device, and it fails and requires a failover command to be
> sent, the user space SG_IO kernel allocation could sleep, and the system
> would hang.
> 
I naively thought kernel memory slabs were for emergency situations
like this. Is there a way for a privileged user process to hint to the
kernel that its allocations shouldn't fail? Lars mentioned a PF_MEMALLOC
thing, but I can't figure out how the notion fits ...

Anyway, no need to look that far for problems: we already have this hang
scenario with "multipathed-swap-outage-during-OOM". multipathd should
wake upon the outage and reconfigure the map, needing gobs of allocations.
Callout dependencies like multipath & scsi_id don't really help here :)

I have already coded some mitigating tricks:

1) copy the callouts into a daemon-private ramfs and execv from there
2) set sched_prio to 99
3) mlockall the daemon and its threads

I'll code a mempool too.
But that's about all I can think of; I need help with more ideas.

> Could we somehow avoid those?
> 
> We might still have issues with requiring a user process in memory for
> each path that might need to failover (just that there might be a good
> number of them with a large number of disks), but that does not seem bad.
> 
one process per LUN at most, AFAICS.

regards,
cvaroqui


