Re: [dm-devel] [RFC, PATCH] remove signal handling from dm-io.c, sync_io()

On Thursday 10 June 2004 9:53 pm, Dave Olien wrote:
> This patch removes the signal handling case from the sync_io() routine
> in dm-io.c.  This seems appropriate for several reasons.
> This first reason is that the current signal handling case could lead
> to corruption of the task's stack, as explained below:

Upon further review of dm-io.c::sync_io(), I have a couple more comments.

First, I'm not certain whether the signal handling needs to be there or not. 
Joe, perhaps you can comment on that. If we don't need it, then we can use 
Dave's suggestion and just get rid of the signal_pending() call. If we do 
need it, then instead of declaring a struct io on the stack in sync_io(), we 
need to allocate one (async_io() already does this using a mempool) and use 
dec_count() to deallocate it. This ought to prevent the stack corruption Dave 
described. I'll put together a patch in a bit to demonstrate how this would 
work.
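A rough sketch of that alternative, reusing the mempool machinery async_io() 
already has (names like _io_pool, dispatch_io() and dec_count() are from the 
2.6-era dm-io.c; treat this as illustration, not the actual patch):

```c
/* Sketch: allocate the sync struct io from the mempool instead of the
 * stack, so that if sync_io() ever returns early (e.g. on a signal),
 * the still-outstanding bios complete against memory that remains
 * valid.  Error handling and region setup are elided.
 */
struct io *io = mempool_alloc(_io_pool, GFP_NOIO);

atomic_set(&io->count, 1);	/* held until all regions complete */
io->error = 0;
io->sleeper = current;

dispatch_io(rw, num_regions, where, dp, io);

while (1) {
	set_current_state(TASK_UNINTERRUPTIBLE);
	if (!atomic_read(&io->count))
		break;
	io_schedule();
}
set_current_state(TASK_RUNNING);

/* dec_count(), called from the endio path, would then be the one to
 * mempool_free() the io once the last bio completes -- even if the
 * sleeping task has already gone on its way. */
```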

Second, when I suggested using wait_event(), I mentioned how sync_io() 
currently uses io_schedule(), whereas wait_event() uses regular schedule(). 
io_schedule() is basically just a wrapper around schedule() that does some 
extra accounting to indicate the process is waiting on I/O. Other than that, 
the two routines are functionally equivalent. So the question is: how much 
complexity do we want to add to sync_io() just to allow for this process 
accounting? Using wait_event() (or wait_event_interruptible(), if we still 
need the signal handling) would certainly be simpler. I'll forward this 
question to some of the I/O-performance guys here and get their opinion.
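For comparison, wait_event() would collapse the open-coded sleep loop above to 
a single line, at the cost of sleeping via plain schedule(). The waitqueue 
io->wq below is hypothetical (the current struct io tracks a sleeper task 
instead of a waitqueue), so this is only a sketch of the shape:

```c
/* Open-coded wait, as sync_io() does it today: io_schedule() is
 * schedule() plus the accounting that marks the task as waiting
 * on I/O. */
while (1) {
	set_current_state(TASK_UNINTERRUPTIBLE);
	if (!atomic_read(&io->count))
		break;
	io_schedule();
}
set_current_state(TASK_RUNNING);

/* wait_event() equivalent -- simpler, but the sleep is a plain
 * schedule(), so the task no longer shows up as waiting on I/O.
 * io->wq is an assumed wait_queue_head_t added to struct io. */
wait_event(io->wq, !atomic_read(&io->count));

/* Or, if the signal handling turns out to be needed: */
if (wait_event_interruptible(io->wq, !atomic_read(&io->count)))
	/* interrupted by a signal; io is heap-allocated, so the
	 * in-flight bios can still complete safely */ ;
```

Either way, the endio path would call wake_up() on the waitqueue from 
dec_count() instead of wake_up_process() on the recorded sleeper.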

Kevin Corry
kevcorry us ibm com

[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]