[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [Pulp-list] Asynchronous Task Dispatching

On 04/14/2011 04:07 PM, Mike McCune wrote:
On 04/14/2011 10:13 AM, Jason L Connor wrote:
On Thu, 2011-04-14 at 11:50 -0400, Jay Dobies wrote:
Why do we need to query the database for tasks? Can we keep the task
stuff in memory and snap out its state when it changes? Or are you
trying to solve the cross-process task question at the same time?

There are a few reasons:
1. I want to get the task persistence stuff working; we can look at
what optimizations we need once it does
2. I'm currently trying to keep the multi-process deployment option
open, and volatile memory storage is not conducive to that
3. I'm trying not to introduce any task state consistency bugs, at
least not initially
4. To be honest, dequeueing tasks, running tasks, timing out tasks,
and canceling tasks (i.e. what the dispatcher does) all
represent state changes, and most would have to hit the db anyway

I think that once the persistence stuff actually works I can revisit it,
looking for optimizations and features needed to support multi-process
access (if we decide to go that route).

In the meantime, I was thinking about a 30 second delay between task
queue checks, with an on-demand dispatcher wake-up whenever a new task
is enqueued. This should keep our async sub-system fairly responsive in
terms of repo syncs and the like while keeping db I/O down to something
manageable.
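For what it's worth, the "poll on a timer but wake immediately on enqueue" idea can be done with a condition variable. Below is a minimal sketch, not Pulp's actual code; the `TaskQueue` class and its method names are illustrative. Checking for pending tasks before waiting avoids the lost-wakeup race where a task is enqueued before the dispatcher reaches its wait:

```python
import threading
from collections import deque


class TaskQueue:
    """Sketch: dispatcher polls every poll_interval seconds, but an
    enqueue wakes it immediately via Condition.notify()."""

    def __init__(self, poll_interval=30.0):
        self._tasks = deque()
        self._cond = threading.Condition()
        self._poll_interval = poll_interval

    def enqueue(self, task):
        with self._cond:
            self._tasks.append(task)
            self._cond.notify()  # on-demand dispatcher wake-up

    def run_forever(self):
        while True:
            with self._cond:
                if not self._tasks:
                    # sleep up to poll_interval, but wake early on notify()
                    self._cond.wait(timeout=self._poll_interval)
                # grab pending tasks and release the lock before running them
                pending, self._tasks = list(self._tasks), deque()
            for task in pending:
                # stand-in for the real dispatch work (run/timeout/cancel)
                task()
```

With this shape a new sync would start almost immediately, and the 30 second timer only matters for housekeeping passes (timeouts, etc.) when nothing is being enqueued.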

so, does this mean when I initiate a new sync I may have to wait up to
30 seconds to start seeing things happen? If that is the case, I'd have
to give a thumbs down. UIs relying on Pulp to sync content waiting 30
seconds for any kind of update would seem pretty sluggish.


Reading this, and thinking of an interval other than twice a second (0.5 sec), my brain said:

15 seconds

I feel this is a reasonably quick/short amount of wait time for something to kick in.

