[Ovirt-devel] Re: Thoughts about taskomatic redesign

Ian Main imain at redhat.com
Fri Jun 27 17:10:24 UTC 2008


On Fri, 27 Jun 2008 09:23:18 +0100
"Daniel P. Berrange" <berrange at redhat.com> wrote:

[snip]

> I think this is probably overkill. If there aren't many tasks being
> added then the database won't have much load and so polling won't be
> a huge issue. If there are lots of tasks then you'll be pulling new
> tasks off the queue pretty often and you'll need a scalable database
> already. David's idea of having the process which queues a task
> issue some form of notification could be added as an optimization
> later, but I'd just go with polling to start off with.
> 
> BTW, there is a built-in notification mechanism in postgresql if you
> don't mind using PG-specific syntax:
> 
> The taskomatic would do
> 
>      LISTEN newtasks
> 
> And the WUI would do
> 
>      NOTIFY newtasks
>
> whenever it queued a new task. If you have polling as the built-in
> default mode of operation, then you could just have this PG-specific
> bit as an optional optimization.
> 
> http://www.postgresql.org/docs/8.3/interactive/sql-notify.html

Wooo, spiffy.
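
Just to sketch how that could look from the taskomatic side -- illustrative
only, assuming the Ruby pg gem, and the tasks table with its 'state' column
here is made up rather than the real schema:

    # Illustrative sketch only: polling as the default mode, with the
    # PG-specific LISTEN/NOTIFY bit layered on as an optimization.
    # Assumes the 'pg' gem and a hypothetical tasks(id, state) table.
    require 'pg'

    conn = PG.connect(:dbname => 'ovirt')
    conn.exec('LISTEN newtasks')

    loop do
      # Default mode: poll for anything sitting in the queue.
      res = conn.exec("SELECT id FROM tasks WHERE state = 'queued'")
      res.each { |row| puts "would dispatch task #{row['id']}" }

      # Optimization: sleep until the WUI does NOTIFY newtasks, but
      # fall back to a plain poll if nothing shows up within 5 seconds.
      conn.wait_for_notify(5)
    end

The WUI side would just issue NOTIFY newtasks after its INSERT.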
 
> > I think a single ruby process could be used to order the tasks and
> > place them in per-thread/process queues.  If using a DB I think we
> > could either migrate the entries to a new 'in-progress' table, or 
> > update the row with the ID of the process/thread and possibly the 
> > sequence number to be used in implementing the queue.  
> 
> This is overkill for the problem - a simple status field in the
> tasks table can take care of tracking what's in progress. Completed
> tasks can be purged periodically to stop it growing without bound, or
> sent to an archive_tasks table if we really need to keep the data
> around long term.

I think you are right.  We should insert dependency info in the table as well.
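
Something like this maybe, just to make it concrete -- a sketch only, the
column names (state, depends_on_task_id, etc.) are made up for illustration
and not a proposal for the actual schema:

    # Sketch of a possible tasks table, Rails-migration style.
    # Every column name here is an illustrative assumption.
    class CreateTasks < ActiveRecord::Migration
      def self.up
        create_table :tasks do |t|
          t.string  :action                       # e.g. 'start_vm', 'migrate_vm'
          t.integer :vm_id                        # object the task operates on
          t.string  :state, :default => 'queued'  # queued/running/finished/failed
          t.integer :depends_on_task_id           # dependency info, NULL if none
          t.timestamps
        end
      end

      def self.down
        drop_table :tasks
      end
    end

A finished/failed state plus a periodic purge (or a move to an archive_tasks
table) would cover the cleanup Daniel mentions.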

> We want to have 'n' x (number of logical CPU cores) worker processes
> for a value of 'n' yet to be determined - if they're mostly waiting
> on I/O, then 'n' can be pretty large. We'll have to just try it out and
> see what a good number is.

I'm pretty sure it's all I/O-bound work.  I think the queues will have to be processed serially, so most of the time you're waiting around for the node to finish its business; basing the worker count on CPU cores probably isn't all that relevant.  Expanding the pool dynamically based on queue size would probably work better.
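
Roughly what I'm picturing, with made-up thresholds and plain Ruby threads
just for illustration:

    # Sketch only: grow the worker pool with queue depth instead of
    # sizing it off the CPU core count.  MAX_WORKERS and the growth
    # rule are made-up numbers, not a tested policy.
    require 'thread'

    MAX_WORKERS = 50
    queue   = Queue.new
    workers = []

    spawn_worker = lambda do
      workers << Thread.new do
        loop do
          task = queue.pop          # blocks until work is available
          task.call                 # mostly waiting on the node (I/O bound)
        end
      end
    end

    spawn_worker.call               # start with a single worker

    # Called whenever new tasks are queued: add a worker if the backlog
    # outgrows the pool, up to some sane cap.
    maybe_grow = lambda do
      spawn_worker.call if queue.length > workers.length &&
                           workers.length < MAX_WORKERS
    end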

   Ian


 
> Regards,
> Daniel
> -- 
> |: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
> |: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
> |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
> |: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



