[Ovirt-devel] Re: [PATCH server] Taskomatic Refactoring and Qpidification.

Ian Main imain at redhat.com
Thu Dec 18 23:52:36 UTC 2008



Doh, I screwed that up, sorry... reposting.

	Ian


On Thu, 18 Dec 2008 14:42:08 -0800
Ian Main <imain at redhat.com> wrote:

> This patch reworks taskomatic quite a bit.  It mostly shifts
> taskomatic to using the qpid interface in place of ruby-libvirt.  It
> also fixes a few bugs I discovered along the way, and no doubt adds a
> few new ones.  The only other thing added is round-robin host
> selection for VMs.
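> 
> To make the round-robin idea concrete, here is a minimal sketch
> (illustrative only; the class and method names are made up and are
> not the actual taskomatic scheduler code):
> 
>   # Cycle through candidate hosts so that successive VM placements
>   # land on different nodes instead of always hitting the first match.
>   class RoundRobinScheduler
>     def initialize
>       @last_index = -1
>     end
> 
>     # hosts: array of candidate host objects able to run the VM.
>     # Returns the next host in rotation, or nil if there are none.
>     def next_host(hosts)
>       return nil if hosts.empty?
>       @last_index = (@last_index + 1) % hosts.length
>       hosts[@last_index]
>     end
>   end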
> 
> Wherever possible the hosts are queried directly using qpid rather than
> relying on states etc. from the database.
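> 
> The shape of such a query looks roughly like this (a hand-wavy sketch;
> the session variable, object class, and method names are placeholders,
> not the real libvirt-qpid/QMF console API):
> 
>   # Ask the node itself which domains it is running, instead of
>   # trusting a possibly stale state column in the database.
>   domains = qmf_session.objects(:class => 'domain')
>   domains.each do |dom|
>     puts "#{dom.name}: #{dom.state}" if dom.node == host.hostname
>   end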
> 
> This patch drops about 150 lines from the original taskomatic and moves
> most of the task implementation into a central class.  This was done
> both to provide access to the qpid session and to allow for locking/task
> ordering in future versions.
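> 
> Roughly the structure that motivates the central class (a sketch with
> made-up names, not the code from this patch):
> 
>   require 'thread'
> 
>   # One object owns the qpid session and funnels every task through a
>   # single place, which is also where per-resource locking and task
>   # ordering can be added later.
>   class TaskDispatcher
>     def initialize(session)
>       @session = session   # shared qpid/QMF session for all tasks
>       @lock = Mutex.new    # future: locking / task ordering
>     end
> 
>     def run(task)
>       @lock.synchronize do
>         case task.action
>         when "start_vm"   then task_start_vm(task)
>         when "migrate_vm" then task_migrate_vm(task)
>         else raise "unknown task action: #{task.action}"
>         end
>       end
>     end
> 
>     private
> 
>     # Stubs standing in for the real task implementations.
>     def task_start_vm(task)
>       puts "would start VM for task #{task.inspect}"
>     end
> 
>     def task_migrate_vm(task)
>       puts "would migrate VM for task #{task.inspect}"
>     end
>   end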
> 
> This requires the latest libvirt-qpid (0.2.7), as it fixes a number of
> bugs.  It's in the ovirt repository now.
> 
> Issues remaining:
> 
> - libvirt-qpid migrate is broken.  Since the migration takes place on
>   the node instead of being driven from the ovirt-appliance, the source
>   node doesn't have the ability to authenticate against the destination
>   node.  For this reason I'm still using ruby-libvirt migrate.  I talked
>   to Chris about this and we have a plan worked out. :)
> 
> - I wanted to get threading into this but that will have to wait.  I'll
>   post a thread about this to get the discussion started again.  I think
>   the refactoring allows this to be put in pretty easily.
> 
> Signed-off-by: Ian Main <imain at redhat.com>
> ---
>  src/task-omatic/task_host.rb |   33 ------
>  src/task-omatic/utils.rb     |  221 ------------------------------------------
>  2 files changed, 0 insertions(+), 254 deletions(-)
>  delete mode 100644 src/task-omatic/task_host.rb
>  delete mode 100644 src/task-omatic/utils.rb
> 
> diff --git a/src/task-omatic/task_host.rb b/src/task-omatic/task_host.rb
> deleted file mode 100644
> index 3d039fb..0000000
> --- a/src/task-omatic/task_host.rb
> +++ /dev/null
> @@ -1,33 +0,0 @@
> -# Copyright (C) 2008 Red Hat, Inc.
> -# Written by Chris Lalancette <clalance at redhat.com>
> -#
> -# This program is free software; you can redistribute it and/or modify
> -# it under the terms of the GNU General Public License as published by
> -# the Free Software Foundation; version 2 of the License.
> -#
> -# This program is distributed in the hope that it will be useful,
> -# but WITHOUT ANY WARRANTY; without even the implied warranty of
> -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> -# GNU General Public License for more details.
> -#
> -# You should have received a copy of the GNU General Public License
> -# along with this program; if not, write to the Free Software
> -# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
> -# MA  02110-1301, USA.  A copy of the GNU General Public License is
> -# also available at http://www.gnu.org/copyleft/gpl.html.
> -
> -require 'utils'
> -
> -# FIXME: a little ugly to be including all of task_vm here, but
> -# utils really isn't the right place for the migrate() method
> -require 'task_vm'
> -
> -def clear_vms_host(task)
> -  puts "clear_vms_host"
> -
> -  src_host = task.host
> -
> -  src_host.vms.each do |vm|
> -    migrate(vm)
> -  end
> -end
> diff --git a/src/task-omatic/utils.rb b/src/task-omatic/utils.rb
> deleted file mode 100644
> index e3005ed..0000000
> --- a/src/task-omatic/utils.rb
> +++ /dev/null
> @@ -1,221 +0,0 @@
> -require 'rexml/document'
> -include REXML
> -
> -def String.random_alphanumeric(size=16)
> -  s = ""
> -  size.times { s << (i = Kernel.rand(62); i += ((i < 10) ? 48 : ((i < 36) ? 55 : 61 ))).chr }
> -  s
> -end
> -
> -def all_storage_pools(conn)
> -  all_pools = conn.list_defined_storage_pools
> -  all_pools.concat(conn.list_storage_pools)
> -  return all_pools
> -end
> -
> -def get_libvirt_lvm_pool_from_volume(db_volume)
> -  phys_volume = StorageVolume.find(:first, :conditions =>
> -                                   [ "lvm_pool_id = ?", db_volume.storage_pool_id])
> -
> -  return LibvirtPool.factory(phys_volume.storage_pool)
> -end
> -
> -class LibvirtPool
> -  def initialize(type, name = nil)
> -    @remote_pool = nil
> -    @build_on_start = true
> -    @remote_pool_defined = false
> -    @remote_pool_started = false
> -
> -    if name == nil
> -      @name = type + "-" + String.random_alphanumeric
> -    else
> -      @name = name
> -    end
> -
> -    @xml = Document.new
> -    @xml.add_element("pool", {"type" => type})
> -
> -    @xml.root.add_element("name").add_text(@name)
> -
> -    @xml.root.add_element("source")
> -
> -    @xml.root.add_element("target")
> -    @xml.root.elements["target"].add_element("path")
> -  end
> -
> -  def connect(conn)
> -    all_storage_pools(conn).each do |remote_pool_name|
> -      tmppool = conn.lookup_storage_pool_by_name(remote_pool_name)
> -
> -      if self.xmlequal?(Document.new(tmppool.xml_desc).root)
> -        @remote_pool = tmppool
> -        break
> -      end
> -    end
> -
> -    if @remote_pool == nil
> -      @remote_pool = conn.define_storage_pool_xml(@xml.to_s)
> -      # we need this because we don't necessarily want to "build" LVM pools,
> -      # which might destroy existing data
> -      if @build_on_start
> -        @remote_pool.build
> -      end
> -      @remote_pool_defined = true
> -    end
> -
> -    if @remote_pool.info.state == Libvirt::StoragePool::INACTIVE
> -      # only try to start the pool if it is currently inactive; in all other
> -      # states, assume it is already running
> -      @remote_pool.create
> -      @remote_pool_started = true
> -    end
> -  end
> -
> -  def list_volumes
> -    return @remote_pool.list_volumes
> -  end
> -
> -  def lookup_vol_by_path(dev)
> -    return @remote_pool.lookup_volume_by_path(dev)
> -  end
> -
> -  def lookup_vol_by_name(name)
> -    return @remote_pool.lookup_volume_by_name(name)
> -  end
> -
> -  def create_vol(type, name, size, owner, group, mode)
> -    @vol_xml = Document.new
> -    @vol_xml.add_element("volume", {"type" => type})
> -    @vol_xml.root.add_element("name").add_text(name)
> -    @vol_xml.root.add_element("capacity", {"unit" => "K"}).add_text(size.to_s)
> -    @vol_xml.root.add_element("target")
> -    @vol_xml.root.elements["target"].add_element("permissions")
> -    @vol_xml.root.elements["target"].elements["permissions"].add_element("owner").add_text(owner)
> -    @vol_xml.root.elements["target"].elements["permissions"].add_element("group").add_text(group)
> -    @vol_xml.root.elements["target"].elements["permissions"].add_element("mode").add_text(mode)
> -  end
> -
> -  def shutdown
> -    if @remote_pool_started
> -      @remote_pool.destroy
> -    end
> -    if @remote_pool_defined
> -      @remote_pool.undefine
> -    end
> -  end
> -
> -  def xmlequal?(docroot)
> -    return false
> -  end
> -
> -  def self.factory(pool)
> -    if pool[:type] == "IscsiStoragePool"
> -      return IscsiLibvirtPool.new(pool.ip_addr, pool[:target])
> -    elsif pool[:type] == "NfsStoragePool"
> -      return NFSLibvirtPool.new(pool.ip_addr, pool.export_path)
> -    elsif pool[:type] == "LvmStoragePool"
> -      # OK, if this is LVM storage, there are two cases we need to care about:
> -      # 1) this is a LUN with LVM already on it.  In this case, all we need to
> -      #    do is to create a new LV (== libvirt volume), and be done with it
> -      # 2) this LUN is blank, so there is no LVM on it already.  In this
> -      #    case, we need to pvcreate, vgcreate first (== libvirt pool build),
> -      #    and *then* create the new LV (== libvirt volume) on top of that.
> -      #
> -      # We can tell the difference between an LVM Pool that exists and one
> -      # that needs to be created based on the value of the pool.state;
> -      # if it is PENDING_SETUP, we need to create it first
> -      phys_volume = StorageVolume.find(:first, :conditions =>
> -                                       [ "lvm_pool_id = ?", pool.id])
> -
> -      return LVMLibvirtPool.new(pool.vg_name, phys_volume.path,
> -                                pool.state == StoragePool::STATE_PENDING_SETUP)
> -    else
> -      raise "Unknown storage pool type " + pool[:type].to_s
> -    end
> -  end
> -end
> -
> -class IscsiLibvirtPool < LibvirtPool
> -  def initialize(ip_addr, target)
> -    super('iscsi')
> -
> -    @type = 'iscsi'
> -    @ipaddr = ip_addr
> -    @target = target
> -
> -    @xml.root.elements["source"].add_element("host", {"name" => @ipaddr})
> -    @xml.root.elements["source"].add_element("device", {"path" => @target})
> -
> -    @xml.root.elements["target"].elements["path"].text = "/dev/disk/by-id"
> -  end
> -
> -  def xmlequal?(docroot)
> -    return (docroot.attributes['type'] == @type and
> -            docroot.elements['source'].elements['host'].attributes['name'] == @ipaddr and
> -            docroot.elements['source'].elements['device'].attributes['path'] == @target)
> -  end
> -end
> -
> -class NFSLibvirtPool < LibvirtPool
> -  def initialize(ip_addr, export_path)
> -    super('netfs')
> -
> -    @type = 'netfs'
> -    @host = ip_addr
> -    @remote_path = export_path
> -    @name = String.random_alphanumeric
> -
> -    @xml.root.elements["source"].add_element("host", {"name" => @host})
> -    @xml.root.elements["source"].add_element("dir", {"path" => @remote_path})
> -    @xml.root.elements["source"].add_element("format", {"type" => "nfs"})
> -
> -    @xml.root.elements["target"].elements["path"].text = "/mnt/" + @name
> -  end
> -
> -  def create_vol(name, size, owner, group, mode)
> -    # FIXME: this can actually take some time to complete (since we aren't
> -    # doing sparse allocations at the moment).  During that time, whichever
> -    # libvirtd we chose to use is completely hung up.  The solution is 3-fold:
> -    # 1.  Allow sparse allocations in the WUI front-end
> -    # 2.  Make libvirtd multi-threaded
> -    # 3.  Make taskomatic multi-threaded
> -    super("netfs", name, size, owner, group, mode)
> -
> -    # FIXME: we have to add the format as raw here because of a bug in libvirt;
> -    # if you specify a volume with no format, it will crash libvirtd
> -    @vol_xml.root.elements["target"].add_element("format", {"type" => "raw"})
> -    @remote_pool.create_vol_xml(@vol_xml.to_s)
> -  end
> -
> -  def xmlequal?(docroot)
> -    return (docroot.attributes['type'] == @type and
> -            docroot.elements['source'].elements['host'].attributes['name'] == @host and
> -            docroot.elements['source'].elements['dir'].attributes['path'] == @remote_path)
> -  end
> -end
> -
> -class LVMLibvirtPool < LibvirtPool
> -  def initialize(vg_name, device, build_on_start)
> -    super('logical', vg_name)
> -
> -    @type = 'logical'
> -    @build_on_start = build_on_start
> -
> -    @xml.root.elements["source"].add_element("name").add_text(@name)
> -    @xml.root.elements["source"].add_element("device", {"path" => device})
> -
> -    @xml.root.elements["target"].elements["path"].text = "/dev/" + @name
> -  end
> -
> -  def create_vol(name, size, owner, group, mode)
> -    super("logical", name, size, owner, group, mode)
> -    @remote_pool.create_vol_xml(@vol_xml.to_s)
> -  end
> -
> -  def xmlequal?(docroot)
> -    return (docroot.attributes['type'] == @type and
> -            docroot.elements['name'].text == @name and
> -            docroot.elements['source'].elements['name'] == @name)
> -  end
> -end
> -- 
> 1.6.0.4
> 



