
[Ovirt-devel] [PATCH]: Add NFS file creation/deletion to taskomatic



All,
     Attached is a patch to implement NFS file creation/deletion in the
taskomatic back-end.  Actually, this patch does quite a bit more than that:

1)  Implement NFS file creation/deletion
2)  Fix LVM creation - due to the way I was testing, a glaring bug slipped into
the implementation that caused it not to work at all on new, raw LUNs
3)  Flip the Pool and Volume states to "available" once the underlying storage
is actually ready

With this patch in place, along with a minor patch to libvirt (updated packages
coming soon), I was able to successfully create and delete LVM volumes, NFS
files, and then use those newly created LUNs/files in a guest.

Caveat: note that NFS file creation can take a *long* time.  Basically, we fully
allocate the file at creation time, which means we have to write out the entire
disk's worth of zeros.  During this time, neither libvirtd (on the remote node)
nor taskomatic can accept any more tasks.  This will eventually be fixed by a)
making libvirtd multithreaded, and b) making taskomatic fully threaded.
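
To make the caveat concrete, here is roughly the netfs volume XML that the new
create_vol() code in the patch below hands to libvirtd; the volume name and the
1 GB size are made-up example values, and this is just a standalone sketch of
the same REXML calls:

```ruby
require 'rexml/document'

# Sketch of the netfs volume XML built by create_vol(); "guest-disk-1"
# and the 1 GB (in KB) capacity are example values, not from the patch.
vol_xml = REXML::Document.new
vol_xml.add_element("volume", {"type" => "netfs"})
vol_xml.root.add_element("name").add_text("guest-disk-1")
# capacity is given in KB, matching the "unit" => "K" attribute
vol_xml.root.add_element("capacity", {"unit" => "K"}).add_text((1024 * 1024).to_s)
target = vol_xml.root.add_element("target")
perms = target.add_element("permissions")
perms.add_element("owner").add_text("0744")
perms.add_element("group").add_text("0744")
perms.add_element("mode").add_text("0744")
# explicit raw format, to work around the libvirtd crash on format-less volumes
target.add_element("format", {"type" => "raw"})

puts vol_xml.to_s
```

libvirt then allocates the full capacity up front, which is exactly the
zero-writing step that ties up libvirtd for so long.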

Finally: with this in place, a couple of problems come to light.  One is that we
probably want to give users the ability to choose a sparse file vs. a non-sparse
file (but that's a minor change in the frontend and in taskomatic).  The second
is much more complicated and important; namely, that we need users to be able to
assign /dev/sda to iSCSI LUN 3, /dev/sdb to NFS file 2, etc.  At the *very*
least we need users to be able to specify which device is the root device.  In
any case, this is future work.
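
On the sparse-file point: my guess is it's mostly a matter of emitting an
<allocation> element alongside <capacity> in that same volume XML.  Here's an
untested sketch; the "sparse" flag is a hypothetical frontend parameter, and it
assumes libvirt honors allocation=0 by creating the file sparse rather than
zero-filling it:

```ruby
require 'rexml/document'

# Hypothetical tweak for sparse files: capacity stays at the full size,
# but allocation is set to 0 so libvirt shouldn't pre-allocate the file.
# The "sparse" parameter is made up here; it is not part of this patch.
def add_allocation(vol_xml, size_kb, sparse)
  alloc = sparse ? 0 : size_kb
  vol_xml.root.add_element("allocation", {"unit" => "K"}).add_text(alloc.to_s)
  vol_xml
end

doc = REXML::Document.new
doc.add_element("volume", {"type" => "netfs"})
doc.root.add_element("capacity", {"unit" => "K"}).add_text("1048576")
add_allocation(doc, 1048576, true)
```

That would keep the taskomatic side trivial; the real work is plumbing the
choice through the WUI.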

Signed-off-by: Chris Lalancette <clalance@redhat.com>
diff --git a/src/app/controllers/storage_controller.rb b/src/app/controllers/storage_controller.rb
index 6d171c9..2235b6a 100644
--- a/src/app/controllers/storage_controller.rb
+++ b/src/app/controllers/storage_controller.rb
@@ -143,12 +143,19 @@ class StorageController < ApplicationController
       unless lvm_pool
         # FIXME: what should we do about VG/LV names?
        # for now auto-create VG name as ovirt_vg_#{@source_volume.id}
-        lvm_pool = LvmStoragePool.new(:vg_name => "ovirt_vg_#{@source_volume.id}",
-              :hardware_pool_id => @source_volume.storage_pool.hardware_pool_id)
+        new_params = { :vg_name => "ovirt_vg_#{@source_volume.id}",
+          :hardware_pool_id => @source_volume.storage_pool.hardware_pool_id}
+        lvm_pool = StoragePool.factory(StoragePool::LVM, new_params)
         lvm_pool.source_volumes << @source_volume
         lvm_pool.save!
       end
       new_volume_internal(lvm_pool, { :storage_pool_id => lvm_pool.id})
+
+      # now that we've created the new pool and volume, make sure to link that
+      # new LVM pool to the source volume
+      @source_volume.lvm_pool_id = lvm_pool.id
+      @source_volume.save!
+
       @storage_volume.lv_owner_perms='0744'
       @storage_volume.lv_group_perms='0744'
       @storage_volume.lv_mode_perms='0744'
diff --git a/src/app/models/lvm_storage_volume.rb b/src/app/models/lvm_storage_volume.rb
index 4aac265..8eb1f0e 100644
--- a/src/app/models/lvm_storage_volume.rb
+++ b/src/app/models/lvm_storage_volume.rb
@@ -25,4 +25,8 @@ class LvmStorageVolume < StorageVolume
   def volume_name
     "lv_name"
   end
+
+  def volume_create_params
+    return lv_name, size, lv_owner_perms, lv_group_perms, lv_mode_perms
+  end
 end
diff --git a/src/app/models/nfs_storage_volume.rb b/src/app/models/nfs_storage_volume.rb
index 2c18d67..61d5795 100644
--- a/src/app/models/nfs_storage_volume.rb
+++ b/src/app/models/nfs_storage_volume.rb
@@ -25,4 +25,8 @@ class NfsStorageVolume < StorageVolume
   def volume_name
     "filename"
   end
+
+  def volume_create_params
+    return filename, size, "0744", "0744", "0744"
+  end
 end
diff --git a/src/task-omatic/task_storage.rb b/src/task-omatic/task_storage.rb
index 7ef547b..d815a2a 100644
--- a/src/task-omatic/task_storage.rb
+++ b/src/task-omatic/task_storage.rb
@@ -20,20 +20,6 @@ require 'utils'
 
 require 'libvirt'
 
-def build_libvirt_vol_xml(name, size, owner, group, mode)
-  vol_xml = Document.new
-  vol_xml.add_element("volume", {"type" => "logical"})
-  vol_xml.root.add_element("name").add_text(name)
-  vol_xml.root.add_element("capacity", {"unit" => "K"}).add_text(size.to_s)
-  vol_xml.root.add_element("target")
-  vol_xml.root.elements["target"].add_element("permissions")
-  vol_xml.root.elements["target"].elements["permissions"].add_element("owner").add_text(owner)
-  vol_xml.root.elements["target"].elements["permissions"].add_element("group").add_text(group)
-  vol_xml.root.elements["target"].elements["permissions"].add_element("mode").add_text(mode)
-
-  return vol_xml
-end
-
 def add_volumes_to_db(db_pool, libvirt_pool, owner = nil, group = nil, mode = nil)
   # FIXME: this is currently broken if you do something like:
   # 1.  Add an iscsi pool with 3 volumes (lun-1, lun-2, lun-3)
@@ -71,6 +57,7 @@ def add_volumes_to_db(db_pool, libvirt_pool, owner = nil, group = nil, mode = ni
     storage_volume.lv_owner_perms = owner
     storage_volume.lv_group_perms = group
     storage_volume.lv_mode_perms = mode
+    storage_volume.state = StorageVolume::STATE_AVAILABLE
     storage_volume.save!
   end
 end
@@ -142,6 +129,9 @@ def refresh_pool(task)
       # OK, the pool is all set.  Add in all of the volumes
       add_volumes_to_db(phys_db_pool, phys_libvirt_pool)
 
+      phys_db_pool.state = StoragePool::STATE_AVAILABLE
+      phys_db_pool.save!
+
       # OK, now we've scanned the underlying hardware pool and added the
       # volumes.  Next we scan for pre-existing LVM volumes
       logical_xml = conn.discover_storage_pool_sources("logical")
@@ -211,45 +201,43 @@ end
 def create_volume(task)
   puts "create_volume"
 
-  lvm_db_volume = task.storage_volume
-  if lvm_db_volume == nil
+  db_volume = task.storage_volume
+  if db_volume == nil
     raise "Could not find storage volume to create"
   end
-  if lvm_db_volume[:type] != "LvmStorageVolume"
-    raise "The volume to create must be of type LvmStorageVolume, not type #{lvm_db_volume[:type]}"
-  end
 
-  lvm_db_pool = lvm_db_volume.storage_pool
-  if lvm_db_pool == nil
+  db_pool = db_volume.storage_pool
+  if db_pool == nil
     raise "Could not find storage pool"
   end
-  if lvm_db_pool[:type] != "LvmStoragePool"
-    raise "The pool for the volume must be of type LvmStoragePool, not type #{lvm_db_pool[:type]}"
-  end
 
-  conn = storage_find_suitable_host(lvm_db_pool.hardware_pool_id)
+  conn = storage_find_suitable_host(db_pool.hardware_pool_id)
 
   begin
-    phys_libvirt_pool = get_libvirt_pool_from_volume(lvm_db_volume)
-    phys_libvirt_pool.connect(conn)
+    if db_volume[:type] == "LvmStorageVolume"
+      phys_libvirt_pool = get_libvirt_lvm_pool_from_volume(db_volume)
+      phys_libvirt_pool.connect(conn)
+    end
 
     begin
-      lvm_libvirt_pool = LibvirtPool.factory(lvm_db_pool)
-      lvm_libvirt_pool.connect(conn)
+      libvirt_pool = LibvirtPool.factory(db_pool)
 
       begin
-        vol_xml = build_libvirt_vol_xml(lvm_db_volume.lv_name,
-                                        lvm_db_volume.size,
-                                        lvm_db_volume.lv_owner_perms,
-                                        lvm_db_volume.lv_group_perms,
-                                        lvm_db_volume.lv_mode_perms)
+        libvirt_pool.connect(conn)
 
-        lvm_libvirt_pool.create_vol_xml(vol_xml.to_s)
+        libvirt_pool.create_vol(*db_volume.volume_create_params)
+        db_volume.state = StorageVolume::STATE_AVAILABLE
+        db_volume.save!
+
+        db_pool.state = StoragePool::STATE_AVAILABLE
+        db_pool.save!
       ensure
-        lvm_libvirt_pool.shutdown
+        libvirt_pool.shutdown
       end
     ensure
-      phys_libvirt_pool.shutdown
+      if db_volume[:type] == "LvmStorageVolume"
+        phys_libvirt_pool.shutdown
+      end
     end
   ensure
     conn.close
@@ -259,34 +247,30 @@ end
 def delete_volume(task)
   puts "delete_volume"
 
-  lvm_db_volume = task.storage_volume
-  if lvm_db_volume == nil
-    raise "Could not find storage volume to delete"
-  end
-  if lvm_db_volume[:type] != "LvmStorageVolume"
-    raise "The volume to delete must be of type LvmStorageVolume, not type #{lvm_db_volume[:type]}"
+  db_volume = task.storage_volume
+  if db_volume == nil
+    raise "Could not find storage volume to delete"
   end
 
-  lvm_db_pool = lvm_db_volume.storage_pool
-  if lvm_db_pool == nil
+  db_pool = db_volume.storage_pool
+  if db_pool == nil
     raise "Could not find storage pool"
   end
-  if lvm_db_pool[:type] != "LvmStoragePool"
-    raise "The pool for the volume must be of type LvmStoragePool, not type #{lvm_db_pool[:type]}"
-  end
 
-  conn = storage_find_suitable_host(lvm_db_pool.hardware_pool_id)
+  conn = storage_find_suitable_host(db_pool.hardware_pool_id)
 
   begin
-    phys_libvirt_pool = get_libvirt_pool_from_volume(lvm_db_volume)
-    phys_libvirt_pool.connect(conn)
+    if db_volume[:type] == "LvmStorageVolume"
+      phys_libvirt_pool = get_libvirt_lvm_pool_from_volume(db_volume)
+      phys_libvirt_pool.connect(conn)
+    end
 
     begin
-      lvm_libvirt_pool = LibvirtPool.factory(lvm_db_pool)
-      lvm_libvirt_pool.connect(conn)
+      libvirt_pool = LibvirtPool.factory(db_pool)
+      libvirt_pool.connect(conn)
 
       begin
-        libvirt_volume = lvm_libvirt_pool.lookup_vol_by_name(lvm_db_volume.lv_name)
+        libvirt_volume = libvirt_pool.lookup_vol_by_name(db_volume.read_attribute(db_volume.volume_name))
         # FIXME: we actually probably want to zero out the whole volume here, so
         # we aren't potentially leaking data from one user to another.  There
         # are two problems, though:
@@ -296,16 +280,22 @@ def delete_volume(task)
         # off another thread to do it
         libvirt_volume.delete
 
-        # FIXME: we really should be using lvm_db_volume.destroy here, but when
-        # I tried it I ran into some "Stale db reference" errors with
-        # ActiveRecord.  sseago thinks this could be indicative of a deeper
-        # error, so we need to investigate it more
-        LvmStorageVolume.delete(lvm_db_volume.id)
+        # Note: we have to nil out the task_target because when we delete the
+        # volume object, that also deletes all dependent tasks (including this
+        # one), which leads to accessing stale tasks.  Orphan the task, then
+        # delete the object; we can clean up orphans later (or not, depending
+        # on the audit policy)
+        task.task_target = nil
+        task.save!
+
+        db_volume.destroy
       ensure
-        lvm_libvirt_pool.shutdown
+        libvirt_pool.shutdown
       end
     ensure
-      phys_libvirt_pool.shutdown
+      if db_volume[:type] == "LvmStorageVolume"
+        phys_libvirt_pool.shutdown
+      end
     end
   ensure
     conn.close
diff --git a/src/task-omatic/task_vm.rb b/src/task-omatic/task_vm.rb
index 38aa7ab..35d78f2 100644
--- a/src/task-omatic/task_vm.rb
+++ b/src/task-omatic/task_vm.rb
@@ -74,7 +74,7 @@ def connect_storage_pools(conn, storage_volumes)
     # we have to special case LVM pools.  In that case, we need to first
     # activate the underlying physical device, and then do the logical one
     if volume[:type] == "LvmStorageVolume"
-      phys_libvirt_pool = get_libvirt_pool_from_volume(volume)
+      phys_libvirt_pool = get_libvirt_lvm_pool_from_volume(volume)
       phys_libvirt_pool.connect(conn)
     end
 
@@ -398,24 +398,36 @@ def start_vm(task)
       end
     end
 
-    conn = Libvirt::open("qemu+tcp://" + host.hostname + "/system")
-
     volumes = []
     volumes += vm.storage_volumes
     volumes << image_volume if image_volume
-    storagedevs = connect_storage_pools(conn, volumes)
-
-    # FIXME: get rid of the hardcoded bridge
-    xml = create_vm_xml(vm.description, vm.uuid, vm.memory_allocated,
-                        vm.memory_used, vm.num_vcpus_allocated, vm.boot_device,
-                        vm.vnic_mac_addr, "ovirtbr0", storagedevs)
 
-    dom = conn.define_domain_xml(xml.to_s)
-    dom.create
+    conn = Libvirt::open("qemu+tcp://" + host.hostname + "/system")
 
-    setVmVncPort(vm, dom)
+    begin
+      storagedevs = connect_storage_pools(conn, volumes)
 
-    conn.close
+      dom = nil
+      begin
+        # FIXME: get rid of the hardcoded bridge
+        xml = create_vm_xml(vm.description, vm.uuid, vm.memory_allocated,
+                            vm.memory_used, vm.num_vcpus_allocated,
+                            vm.boot_device, vm.vnic_mac_addr, "ovirtbr0",
+                            storagedevs)
+        dom = conn.define_domain_xml(xml.to_s)
+        dom.create
+
+        setVmVncPort(vm, dom)
+      rescue => ex
+        if dom != nil
+          dom.undefine
+        end
+        teardown_storage_pools(conn)
+        raise ex
+      end
+    ensure
+      conn.close
+    end
   rescue => ex
     setVmState(vm, vm_orig_state)
     raise ex
diff --git a/src/task-omatic/taskomatic.rb b/src/task-omatic/taskomatic.rb
index bd7b5a4..ce37058 100755
--- a/src/task-omatic/taskomatic.rb
+++ b/src/task-omatic/taskomatic.rb
@@ -90,6 +90,8 @@ loop do
     get_credentials
 
     task.time_started = Time.now
+    task.state = Task::STATE_RUNNING
+    task.save!
 
     state = Task::STATE_FINISHED
     begin
@@ -123,6 +125,14 @@ loop do
     task.state = state
     task.time_ended = Time.now
     task.save!
+    puts "done"
+  end
+
+  # FIXME: here, we clean up "orphaned" tasks.  These are tasks that we had
+  # to orphan (set task_target to nil) because we were deleting the object they
+  # depended on.
+  Task.find(:all, :conditions => [ "task_target_id IS NULL and task_target_type IS NULL" ]).each do |task|
+    task.destroy
   end
   
   # we could destroy credentials, but another process might be using them (in
diff --git a/src/task-omatic/utils.rb b/src/task-omatic/utils.rb
index 26516ae..46a7e1b 100644
--- a/src/task-omatic/utils.rb
+++ b/src/task-omatic/utils.rb
@@ -13,7 +13,7 @@ def all_storage_pools(conn)
   return all_pools
 end
 
-def get_libvirt_pool_from_volume(db_volume)
+def get_libvirt_lvm_pool_from_volume(db_volume)
   phys_volume = StorageVolume.find(:first, :conditions =>
                                    [ "lvm_pool_id = ?", db_volume.storage_pool_id])
 
@@ -56,8 +56,8 @@ class LibvirtPool
 
     if @remote_pool == nil
       @remote_pool = conn.define_storage_pool_xml(@xml.to_s)
-      # we need this because we don't want to "build" LVM pools, which would
-      # destroy existing data
+      # we need this because we don't necessarily want to "build" LVM pools,
+      # which might destroy existing data
       if @build_on_start
         @remote_pool.build
       end
@@ -84,8 +84,16 @@ class LibvirtPool
     return @remote_pool.lookup_volume_by_name(name)
   end
 
-  def create_vol_xml(xml)
-    return @remote_pool.create_vol_xml(xml)
+  def create_vol(type, name, size, owner, group, mode)
+    @vol_xml = Document.new
+    @vol_xml.add_element("volume", {"type" => type})
+    @vol_xml.root.add_element("name").add_text(name)
+    @vol_xml.root.add_element("capacity", {"unit" => "K"}).add_text(size.to_s)
+    @vol_xml.root.add_element("target")
+    @vol_xml.root.elements["target"].add_element("permissions")
+    @vol_xml.root.elements["target"].elements["permissions"].add_element("owner").add_text(owner)
+    @vol_xml.root.elements["target"].elements["permissions"].add_element("group").add_text(group)
+    @vol_xml.root.elements["target"].elements["permissions"].add_element("mode").add_text(mode)
   end
 
   def shutdown
@@ -107,7 +115,21 @@ class LibvirtPool
     elsif pool[:type] == "NfsStoragePool"
       return NFSLibvirtPool.new(pool.ip_addr, pool.export_path)
     elsif pool[:type] == "LvmStoragePool"
-      return LVMLibvirtPool.new(pool.vg_name)
+      # OK, if this is LVM storage, there are two cases we need to care about:
+      # 1) this is a LUN with LVM already on it.  In this case, all we need to
+      #    do is to create a new LV (== libvirt volume), and be done with it
+      # 2) this LUN is blank, so there is no LVM on it already.  In this
+      #    case, we need to pvcreate, vgcreate first (== libvirt pool build),
+      #    and *then* create the new LV (== libvirt volume) on top of that.
+      #
+      # We can tell the difference between an LVM Pool that exists and one
+      # that needs to be created based on the value of the pool.state; 
+      # if it is PENDING_SETUP, we need to create it first
+      phys_volume = StorageVolume.find(:first, :conditions =>
+                                       [ "lvm_pool_id = ?", pool.id])
+
+      return LVMLibvirtPool.new(pool.vg_name, phys_volume.path,
+                                pool.state == StoragePool::STATE_PENDING_SETUP)
     else
       raise "Unknown storage pool type " + pool[:type].to_s
     end
@@ -151,6 +173,21 @@ class NFSLibvirtPool < LibvirtPool
     @xml.root.elements["target"].elements["path"].text = "/mnt/" + @name
   end
 
+  def create_vol(name, size, owner, group, mode)
+    # FIXME: this can actually take some time to complete (since we aren't
+    # doing sparse allocations at the moment).  During that time, whichever
+    # libvirtd we chose to use is completely hung up.  The solution is 3-fold:
+    # 1.  Allow sparse allocations in the WUI front-end
+    # 2.  Make libvirtd multi-threaded
+    # 3.  Make taskomatic multi-threaded
+    super("netfs", name, size, owner, group, mode)
+
+    # FIXME: we have to add the format as raw here because of a bug in libvirt;
+    # if you specify a volume with no format, it will crash libvirtd
+    @vol_xml.root.elements["target"].add_element("format", {"type" => "raw"})
+    @remote_pool.create_vol_xml(@vol_xml.to_s)
+  end
+
   def xmlequal?(docroot)
     return (docroot.attributes['type'] == @type and
             docroot.elements['source'].elements['host'].attributes['name'] == @host and
@@ -159,17 +196,23 @@ class NFSLibvirtPool < LibvirtPool
 end
 
 class LVMLibvirtPool < LibvirtPool
-  def initialize(vg_name)
+  def initialize(vg_name, device, build_on_start)
     super('logical', vg_name)
 
     @type = 'logical'
-    @build_on_start = false
+    @build_on_start = build_on_start
 
     @xml.root.elements["source"].add_element("name").add_text(@name)
+    @xml.root.elements["source"].add_element("device", {"path" => device})
 
     @xml.root.elements["target"].elements["path"].text = "/dev/" + @name
   end
 
+  def create_vol(name, size, owner, group, mode)
+    super("logical", name, size, owner, group, mode)
+    @remote_pool.create_vol_xml(@vol_xml.to_s)
+  end
+
   def xmlequal?(docroot)
     return (docroot.attributes['type'] == @type and
             docroot.elements['name'].text == @name and
