[Ovirt-devel] LVM Fun

Ian Main imain at redhat.com
Tue Dec 23 21:13:27 UTC 2008


On Tue, 23 Dec 2008 18:55:53 +0100
Chris Lalancette <clalance at redhat.com> wrote:

> Ian Main wrote:
> > OK, so in my refactoring of taskomatic, I've discovered just how crazy LVM
> > partitions are.
> > 
> > Basically there is no way that I can see (and hopefully I'm wrong) of
> > determining what underlying hardware pool or volume is associated with an LVM
> > pool or volume.  I think in the current taskomatic they attempt to figure
> > some of this out when doing a storage refresh, but it's actually not correct
> > and still relies on information from the database.
> > 
> > Here's the code in taskomatic:
> > 
> > logical_xml = conn.discover_storage_pool_sources("logical")
> > 
> > Document.new(logical_xml).elements.each('sources/source') do |source|
> >   vgname = source.elements["name"].text
> > 
> >   begin
> >     source.elements.each("device") do |device|
> >       byid_device = phys_libvirt_pool.lookup_vol_by_path(device.attributes["path"]).path
> >     end
> >   rescue
> >     # If matching any of the <device> sections in the LVM XML fails
> >     # against the storage pool, then it is likely that this is a storage
> >     # pool not associated with the one we connected above.  Go on
> >     # FIXME: it would be nicer to catch the right exception here, and
> >     # fail on other exceptions
> >     puts "One of the logical volumes in #{vgname} is not part of the pool of type #{phys_db_pool[:type]} that we are scanning; ignore the previous error!"
> >     next
> >   end
> > 
> > So first we get a list of pools of the "logical"/LVM type, then iterate
> > through them, calling lookup_vol_by_path with the physical hardware pool as
> > the pool to look in for the LVM volume.  However, that is not correct: the
> > Ruby API makes it look like you are searching only in that one pool, when in
> > fact lookup_vol_by_path uses the main connection pointer and performs a
> > global search across all storage pools.
> > 
> > Code from libvirt-ruby:
> > 
> > /*
> >  * Call +virStorageVolLookupByPath+[http://www.libvirt.org/html/libvirt-libvirt.html#virStorageVolLookupByPath]
> >  */
> > VALUE libvirt_pool_lookup_vol_by_path(VALUE p, VALUE path) {
> >     virStorageVolPtr vol;
> > 
> >     // FIXME: Why does this take a connection, not a pool ?
> >     vol = virStorageVolLookupByPath(conn(p), StringValueCStr(path));
> >     _E(vol == NULL, create_error(e_RetrieveError, "virStorageVolLookupByPath", "", conn(p)));
> > 
> >     return vol_new(vol, conn_attr(p));
> > }
> > 
> > As you can see in the FIXME comment and the usage of
> > virStorageVolLookupByPath(), it's not associated with a specific pool.
> > 
> > So this part of the code doesn't actually prevent us from picking up LVM
> > volumes associated with other pools that may be active on this host (most
> > likely for VMs to use).  Remember, we're just picking some random host to do
> > our scan with.
> > 
> > Then afterwards we put the entries in the database:
> > 
> > source.elements.each("device") do |device|
> >   byid_device = phys_libvirt_pool.lookup_vol_by_path(device.attributes["path"]).path
> >   physical_vol = StorageVolume.find(:first, :conditions => ["path = ?", byid_device])
> >   if physical_vol == nil
> >     # Hm. We didn't find the device in the storage volumes already.
> >     # Something went wrong internally, and we have to bail
> >     raise "Storage internal physical volume error"
> >   end
> > 
> >   # OK, put the right lvm_pool_id in place
> >   physical_vol.lvm_pool_id = lvm_db_pool.id
> >   physical_vol.save!
> > end
> > 
> > If I read this right, it means that if we don't already have the LVM volume
> > in the database, we're lost as to which physical volume it belongs to.  And
> > again we're using lookup_vol_by_path() to determine that it's in the pool.
> > 
> > Anyway, all this basically means we have no way to track LVM volumes through
> > libvirt, and if we intend to keep using them we're just going to have to rely
> > on the database to keep track of their setup.  We could still check
> > allocations etc., but we shouldn't be attempting to fill in the database with
> > LVM information from a scan/refresh of the storage volume.
> > 
> > Make sense?  Am I missing something?
> 
> (sorry, I didn't see this until after I posted my other message).  The short of
> it is that we *have* to scan LVM during refresh time, otherwise we will not be
> able to import iSCSI LUNs that already have LVM on them.  So we have to make it
> work one way or another, we can't just rip it out.  I'm not actually sure what
> you mean by the above; the whole point of the LVM scanning is to find LVM
> volumes out on the /dev devices so we can *add* them to the database as new LVM
> volumes.  Also, we aren't picking some random host; we are picking a host that
> is in this hardware pool.  I don't really follow your objection; what's the
> exact problem you are running into that causes you to question all of this?
> 
> Chris Lalancette

Hehe. :)  Well, we can still find all the LVM pools; it's just that we can't tell
what real storage pool backs them.  In the database we make a hierarchy where LVM
pools are created inside physical pools/volumes.  Through libvirt alone we have no
way of determining this parent/child relationship, which is what I'm trying to say
above.
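
To make that concrete, here is a minimal sketch (sample XML with made-up VG and
device names) of what discover_storage_pool_sources("logical") hands back: a
volume group name plus raw device paths, and nothing tying them to a parent
storage pool:

```ruby
require 'rexml/document'

# Sample of what discover_storage_pool_sources("logical") returns (made-up
# VG name and device paths; real output varies by host).  Note that no
# element names the physical/iSCSI pool the devices came from -- only raw
# device paths appear.
logical_xml = <<XML
<sources>
  <source>
    <name>vg_guests</name>
    <device path='/dev/sdb1'/>
    <device path='/dev/sdc1'/>
  </source>
</sources>
XML

doc = REXML::Document.new(logical_xml)
doc.elements.each('sources/source') do |source|
  vgname = source.elements['name'].text
  paths = source.get_elements('device').map { |d| d.attributes['path'] }
  puts "#{vgname}: #{paths.join(', ')}"
end
```

So from this XML alone all we can do is guess at the parent by matching device
paths, which is exactly where the global lookup bites us.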

This in turn opens up the possibility that another storage pool may be active
on the host we select to do the scanning, which would mean we'd pick up LVM volumes
from that other pool as well.  From what I can tell, right now the only way to fill
the LVM volume/pool information in the DB correctly is by keeping track of it 
during creation, which defeats the purpose of the scan/refresh.  Hence the removal.
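
If we did want a membership check that is genuinely scoped to one pool, it
would have to be built by hand; a minimal sketch, assuming the ruby-libvirt
pool API of the time (list_volumes and a per-pool lookup_vol_by_name -- both
assumptions on my part, not verified names):

```ruby
# Hedged sketch (not oVirt code): a genuinely pool-scoped membership test.
# Assumes pool#list_volumes returns volume names and pool#lookup_vol_by_name
# returns a volume object with a #path.  Unlike lookup_vol_by_path, this
# only ever consults the given pool, so a path belonging to some other
# active pool on the host can never match.
def path_in_pool?(pool, path)
  pool.list_volumes.any? do |volname|
    pool.lookup_vol_by_name(volname).path == path
  end
end
```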

If I missed something (which is possible), let me know.  While I've done some
playing with the API and have convinced myself of the above, it might be worth
setting up a test case with two pools with LVM volumes in each and seeing
whether it sorts them out correctly.
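
The core of such a test could be as small as this hypothetical helper, relying
on the behaviour in the C binding quoted above, where a failed lookup raises an
error: given two active pools each backing its own LVM volume group, ask pool_a
to look up a path that really lives in pool_b.

```ruby
# Hypothetical sketch of the suggested experiment (pool objects and paths
# are made up).  With the current connection-wide lookup the call succeeds
# even for a foreign path; a truly pool-scoped lookup would raise instead,
# and this helper would return false.
def lookup_is_global?(pool_a, path_in_pool_b)
  !pool_a.lookup_vol_by_path(path_in_pool_b).nil?
rescue
  false
end
```

If this returns true against a real two-pool setup, it confirms that the
pool receiver is cosmetic and the search really is global.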

	Ian



