
Re: [virt-tools-list] Clone on started pool fails



I think I just solved the problem: my pool was defined with "/raid5/vms/" as its path, and after producing the information that follows (I kept it here just in case), I removed the trailing "/" (-> "/raid5/vms").
I haven't got any errors since, and if that was the only problem, I'm sure I shouldn't ever have any again :-)
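For anyone hitting the same thing, here is a sketch of how the pool path could be fixed from the command line (assuming the pool is named "raid5" as below; untested, adapt the paths to your setup — `virsh pool-edit raid5` is a simpler alternative if your virsh has it):

```shell
# Dump the current pool definition and strip the trailing slash from <path>.
virsh --connect qemu:///system pool-dumpxml raid5 > raid5.xml
sed -i 's|<path>/raid5/vms/</path>|<path>/raid5/vms</path>|' raid5.xml

# Deactivate and forget the old definition, then register the fixed one.
virsh --connect qemu:///system pool-destroy raid5
virsh --connect qemu:///system pool-undefine raid5
virsh --connect qemu:///system pool-define raid5.xml
virsh --connect qemu:///system pool-start raid5
virsh --connect qemu:///system pool-autostart raid5
```

Note that `pool-destroy` only deactivates the pool; it does not touch the files in /raid5/vms.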

Frederic.

So, the results were:


# virsh --connect qemu:///system pool-list --all
Nom                  État      Démarrage automatique
-----------------------------------------
default              actif      yes       
iso_images           actif      yes       
raid5                inactif    yes     


(The output is in French: "Nom" = Name, "Chemin" = Path, "État" = State, "actif" = active, "inactif" = inactive, "Démarrage automatique" = autostart. If you need anything else translated, please ask, but I think it is not so mysterious...)
and:

# virsh --connect qemu:///system vol-list default
Nom                  Chemin                                  
-----------------------------------------
(empty)

If raid5 is inactive:
# virsh --connect qemu:///system vol-list raid5
erreur :Impossible de lister les volumes actifs (=unable to list active volumes)
erreur :internal error storage pool is not active (= ...but that one is already in English! Strange :-) )


And if raid5 is active:

# virsh --connect qemu:///system vol-list raid5
Nom                  Chemin                                  
-----------------------------------------
a list of files, of the form:
file_name   /raid5/vms//file_name

It actually lists all the files I have in there, including a ".tar.bz".
Note the "//" between the pool path (/raid5/vms) and the file name.
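That "//" is simply what you get when a path that already ends in "/" has "/" plus a file name appended to it. The kernel does not care, but it seems to confuse virt-manager's path comparison. A quick shell illustration (nothing libvirt-specific, just string concatenation):

```shell
pool_path="/raid5/vms/"        # pool path as defined, with trailing slash
vol="test.qcow2"

echo "${pool_path}/${vol}"     # -> /raid5/vms//test.qcow2  (double slash)
echo "${pool_path%/}/${vol}"   # ${var%/} strips a trailing slash
                               # -> /raid5/vms/test.qcow2
```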

~/.virt-manager/virt-manager.log has:
(--- an attempt to add a disk to a guest, by selecting a newly created disk in the pool "raid5" while the pool is "started" ---)
[dim., 29 nov. 2009 18:14:17 virt-manager 9997] DEBUG (VirtualDisk:737) Path '/raid5/vms' is target for pool 'raid5'. Creating volume 'test.qcow2'.
[dim., 29 nov. 2009 18:14:17 virt-manager 9997] DEBUG (error:104) Validation Error: Paramètres de stockage invalides
(i.e. "wrong storage parameters")

Then I stopped the pool and tried again:
[dim., 29 nov. 2009 18:14:39 virt-manager 9997] DEBUG (addhardware:874) Starting background file allocate process
[dim., 29 nov. 2009 18:14:39 virt-manager 9997] DEBUG (addhardware:876) Allocation completed
[dim., 29 nov. 2009 18:14:39 virt-manager 9997] DEBUG (addhardware:838) Adding device:
    <disk type='file' device='disk'>
      <driver name='qemu'/>
      <source file='/raid5/vms/test.qcow2'/>
      <target dev='hda' bus='ide'/>
    </disk>
[dim., 29 nov. 2009 18:14:39 virt-manager 9997] DEBUG (domain:188) Redefining 'minimal-server' with XML diff:
(XML diff)

I hope this will help you!
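For reference, the only change needed in the pool definition quoted below is the <path> element of the <target> section, which should read (everything else unchanged):

```xml
<target>
  <path>/raid5/vms</path>
  ...
</target>
```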

----- "Cole Robinson" <crobinso redhat com> wrote:

> On 11/24/2009 08:37 AM, Frédéric Grelot wrote:
> > Hi, 
> > 
> > I just experienced an error while trying to clone a vm whose disk is
> on a pool (other than /var/lib...).
> > The error is triggered by VirtualDisk.py:731 and says:
> > ERROR    Could not determine original disk information: Size must be
> specified for non existent volume path '/raid5/vms/disk.qcow2'
> > 
> > where disk is the source vm disk, and of course exists... /raid5 is
> an lvm mount, on top of a raid5 disk array.
> > After reading the code in VirtualDisk.py, I suspected a problem with
> the pool, and after "stopping" it (in virt-manager/storage manager),
> cloning suddenly works without problem!
> > I tried with the cli version (virt-clone --prompt) as well as the gui
> (virt-manager/clone), and got the exact same error...
> > When I first tried, the disk was a plain 18 GB ".img" file; when I
> realized that, I converted it to a qcow2 one (7 GB).
> > 
> > By the way, I just realized that it may be linked to another problem
> (... 2s later : confirmed) : whenever I try to add a disk to a new vm,
> or when I want to create a vm, I always get an error : "Name
> 'any_disk.ext' already in use by another volume.", where any_disk.ext
> can be a .img raw disk, qcow2, etc...  ...and of course is not an
> already used disk! I just tested by "stopping" the pool (and the
> "browse local" option of virt-manager), and adding a disk works! (At
> last, I don't need to add disks by hand in /etc/libvirt/qemu/*.xml
> anymore...)
> > 
> > I'm not sure whether it is a bug or a misconfiguration. I have already
> heard that I'd be better off using only the /var/lib/...image/ directory
> as a pool, but using my /raid5 volume gives me more flexibility and
> visibility, so I'd prefer to keep using it...
> > 
> > So I'll take any help to solve this, and if you need more
> information, I'll gladly provide it!
> > 
> > Frederic.
> > 
> > Pool definition :
> > <pool type='dir'>
> >   <name>raid5</name>
> >   <uuid>xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx</uuid>
> >   <capacity>0</capacity>
> >   <allocation>0</allocation>
> >   <available>0</available>
> >   <source>
> >   </source>
> >   <target>
> >     <path>/raid5/vms/</path>
> >     <permissions>
> >       <mode>0700</mode>
> >       <owner>0</owner>
> >       <group>0</group>
> >     </permissions>
> >   </target>
> > </pool>
> > 
> > 
> > Versions :
> > # rpm -qa | grep virt
> > virt-top-1.0.4-1.fc12.1.x86_64
> > virt-manager-0.8.0-7.fc12.noarch
> > virt-mem-0.3.1-9.fc12.x86_64
> > virt-viewer-0.2.0-1.fc12.x86_64
> > libvirt-python-0.7.1-15.fc12.x86_64
> > libvirt-client-0.7.1-15.fc12.x86_64
> > python-virtinst-0.500.0-5.fc12.noarch
> > libvirt-0.7.1-15.fc12.x86_64
> > virt-v2v-0.2.0-1.fc12.noarch
> > 
> 
> 
> Hmm, certainly sounds like something is going wrong here. Can you
> also
> provide:
> 
> ~/.virt-manager/virt-manager.log
> virsh --connect qemu:///system pool-list --all
> virsh --connect qemu:///system vol-list <poolname> for each running
> pool
> 
> Thanks,
> Cole

