[Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster

Исаев Виталий Анатольевич isaev at fintech.ru
Tue Jan 21 11:46:45 UTC 2014



-----Original Message-----
From: Federico Simoncelli [mailto:fsimonce at redhat.com]
Sent: Friday, January 17, 2014 9:04 PM
To: libguestfs at redhat.com
Cc: Исаев Виталий Анатольевич
Subject: Re: [Libguestfs] Libguestfs can't launch with one of the disk images in the RHEV cluster



----- Original Message -----
> From: "Richard W.M. Jones" <rjones at redhat.com>
> To: "Исаев Виталий Анатольевич" <isaev at fintech.ru>
> Cc: libguestfs at redhat.com
> Sent: Tuesday, January 14, 2014 6:42:23 PM
> Subject: Re: [Libguestfs] Libguestfs can't launch with one of the disk
> images in the RHEV cluster
>
> On Tue, Jan 14, 2014 at 02:57:35PM +0000, Исаев Виталий Анатольевич wrote:
> This works because you're accessing the backing disk, not the top
> disk.  Since the backing disk (in this case) doesn't itself have a
> backing disk, qemu has no problem opening it.
>
> > Now I’m a little bit confused with the results of my research. I
> > found that VM with the only disk attached has at least two block
> > devices mapped to the hypervisor’s file system in fact – I mean
> > /dev/dm-19 (raw) and /dev/dm-30 (qcow2). The RHEV-M API (aka Python
> > oVirt SDK) provides no info about the first one, but the second
> > cannot be accessed from libguestfs.  I have an urgent need to work
> > with a chosen VM disk images through the libguestfs layer, but I
> > don’t know which images belong to every VM exactly. It seems like
> > I’m going the hard way :) Sincerely,
>
> Basically you need to find out which directory RHEV-M itself starts
> qemu in.  Try going onto the node and doing:
>
>   ps ax | grep qemu
>   ls -l /proc/PID/cwd
>
> substituting PID for some of the qemu process IDs.
>
> My guess would be some subdirectory of /rhev/data-center/mnt/blockSD/



Yes, the full path to the images is:



/rhev/data-center/mnt/blockSD/<sdUUID>/images/<imgUUID>/



sdUUID is the UUID of the storage domain (the VG name) where the images are stored, and imgUUID is the UUID of the image (as reported in the "Disks" tab of the webadmin).



The symlinks and the LVs are managed by vdsm (activated when the VM starts, deactivated as soon as the VM is stopped).



If you just want to access these images once and you're sure they are not in use, you can activate the relevant LVs and create the symlinks yourself on any machine (no need for oVirt/vdsm), e.g.



/tmp/ovirt-images/<imgUUID>/<volUUID1> -> /dev/dm-xx
/tmp/ovirt-images/<imgUUID>/<volUUID2> -> /dev/dm-xx
/tmp/ovirt-images/<imgUUID>/<volUUID3> -> /dev/dm-xx



This could even be automated eventually (from the qemu-img output).
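
A rough sketch of that automation (assuming root access on a host that can see the storage domain VG, and that the LVs are not in use; the symlinks here point at /dev/<sdUUID>/<volUUID>, which resolve to the same /dev/dm-* nodes):

  #!/usr/bin/env python
  # Sketch: activate the LVs of one image and recreate the
  # /tmp/ovirt-images/<imgUUID>/<volUUID> symlinks by hand.
  # Run as root; do NOT use while vdsm has the VM running.
  import os
  import subprocess

  def expose_image(sd_uuid, img_uuid, vol_uuids, target="/tmp/ovirt-images"):
      img_dir = os.path.join(target, img_uuid)
      if not os.path.isdir(img_dir):
          os.makedirs(img_dir)
      for vol in vol_uuids:
          # activate <sdUUID>/<volUUID> so that /dev/<sdUUID>/<volUUID> appears
          subprocess.check_call(["lvchange", "-ay", "%s/%s" % (sd_uuid, vol)])
          os.symlink("/dev/%s/%s" % (sd_uuid, vol),
                     os.path.join(img_dir, vol))
      return img_dir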

If instead you're trying to integrate with oVirt and you need a more reliable/automated solution, I invite you to start a thread on the oVirt mailing list.



The imgUUIDs can also be found with "lvs -o +tags", looking for the tags starting with "IU_".
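
For example, something along these lines (a sketch in Python 2, run as root on a host that sees the storage domain VG) maps each imgUUID to its volume LVs from those tags:

  #!/usr/bin/env python
  # Sketch: list the imgUUIDs of a block storage domain from the "IU_" LV tags.
  import subprocess
  import sys

  def image_uuids(sd_uuid):
      """Map imgUUID -> list of volUUIDs (LV names) for one storage domain VG."""
      out = subprocess.check_output(
          ["lvs", "--noheadings", "--separator", "|",
           "-o", "lv_name,lv_tags", sd_uuid])
      images = {}
      for line in out.splitlines():
          if not line.strip():
              continue
          lv_name, tags = [f.strip() for f in line.split("|", 1)]
          for tag in tags.split(","):
              if tag.startswith("IU_"):              # tag format: IU_<imgUUID>
                  images.setdefault(tag[3:], []).append(lv_name)
      return images

  if __name__ == "__main__":
      for img, vols in image_uuids(sys.argv[1]).items():
          print img, ", ".join(vols)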



--

Federico



Hello Federico, thank you for your reply and the invitation.



I'm working on an RHEV integrity enforcement project: I'm trying to develop a program that will monitor the integrity of the files stored on the virtual machines' disk images. There are several constraints in this RHEV environment:

1. We have a fixed set of VMs that will not change after the RHEV HA cluster is deployed;

2. The content of each VM's key system directories ('/bin', '/sbin', '/lib', '/boot', etc.) must not change after cluster deployment;

3. Checksums of the files in these system directories must be compared against the initial checksums (computed right after cluster deployment) every time a VM starts.



So what I am trying to do is:

1. Determine which disk belongs to each VM;    // Python oVirt SDK (see the first sketch below)

2. Write a deployment script that computes the initial checksums of the VMs' system files;    // libguestfs (see the second sketch below)

3. Store these checksums locally on every RHEV-H host (ovirt-node in oVirt terms?);

4. Write a hook script that compares the current checksums against the initial ones every time a VM starts.    // vdsm hooks, libguestfs

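For step 1, a rough sketch with the oVirt Python SDK 3.x (ovirtsdk); the URL and credentials are placeholders, and as far as I understand the disk id reported by the SDK is the imgUUID used under .../images/<imgUUID>/ on the hypervisor:

  #!/usr/bin/env python
  # Sketch: print VM name, disk alias and disk (image) UUID for every VM.
  from ovirtsdk.api import API

  api = API(url="https://rhevm.example.com/api",      # placeholder RHEV-M URL
            username="admin@internal",
            password="password",
            insecure=True)                             # skip CA verification
  try:
      for vm in api.vms.list():
          for disk in vm.disks.list():
              print vm.get_name(), disk.get_alias(), disk.get_id()
  finally:
      api.disconnect()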

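For steps 2-3, a sketch of the checksum pass with the libguestfs Python bindings (the disk path and output prefix are placeholders; the image is opened read-only):

  #!/usr/bin/env python
  # Sketch: open a VM disk read-only and dump SHA-256 sums of the key
  # system directories to local files (one file per directory).
  import guestfs

  SYSTEM_DIRS = ["/bin", "/sbin", "/lib", "/boot"]

  def dump_checksums(disk_path, out_prefix):
      g = guestfs.GuestFS()
      g.add_drive_opts(disk_path, readonly=1)     # never write to the image
      g.launch()
      roots = g.inspect_os()
      if not roots:
          raise RuntimeError("no operating system found in %s" % disk_path)
      root = roots[0]
      mps = g.inspect_get_mountpoints(root)       # [(mountpoint, device), ...]
      mps.sort(key=lambda x: len(x[0]))           # mount / before /boot, ...
      for mountpoint, device in mps:
          try:
              g.mount_ro(device, mountpoint)
          except RuntimeError:
              pass                                # ignore unmountable filesystems
      for d in SYSTEM_DIRS:
          if g.is_dir(d):
              # recursive sha256 of every file under d, written to a local file
              g.checksums_out("sha256", d,
                              "%s%s.sha256" % (out_prefix, d.replace("/", "_")))
      g.shutdown()
      g.close()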

So you can see that libguestfs is the base tool for this part of the system, but in some cases it cannot work with the disk images properly (I described them here: https://bugzilla.redhat.com/show_bug.cgi?id=1053684, https://www.redhat.com/archives/libguestfs/2014-January/msg00175.html). Perhaps I am using the library in the wrong way and this is causing all the issues…



Honestly speaking, I don't know whether the functionality I have just described should be integrated with oVirt or should rather be a standalone solution. I would be glad if you could comment on this problem.

Thank you.



Sincerely,
Vitaly Isaev
Software engineer
Information security department
Fintech JSC, Moscow, Russia


