
Re: [libvirt] [PATCH 2/2] qemu: Always use iohelper for dumping domain core



On 02/07/2012 05:09 AM, Jiri Denemark wrote:
> On Mon, Feb 06, 2012 at 15:43:59 -0700, Eric Blake wrote:
>> On 02/06/2012 07:51 AM, Jiri Denemark wrote:
>>> Qemu uses non-blocking I/O which doesn't play nice with regular file
>>> descriptors. We need to pass a pipe to qemu instead, which can easily be
>>> done using iohelper.
>>> ---
>>>  src/qemu/qemu_driver.c |    3 +--
>>>  1 files changed, 1 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
>>> index 01dbcd7..86d3058 100644
>>> --- a/src/qemu/qemu_driver.c
>>> +++ b/src/qemu/qemu_driver.c
>>> @@ -2960,8 +2960,7 @@ doCoreDump(struct qemud_driver *driver,
>>>                             NULL, NULL)) < 0)
>>>          goto cleanup;
>>>  
>>> -    if (bypass_cache &&
>>> -        (wrapperFd = virFileWrapperFdNew(&fd, true, path)) == NULL)
>>> +    if (!(wrapperFd = virFileWrapperFdNew(&fd, bypass_cache, path)))
>>>          goto cleanup;
>>
>> What you have is right (and amazingly short for the bug fix at hand!),
>> but you missed the core dump fd creation path, which needs the same
>> treatment.
> 
> This is as far as I can see the only core dump fd creation path so I'm a bit
> confused.

Ah, then I misread the @@ hunk lines.  I knew you had only touched one
of the two places that create an fd, but I pointed at the wrong hunk as
the one you missed.  My apologies for compounding my confusion onto the
list.

There are two situations where we save to file - one during core dump
(which, as you point out, can be live where the guest continues to run
after the dump, and therefore must be interruptible), and one during
'virsh save' or 'virsh managedsave' (which is currently non-live - we
pause the guest, do the migration, then kill qemu; resuming requires
starting a new qemu).

In both situations, my understanding is that qemu throttles based on
whether it gets EAGAIN from the fd it is writing to, and that it
doesn't regain responsiveness if it never sees EAGAIN.  For the dump
case, where we might have requested the live flag, we need that
responsiveness.  I'm just not sure whether the save case also needs to
present an EAGAIN interface to qemu.  It would be nice if we didn't
have to rely on that, so that we could change the throttling speeds;
after all, we had to revert the patch that provided unlimited
throttling when saving to disk because qemu didn't handle it well.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

