[libvirt] [PATCH 2/4] Add a <graphics> type for SPICE protocol

Daniel P. Berrange berrange at redhat.com
Mon Oct 19 16:18:40 UTC 2009


On Mon, Oct 19, 2009 at 06:10:08PM +0200, Dan Kenigsberg wrote:
> On Mon, Oct 19, 2009 at 04:52:10PM +0100, Daniel P. Berrange wrote:
> > On Mon, Oct 19, 2009 at 05:47:47PM +0200, Dan Kenigsberg wrote:
> > > Thanks for the patches (and sorry for the late response).
> > > 
> > > The patches are fine, though I am still missing a means to control which
> > > of the spice channels are to be encrypted. Also missing is a way to set
> > > key/cert files and a cipher suite per domain. (I asked the spice folks to
> > > avoid this problem by using reasonable defaults). More of a problem is
> > > the need to set/reset the spice "ticket" (one-time password), or at least
> > > disable it completely on the command line (with ,disable-ticketing).
> > 
> > For VNC, we leave key/cert file configuration to be done per-host
> > in /etc/libvirt/qemu.conf. I was anticipating the same for SPICE,
> > with a spice_tls_x509_cert_dir parameter, to match the existing
> > vnc_tls_x509_cert_dir parameter. We could easily add cipher suite
> > parameters to the config too if there's a compelling need to use a
> > non-default setting.
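
To be concrete, what I have in mind is just another per-host entry in
/etc/libvirt/qemu.conf next to the existing VNC one - only a sketch, since
spice_tls_x509_cert_dir is merely the name I'm proposing and the SPICE cert
path shown is made up:

  # existing per-host VNC TLS settings
  vnc_tls = 1
  vnc_tls_x509_cert_dir = "/etc/pki/libvirt-vnc"

  # proposed SPICE equivalent (parameter name and path are illustrative only)
  spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
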
> > 
> > IIRC, there's an open RFE for the 'ticket' stuff already. 
> 
> you are correct.
> 
> > 
> > > On Tue, Sep 29, 2009 at 04:43:51PM +0100, Daniel P. Berrange wrote:
> > > > This supports the -qxl argument in RHEL-5's fork of KVM
> > > > which has SPICE support. QXL is a graphics card, but
> > > > inexplicably doesn't use the standard -vga syntax for
> > > > generic configuration. Also -qxl is rather useless unless
> > > > you also supply -spice (coming in the next patch).
> > > 
> > > > +
> > > > +                if (virAsprintf(&optstr, "%u,ram=%u",
> > > > +                                def->videos[0]->heads,
> > > > +                                (def->videos[0]->vram /1024)) < 0)
> > > > +                    goto no_memory;
> > > 
> > > this hides spice's own default, and sends ",ram=0" if the XML lacks a
> > > vram attribute. I think it would be better to drop ",ram" completely
> > > if vram==0.
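
To spell out the suggestion above, it would amount to something like this
in the qemu argv builder - just a sketch, not code from any posted patch:

  int rc;

  /* Only emit ",ram=" when the XML actually provided a vram value, so
   * that spice/qxl keeps its own default otherwise. */
  if (def->videos[0]->vram)
      rc = virAsprintf(&optstr, "%u,ram=%u",
                       def->videos[0]->heads,
                       def->videos[0]->vram / 1024);
  else
      rc = virAsprintf(&optstr, "%u", def->videos[0]->heads);

  if (rc < 0)
      goto no_memory;
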
> > 
> > hmm, I missed something somewhere then, because our XML parser should
> > always set a default vram value if it is omitted, so you should never
> > get a ram=0 flag.
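
The parser behaviour I'm referring to looks roughly like this (an
illustrative sketch, not verbatim domain_conf.c - "video" and "dom" are
just stand-in names for the video definition and domain definition being
parsed):

  /* If the XML omitted vram or heads, fill in defaults during parsing so
   * that later code never sees a zero value. */
  if (video->vram == 0)
      video->vram = virDomainVideoDefaultRAM(dom, video->type);
  if (video->heads == 0)
      video->heads = 1;
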
> 
> What I am saying is that hiding spice's own default by adding
> 
> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index 27fcdf1..67b9e2a 100644
> --- a/src/conf/domain_conf.c
> +++ b/src/conf/domain_conf.c
> @@ -1798,6 +1798,9 @@ virDomainVideoDefaultRAM(virDomainDefPtr def,
>          /* Original Xen PVFB hardcoded to 4 MB */
>          return 4 * 1024;
>  
> +    case VIR_DOMAIN_VIDEO_TYPE_QXL:
> +        return 64 * 1024;
> +
>      default:
>          return 0;
>      }
> 
> seems suboptimal to me.

There is no good solution here - we wanted to be able to expose to apps
how much video RAM is being used, since this RAM is counted on top of
normal guest RAM and is thus important to know about when figuring out
whether you are overcommitting a host. Ultimately QEMU needs to be able
to tell us what its defaults are so we can avoid this problem.
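
For the overcommit calculation, the point is simply that a guest's
host-side footprint is roughly its RAM plus the vram of every video
device (both stored in KiB in the domain definition), e.g. a 2 GiB guest
with the 64 MiB QXL default needs about 2112 MiB. A hypothetical helper,
purely to show the arithmetic:

  /* Hypothetical helper - illustrative only, not part of these patches. */
  static unsigned long long
  guestFootprintKB(virDomainDefPtr def)
  {
      unsigned long long kb = def->memory;     /* guest RAM, in KiB */
      int i;

      for (i = 0 ; i < def->nvideos ; i++)
          kb += def->videos[i]->vram;          /* video RAM, in KiB */

      return kb;
  }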

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



