[Date Prev][Date Next]   [Thread Prev][Thread Next]   [Thread Index] [Date Index] [Author Index]

Re: [libvirt] [resend v2 0/7] Support cache tune in libvirt

Eli Qiao
Sent with Sparrow

On Tuesday, 7 February 2017 at 3:03 AM, Marcelo Tosatti wrote:

On Mon, Feb 06, 2017 at 01:33:09PM -0200, Marcelo Tosatti wrote:
On Mon, Feb 06, 2017 at 10:23:35AM +0800, Eli Qiao wrote:
This series of patches supports the CAT feature, which is also
called cache tune in libvirt.

First, expose the cache information which could be tuned in the capabilities XML.
Then add new domain XML element support to add a cache bank which will apply
to this libvirt domain.

This series of patches adds a util file `resctrl.c/h`, an interface to talk with
the Linux kernel's sysfs.

There are still some TODOs such as expose new public interface to get free
cache information.
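For reference, the kernel interface such a `resctrl.c/h` helper would wrap looks roughly like this. A hedged, runnable sketch: it defaults to a scratch directory (real `/sys/fs/resctrl` needs CAT-capable hardware and a mounted resctrl filesystem), and the UUID-named group mirrors the per-VM directory discussed later in the thread; the mask and TID values are illustrative.

```shell
# Point RESCTRL at /sys/fs/resctrl on a real host; defaults to a
# scratch directory so the sketch runs anywhere.
RESCTRL="${RESCTRL:-$(mktemp -d)}"

# One directory per control group; libvirt names it after the VM UUID.
GROUP="$RESCTRL/b4c270b5-e0f9-4106-a446-69032872ed7d"
mkdir -p "$GROUP"

# Per-socket L3 capacity bitmasks: socket 0 keeps the full mask,
# socket 1 gets four contiguous ways (values illustrative).
echo 'L3:0=fffff;1=f' > "$GROUP/schemata"

# Tasks join the group by having their TIDs written to "tasks".
echo 8692 >> "$GROUP/tasks"
echo 8693 >> "$GROUP/tasks"

cat "$GROUP/schemata"
```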

Some discussion about this feature support can be found at:

Two comments:

1) Please perform appropriate filesystem locking when accessing
resctrlfs, as described at:



<cachetune id='10' host_id='1' type='l3' size='3072' unit='KiB'/>

[b4c270b5-e0f9-4106-a446-69032872ed7d]# cat tasks
[b4c270b5-e0f9-4106-a446-69032872ed7d]# pstree -p | grep qemu
| |-{qemu-kvm}(8692)
| `-{qemu-kvm}(8693)

Should add individual vcpus to the "tasks" file, not the main QEMU process PID.

The NFV usecase requires exclusive CAT allocation for the vcpu which
runs the sensitive workload.


<cachetune id='10' host_id='1' type='l3' size='3072' unit='KiB'/>

Adds all vcpus that are pinned to the socket which the cache bank with
host_id='1' belongs to.

<cachetune id='10' host_id='1' type='l3' size='3072' unit='KiB' vcpus='2,3'/>

Adds the PIDs of vcpus 2 and 3 to the resctrl directory created for this
guest.

Hmm.. in this case, we need to figure out the PIDs of vcpu 2 and vcpu 3 and add them to the resctrl directory.
Currently, I create a resctrl directory (called a resctrl domain) for a VM and just put all of its task IDs into it.
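Figuring out those per-vcpu PIDs is straightforward in principle: each QEMU vcpu is a thread of the main process, so its TID appears under /proc/&lt;pid&gt;/task/. A hedged sketch (the current shell's PID stands in for the QEMU PID, since libvirt already tracks the real vcpu thread IDs internally):

```shell
# $$ stands in for the QEMU main-process PID in this sketch.
QEMU_PID=$$

# Every thread (including vcpu threads) has a directory here.
for tid in /proc/$QEMU_PID/task/*; do
    echo "would write ${tid##*/} to the resctrl group's tasks file"
done
```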

This is my thought:

Let's say the VM has vcpus 0, 1, 2, 3, and you want to let 0 and 1 benefit from the cache on host_id=0, and 2 and 3 on host_id=1.

you will do:

    pin vcpus 0, 1 on the cpus of socket 0
    pin vcpus 2, 3 on the cpus of socket 1
This can be done with <vcpupin> in <cputune>.
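The pinning steps above can be expressed in the domain XML's <cputune> element; a sketch, with illustrative cpuset ranges (real values depend on the host topology):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- vcpus 0,1 on socket 0's pcpus; vcpus 2,3 on socket 1's (ranges illustrative) -->
  <vcpupin vcpu='0' cpuset='0-7'/>
  <vcpupin vcpu='1' cpuset='0-7'/>
  <vcpupin vcpu='2' cpuset='8-15'/>
  <vcpupin vcpu='3' cpuset='8-15'/>
</cputune>
```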

2) define cache tune like this:
<cachetune id='0' host_id='0' type='l3' size='3072' unit='KiB'/>
<cachetune id='1' host_id='1' type='l3' size='3072' unit='KiB'/>

in libvirt:
we create a resctrl directory named with the VM's uuid, set the schemata for socket 0 and socket 1, and put all qemu task ids into the tasks file; this will work fine.
Please note that in a resctrl directory, we can define the schemata for each socket id separately.
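Since each schemata line carries one entry per cache domain (socket), libvirt has to translate the XML's KiB size into a per-socket capacity bitmask. A hedged sketch of that conversion with illustrative cache geometry (real values would come from the capabilities XML and resctrl's cbm_mask):

```shell
# Illustrative geometry: 30 MiB L3 per socket, 20 ways (cbm_mask fffff).
CACHE_KIB=30720
TOTAL_WAYS=20
WANT_KIB=3072                                   # size='3072' unit='KiB'

WAY_KIB=$((CACHE_KIB / TOTAL_WAYS))             # KiB per cache way
WAYS=$(( (WANT_KIB + WAY_KIB - 1) / WAY_KIB ))  # round up to whole ways
MASK=$(( (1 << WAYS) - 1 ))                     # contiguous low ways

# Socket 0 keeps the full mask; socket 1 gets the 3072 KiB allocation.
printf 'L3:0=fffff;1=%x\n' "$MASK"
```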
3) CDP / non-CDP conversion.

In case the size determination has been performed with non-CDP,
to emulate such an allocation on a CDP host,
it would be good to allow both code and data allocations to share
the CBM space:
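Concretely, on a CDP host the schemata file splits L3 into separate code and data lines, so emulating a non-CDP allocation would mean writing the same (shared) bitmask to both. A hedged sketch; the mask values are illustrative:

```shell
# Non-CDP emulation on a CDP host: identical masks for code and data.
SCHEMATA='L3DATA:0=fffff;1=3
L3CODE:0=fffff;1=3'
printf '%s\n' "$SCHEMATA"
```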

IMO, I don't think it's good to have this.
In the libvirt capabilities XML, the application will get to know whether the host supports CDP or not.

<cachetune id='10' host_id='1' type='l3data' size='3072' unit='KiB'/>
<cachetune id='10' host_id='1' type='l3code' size='3072' unit='KiB'/>

Perhaps if using the same ID?
I am open to hearing what others say.

Other than that, testing looks good.
Thanks for the testing.
