[libvirt] Cache and memory bandwidth graphs: native versus guest

Bill Gray bgray at redhat.com
Fri May 13 19:33:21 UTC 2011


See the attached spreadsheet with two graphs: (1) cache bandwidth, and
(2) a blowup of the sustained memory bandwidth region.

- X axis has a log scale

- The light blue line is an older system with 32K L1 and 6M L2 caches

- All other measurements were taken on perf34: 32K L1, 256K L2, and 30M 
L3 caches

- The majority of the variation in the L1 cache region comes from the two 
guest measurements taken without tasksetting the test to a VCPU: yellow 
and maroon lines. Perhaps this reflects the test bouncing between VCPUs 
in the guest; see the pinning sketch after this list.

- The sustained memory bandwidth for the guest with no pinning is only 
80% of native (maroon line), which motivates more convenient and 
comprehensive numactl-style placement for guests; a host-side vcpupin 
sketch also follows after this list.

- Virtualized bandwidth is otherwise nearly in line with native, which 
underscores the importance of the virtual CPUID reporting the actual 
native cache sizes to cache-size-aware guest applications, since guest 
apps could benefit from the full size of the native cache.  (The guest 
was started with "-cpu host", yet lscpu in the guest reported a 4M cache 
despite the actual 30M cache; a quick way to compare is shown after this 
list.)
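
A minimal sketch of pinning the test inside the guest, in case it helps 
reproduce the runs; the benchmark binary name is a placeholder and the 
vCPU number is just illustrative:

    # Inside the guest: keep the test on one vCPU so it does not
    # bounce between vCPUs (binary name is a placeholder)
    taskset -c 0 ./bandwidth_test

    # Verify the affinity of a running test
    taskset -p $(pidof bandwidth_test)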
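
A host-side pinning sketch, assuming a libvirt recent enough to support 
<cputune>; the domain name and host CPU number are made up for 
illustration, not taken from the perf34 runs:

    # Pin vCPU 0 of the guest "perfguest" to host CPU 4
    virsh vcpupin perfguest 0 4

    # Or persistently in the domain XML (virsh edit perfguest):
    <cputune>
      <vcpupin vcpu='0' cpuset='4'/>
    </cputune>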
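
And a quick check for the cache-size discrepancy: run the same commands 
on the host and in the guest and compare what each reports.

    lscpu | grep -i cache
    cat /sys/devices/system/cpu/cpu0/cache/index*/size
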
-------------- next part --------------
A non-text attachment was scrubbed...
Name: perf34_compare_cache_bandwidth.ods
Type: application/vnd.oasis.opendocument.spreadsheet
Size: 52880 bytes
Desc: not available
URL: <http://listman.redhat.com/archives/libvir-list/attachments/20110513/bb7a6b3a/attachment-0001.ods>
