[libvirt] RFC: Introducing Memory Bandwidth Monitoring (MBM)

Huaqiang Wang huaqiang.wang at intel.com
Fri Apr 26 06:48:47 UTC 2019


RFC: Introducing Memory Bandwidth Monitoring (MBM)
==================================================

The kernel has removed the interfaces for getting memory bandwidth
utilization from the `perf` subsystem; see
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=c39a0e2c8850f08249383f2425dbd8dbe4baad69.

In their place, the kernel implements the `resctrl` subsystem, which
provides alternative interfaces for allocating and monitoring cache and
memory bandwidth resources.
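
For background, `resctrl` is exposed as a filesystem mounted at
/sys/fs/resctrl. A minimal sketch of inspecting it on an MBM-capable host
(the paths are the standard kernel `resctrl` interface)::

   # mount the resctrl filesystem (requires RDT-capable hardware)
   mount -t resctrl resctrl /sys/fs/resctrl

   # allocation and monitoring capabilities are enumerated under info/
   ls /sys/fs/resctrl/info
   L3  L3_MON  MB  last_cmd_status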

Cache monitoring based on the `resctrl` interface is already implemented;
this RFC is to raise discussion on implementing the memory bandwidth
monitoring functionality.

Very much like the 'cache monitoring' implementation, memory bandwidth
monitoring support can be considered in three aspects:

     * Identifying the host capability for MBM
     * Creating MBM groups
     * Reporting the result of the monitoring

Identify the host capability of MBM
-----------------------------------

The host MBM feature could be identified through the standard way of
enumerating host capabilities, the command `virsh capabilities`.

On a host with MBM, the output of the command above may contain the
following lines::

   <capabilities>
     <host>
     …
       <cache>
         <bank id='0' level='3' type='both' size='15' unit='MiB' cpus='0-5'>
           <control granularity='768' min='1536' unit='KiB' type='both' maxAllocs='4'/>
         </bank>
         <monitor level='3' reuseThreshold='270336' maxMonitors='176'>
           <feature name='llc_occupancy'/>
         </monitor>
       </cache>
       <memory_bandwidth>
         <node id='0' cpus='0-5'>
           <control granularity='10' min='10' maxAllocs='4'/>
         </node>
         <node id='1' cpus='6-11'>
           <control granularity='10' min='10' maxAllocs='4'/>
         </node>
+        <monitor maxMonitors='176'>
+         <feature name='mbm_total_bytes'/>
+         <feature name='mbm_local_bytes'/>
+        </monitor>
       </memory_bandwidth>
     </host>
   </capabilities>

The <monitor> element under <memory_bandwidth> indicates that a memory
bandwidth monitor exists, and the 'maxMonitors' attribute tells how many
monitors can be created; this limit is shared with the cache monitors.

The feature list inside the <monitor> element indicates the supported MBM
feature set.
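
Both values are read from the kernel's `resctrl` info files; a rough sketch
of the mapping, using the numbers from the example above::

   # maxMonitors corresponds to the number of hardware RMIDs
   cat /sys/fs/resctrl/info/L3_MON/num_rmids
   176

   # the <feature> list mirrors the supported monitoring events
   cat /sys/fs/resctrl/info/L3_MON/mon_features
   llc_occupancy
   mbm_total_bytes
   mbm_local_bytes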

Create the MBM group
--------------------

Multiple monitors can be created for a VM instance, and each monitor tracks
the accumulated memory bandwidth utilization, in bytes, for a specific set
of vCPUs.

To create memory bandwidth monitors for specific vCPUs, for example two
monitors, one tracking the memory bandwidth usage of vCPU 0 and the other
of vCPUs 0-4, request them in the following way by editing the domain XML
file::

           <cputune>
             <memorytune vcpus='0-4'>
               <node id='0' bandwidth='20'/>
               <node id='1' bandwidth='30'/>
   +           <monitor vcpus='0-4'/>
   +           <monitor vcpus='0'/>
             </memorytune>

           </cputune>
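
Under the hood, each <monitor> element would map to a `resctrl` monitoring
group. A hypothetical sketch of the resulting layout, assuming the monitors
live in the domain's allocation group and are named after their vCPU lists
(the naming is illustrative, following the existing cache-monitor
convention)::

   # one directory per monitor under the allocation group's mon_groups/
   ls /sys/fs/resctrl/<allocation-group>/mon_groups
   vcpus_0  vcpus_0-4

   # each monitor exposes one mon_L3_XX directory per memory controller,
   # holding the accumulated byte counters
   cat /sys/fs/resctrl/<allocation-group>/mon_groups/vcpus_0/mon_data/mon_L3_00/mbm_total_bytes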

Report the result of monitoring
-------------------------------

The statistics gathered by the memory bandwidth monitors can be retrieved
through the command `virsh domstats <domain>`, in the following arrangement.

virsh domstats <domain>::

   ...
   memory.bw.monitor.count=<total number of MBM monitors>
   memory.bw.monitor.<monitor index>.name=<resctrl group name for this monitor>
   memory.bw.monitor.<monitor index>.vcpus=<vcpu set list monitored by this monitor>
   memory.bw.monitor.<monitor index>.controller.count=<memory controller number>
   memory.bw.monitor.<monitor index>.controller.<index>.local.bytes=<bytes>
   memory.bw.monitor.<monitor index>.controller.<index>.total.bytes=<bytes>
   ...
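
As a hypothetical illustration for the two monitors configured above (the
group names and byte values are made up for the example)::

   virsh domstats <domain>
   ...
   memory.bw.monitor.count=2
   memory.bw.monitor.0.name=vcpus_0-4
   memory.bw.monitor.0.vcpus=0-4
   memory.bw.monitor.0.controller.count=2
   memory.bw.monitor.0.controller.0.local.bytes=10208067584
   memory.bw.monitor.0.controller.0.total.bytes=10216538112
   memory.bw.monitor.0.controller.1.local.bytes=8693735424
   memory.bw.monitor.0.controller.1.total.bytes=8701251584
   memory.bw.monitor.1.name=vcpus_0
   memory.bw.monitor.1.vcpus=0
   memory.bw.monitor.1.controller.count=2
   ...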



