[libvirt] [PATCH 0/6 v3] Add blkio cgroup support

Dominik Klein dk at in-telegence.net
Fri Feb 18 13:42:51 UTC 2011


Hi

back with some testing results.

>> how about starting the guest with the option "cache=none" to bypass the pagecache?
>> This should help, I think.
> 
> I will read up on where to set that and give it a try. Thanks for the hint.

So here's what I did and found out:

The host system has two 12-core CPUs and 128 GB of RAM.

I have 8 test VMs named kernel1 to kernel8. Each VM has 4 VCPUs, 2 GB of
RAM and one disk, which is an LV on the host. Cache mode is "none":

for vm in kernel1 kernel2 kernel3 kernel4 kernel5 kernel6 kernel7
kernel8; do virsh dumpxml $vm|grep cache; done
      <driver name='qemu' type='raw' cache='none'/>
      <driver name='qemu' type='raw' cache='none'/>
      <driver name='qemu' type='raw' cache='none'/>
      <driver name='qemu' type='raw' cache='none'/>
      <driver name='qemu' type='raw' cache='none'/>
      <driver name='qemu' type='raw' cache='none'/>
      <driver name='qemu' type='raw' cache='none'/>
      <driver name='qemu' type='raw' cache='none'/>

My goal is to give more I/O time to kernel1 and kernel2 than to the rest
of the VMs.

mount -t cgroup -o blkio none /mnt
cd /mnt
mkdir important
mkdir notimportant
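
As a quick sanity check (just what one would expect to see, I did not
record it during the runs), the blkio control files should now show up
in the root of the hierarchy and in both sub-groups:

ls /mnt | grep '^blkio\.'
ls /mnt/important | grep '^blkio\.'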

echo 1000 > important/blkio.weight
echo 100 > notimportant/blkio.weight
for vm in kernel3 kernel4 kernel5 kernel6 kernel7 kernel8; do
  cd /proc/$(pgrep -f "qemu-kvm.*$vm")/task
  for task in *; do
    /bin/echo $task > /mnt/notimportant/tasks
  done
done

for vm in kernel1 kernel2; do
  cd /proc/$(pgrep -f "qemu-kvm.*$vm")/task
  for task in *; do
    /bin/echo $task > /mnt/important/tasks
  done
done
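
To double-check that the threads really ended up where intended (again,
not something I recorded during the runs, just a quick verification):

wc -l /mnt/important/tasks /mnt/notimportant/tasks
grep blkio /proc/$(pgrep -f "qemu-kvm.*kernel1")/cgroup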

Then I used cssh to connect to all 8 VMs and execute
dd if=/dev/zero of=testfile bs=1M count=1500
in all VMs simultaneously.
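
For reference, roughly the same thing could be kicked off without cssh,
assuming the guests are reachable by name over ssh:

for vm in kernel1 kernel2 kernel3 kernel4 kernel5 kernel6 kernel7 kernel8; do
  ssh root@$vm "dd if=/dev/zero of=testfile bs=1M count=1500" &
done
wait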

Results are:
kernel1: 47.5593 s, 33.1 MB/s
kernel2: 60.1464 s, 26.2 MB/s
kernel3: 74.204 s, 21.2 MB/s
kernel4: 77.0759 s, 20.4 MB/s
kernel5: 65.6309 s, 24.0 MB/s
kernel6: 81.1402 s, 19.4 MB/s
kernel7: 70.3881 s, 22.3 MB/s
kernel8: 77.4475 s, 20.3 MB/s

Results vary a little from run to run, but the difference is nowhere
near what weights of 1000 vs. 100 would suggest.
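
One thing I have not checked yet, which might explain it: as far as I
understand, the proportional weight policy is implemented by the CFQ
scheduler, so the weights can only take effect if cfq is the active
scheduler on the device backing the LVs (device name below is just an
example):

cat /sys/block/sda/queue/scheduler
# should print something like: noop deadline [cfq]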

So I went and tried to throttle I/O of kernel3-8 to 10 MB/s instead of
weighting I/O. First I rebooted everything so that no old cgroup
configuration was left in place, and then set up everything except the
100 and 1000 weight configuration.

quote from blkio.txt:
------------
- blkio.throttle.write_bps_device
        - Specifies upper limit on WRITE rate to the device. IO rate is
          specified in bytes per second. Rules are per device. Following is
          the format.

  echo "<major>:<minor>  <rate_bytes_per_second>" >
/cgrp/blkio.write_bps_device
-------------

for vm in kernel1 kernel2 kernel3 kernel4 kernel5 kernel6 kernel7
kernel8; do ls -lH /dev/vdisks/$vm; done
brw-rw---- 1 root root 254, 23 Feb 18 13:45 /dev/vdisks/kernel1
brw-rw---- 1 root root 254, 24 Feb 18 13:45 /dev/vdisks/kernel2
brw-rw---- 1 root root 254, 25 Feb 18 13:45 /dev/vdisks/kernel3
brw-rw---- 1 root root 254, 26 Feb 18 13:45 /dev/vdisks/kernel4
brw-rw---- 1 root root 254, 27 Feb 18 13:45 /dev/vdisks/kernel5
brw-rw---- 1 root root 254, 28 Feb 18 13:45 /dev/vdisks/kernel6
brw-rw---- 1 root root 254, 29 Feb 18 13:45 /dev/vdisks/kernel7
brw-rw---- 1 root root 254, 30 Feb 18 13:45 /dev/vdisks/kernel8

/bin/echo 254:25 10000000 > /mnt/notimportant/blkio.throttle.write_bps_device
/bin/echo 254:26 10000000 > /mnt/notimportant/blkio.throttle.write_bps_device
/bin/echo 254:27 10000000 > /mnt/notimportant/blkio.throttle.write_bps_device
/bin/echo 254:28 10000000 > /mnt/notimportant/blkio.throttle.write_bps_device
/bin/echo 254:29 10000000 > /mnt/notimportant/blkio.throttle.write_bps_device
/bin/echo 254:30 10000000 > /mnt/notimportant/blkio.throttle.write_bps_device
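
The same rules could also be written in a loop that derives the
major:minor numbers via stat instead of reading them off ls (untested
sketch, same device naming as above; stat prints the numbers in hex, so
they are converted to decimal first):

for vm in kernel3 kernel4 kernel5 kernel6 kernel7 kernel8; do
  dev=/dev/vdisks/$vm
  maj=$((0x$(stat -c '%t' $dev)))
  min=$((0x$(stat -c '%T' $dev)))
  /bin/echo "$maj:$min 10000000" > /mnt/notimportant/blkio.throttle.write_bps_device
done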

Then I ran the previous test again. This resulted in an ever-increasing
load on the host system (the last value I checked was ~300). This is
perfectly reproducible.

uptime
 14:42:17 up 12 min,  9 users,  load average: 286.51, 142.22, 56.71
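
If I read the load average right, numbers that high usually mean a lot
of tasks stuck in uninterruptible sleep (D state) rather than actual CPU
work; something to check on the next run, e.g.:

ps -eo state,pid,comm | awk '$1 == "D"'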

So, at least in my case, it does not seem to work too well (yet).

Regards
Dominik



