[libvirt-users] libvirt possibly ignoring cache=none ?
Brano Zarnovican
zarnovican at gmail.com
Mon Aug 12 07:20:42 UTC 2013
Hi,
after an instance was OOM-killed (output attached), we implemented a
Nagios check to monitor instances close to the limit. However, we are
now getting false alarms, because an instance can approach the cgroup
limit for valid reasons.
> However, this behavior won't change with caches. Kernel knows that
> those are data (s)he can discard so before killing the process, unneeded
> caches will get dropped and after there is nothing to drop, the
> procedure falls back to killing the process.
I guess the check will have to subtract the cache size from
'memory.usage_in_bytes'.
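For the record, the subtraction could look something like this (a minimal
sketch, not our actual Nagios plugin; the helper name and the inlined
sample values, taken from the cgget output attached below, are just for
illustration):

```python
def effective_usage(usage_in_bytes, memory_stat_text):
    """Return cgroup memory usage with reclaimable page cache subtracted.

    usage_in_bytes   -- value read from the cgroup's memory.usage_in_bytes
    memory_stat_text -- raw contents of the cgroup's memory.stat file;
                        its 'cache' line counts page cache charged to the
                        cgroup, which the kernel reclaims before resorting
                        to the OOM killer.
    """
    cache = 0
    for line in memory_stat_text.splitlines():
        key, _, value = line.partition(" ")
        if key == "cache":
            cache = int(value)
            break
    return usage_in_bytes - cache

# Sample values from the attached cgget dump of i-000010c5:
stat = "cache 2957312\nrss 11710672896\nmapped_file 24576"
print(effective_usage(11714752512, stat))  # 11711795200
```

Alerting on this adjusted value should avoid paging on usage the kernel
can reclaim anyway.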
It still puzzles me how an instance with caching disabled for all of its
block devices can accumulate such a large cache on the host.
Thanks all for your time,
Regards,
Brano Zarnovican
-------------- next part --------------
Jul 31 04:06:40 prod-cmp17 kernel: qemu-kvm invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0, oom_score_adj=0
Jul 31 04:06:40 prod-cmp17 kernel: qemu-kvm cpuset=vcpu6 mems_allowed=0
Jul 31 04:06:40 prod-cmp17 kernel: Pid: 6433, comm: qemu-kvm Not tainted 2.6.32-358.6.2.el6.x86_64 #1
Jul 31 04:06:40 prod-cmp17 kernel: Call Trace:
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff810cb5f1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8111cdf0>] ? dump_header+0x90/0x1b0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff811722f1>] ? task_in_mem_cgroup+0xe1/0x120
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8111d272>] ? oom_kill_process+0x82/0x2a0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8111d16e>] ? select_bad_process+0x9e/0x120
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8111d9f2>] ? mem_cgroup_out_of_memory+0x92/0xb0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81173534>] ? mem_cgroup_handle_oom+0x274/0x2a0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81170f70>] ? memcg_oom_wake_function+0x0/0xa0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81173b19>] ? __mem_cgroup_try_charge+0x5b9/0x5d0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81174e97>] ? mem_cgroup_charge_common+0x87/0xd0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81174f28>] ? mem_cgroup_newpage_charge+0x48/0x50
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff811429c4>] ? do_wp_page+0x1a4/0x920
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8114393d>] ? handle_pte_fault+0x2cd/0xb50
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81065c54>] ? enqueue_task_fair+0x64/0x100
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff810522fd>] ? check_preempt_curr+0x6d/0x90
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8106317e>] ? try_to_wake_up+0x24e/0x3e0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81510472>] ? _spin_lock+0x12/0x30
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff811443fa>] ? handle_mm_fault+0x23a/0x310
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff811445fa>] ? __get_user_pages+0x12a/0x430
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81144999>] ? get_user_pages+0x49/0x50
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8104c307>] ? get_user_pages_fast+0x157/0x1c0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa01e3343>] ? hva_to_pfn+0x33/0x1a0 [kvm]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8150f776>] ? down_read+0x16/0x30
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa01fea1b>] ? mapping_level+0x17b/0x1d0 [kvm]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa02016b4>] ? tdp_page_fault+0x74/0x160 [kvm]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa01f3a8a>] ? kvm_set_msr_common+0x60a/0xb60 [kvm]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa01fff58>] ? kvm_mmu_page_fault+0x28/0xc0 [kvm]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff812820de>] ? copy_user_generic_unrolled+0x8e/0xb0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa011671c>] ? handle_ept_violation+0x6c/0x140 [kvm_intel]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa0119ef3>] ? vmx_handle_exit+0xc3/0x280 [kvm_intel]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa01f8f7d>] ? kvm_arch_vcpu_ioctl_run+0x4ad/0x10f0 [kvm]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffffa01e1ff4>] ? kvm_vcpu_ioctl+0x434/0x580 [kvm]
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff81194fb2>] ? vfs_ioctl+0x22/0xa0
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff8119547a>] ? do_vfs_ioctl+0x3aa/0x580
Jul 31 04:06:40 prod-cmp17 kernel: [<ffffffff811956d1>] ? sys_ioctl+0x81/0xa0
Jul 31 04:06:41 prod-cmp17 kernel: [<ffffffff810dc645>] ? __audit_syscall_exit+0x265/0x290
Jul 31 04:06:41 prod-cmp17 kernel: [<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Jul 31 04:06:41 prod-cmp17 kernel: Task in /libvirt/qemu/i-000010c5 killed as a result of limit of /libvirt/qemu/i-000010c5
Jul 31 04:06:41 prod-cmp17 kernel: memory: usage 36492800kB, limit 36492800kB, failcnt 10319290
Jul 31 04:06:41 prod-cmp17 kernel: memory+swap: usage 48727788kB, limit 9007199254740991kB, failcnt 0
Jul 31 04:06:41 prod-cmp17 kernel: Mem-Info:
Jul 31 04:06:41 prod-cmp17 kernel: Node 0 DMA per-cpu:
Jul 31 04:06:41 prod-cmp17 kernel: CPU 0: hi: 0, btch: 1 usd: 0
Jul 31 04:06:41 prod-cmp17 kernel: CPU 1: hi: 0, btch: 1 usd: 0
Jul 31 04:06:41 prod-cmp17 kernel: CPU 2: hi: 0, btch: 1 usd: 0
Jul 31 04:06:41 prod-cmp17 kernel: CPU 3: hi: 0, btch: 1 usd: 0
Jul 31 04:06:41 prod-cmp17 kernel: CPU 4: hi: 0, btch: 1 usd: 0
Jul 31 04:06:41 prod-cmp17 kernel: CPU 5: hi: 0, btch: 1 usd: 0
Jul 31 04:06:41 prod-cmp17 kernel: CPU 6: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 7: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 8: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 9: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 10: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 11: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 12: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 13: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 14: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 15: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 16: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 17: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 18: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 19: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 20: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 21: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 22: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: CPU 23: hi: 0, btch: 1 usd: 0
Jul 31 04:06:42 prod-cmp17 kernel: Node 0 DMA32 per-cpu:
Jul 31 04:06:42 prod-cmp17 kernel: CPU 0: hi: 186, btch: 31 usd: 158
Jul 31 04:06:42 prod-cmp17 kernel: CPU 1: hi: 186, btch: 31 usd: 184
Jul 31 04:06:42 prod-cmp17 kernel: CPU 2: hi: 186, btch: 31 usd: 183
Jul 31 04:06:42 prod-cmp17 kernel: CPU 3: hi: 186, btch: 31 usd: 190
Jul 31 04:06:42 prod-cmp17 kernel: CPU 4: hi: 186, btch: 31 usd: 160
Jul 31 04:06:42 prod-cmp17 kernel: CPU 5: hi: 186, btch: 31 usd: 182
Jul 31 04:06:42 prod-cmp17 kernel: CPU 6: hi: 186, btch: 31 usd: 163
Jul 31 04:06:42 prod-cmp17 kernel: CPU 7: hi: 186, btch: 31 usd: 157
Jul 31 04:06:43 prod-cmp17 kernel: CPU 8: hi: 186, btch: 31 usd: 181
Jul 31 04:06:43 prod-cmp17 kernel: CPU 9: hi: 186, btch: 31 usd: 160
Jul 31 04:06:43 prod-cmp17 kernel: CPU 10: hi: 186, btch: 31 usd: 181
Jul 31 04:06:43 prod-cmp17 kernel: CPU 11: hi: 186, btch: 31 usd: 156
Jul 31 04:06:43 prod-cmp17 kernel: CPU 12: hi: 186, btch: 31 usd: 166
Jul 31 04:06:43 prod-cmp17 kernel: CPU 13: hi: 186, btch: 31 usd: 159
Jul 31 04:06:43 prod-cmp17 kernel: CPU 14: hi: 186, btch: 31 usd: 184
Jul 31 04:06:43 prod-cmp17 kernel: CPU 15: hi: 186, btch: 31 usd: 164
Jul 31 04:06:43 prod-cmp17 kernel: CPU 16: hi: 186, btch: 31 usd: 184
Jul 31 04:06:43 prod-cmp17 kernel: CPU 17: hi: 186, btch: 31 usd: 183
Jul 31 04:06:43 prod-cmp17 kernel: CPU 18: hi: 186, btch: 31 usd: 164
Jul 31 04:06:43 prod-cmp17 kernel: CPU 19: hi: 186, btch: 31 usd: 167
Jul 31 04:06:43 prod-cmp17 kernel: CPU 20: hi: 186, btch: 31 usd: 169
Jul 31 04:06:43 prod-cmp17 kernel: CPU 21: hi: 186, btch: 31 usd: 159
Jul 31 04:06:43 prod-cmp17 kernel: CPU 22: hi: 186, btch: 31 usd: 176
Jul 31 04:06:43 prod-cmp17 kernel: CPU 23: hi: 186, btch: 31 usd: 155
Jul 31 04:06:43 prod-cmp17 kernel: Node 0 Normal per-cpu:
Jul 31 04:06:43 prod-cmp17 kernel: CPU 0: hi: 186, btch: 31 usd: 160
Jul 31 04:06:43 prod-cmp17 kernel: CPU 1: hi: 186, btch: 31 usd: 106
Jul 31 04:06:43 prod-cmp17 kernel: CPU 2: hi: 186, btch: 31 usd: 135
Jul 31 04:06:43 prod-cmp17 kernel: CPU 3: hi: 186, btch: 31 usd: 28
Jul 31 04:06:43 prod-cmp17 kernel: CPU 4: hi: 186, btch: 31 usd: 140
Jul 31 04:06:43 prod-cmp17 kernel: CPU 5: hi: 186, btch: 31 usd: 126
Jul 31 04:06:43 prod-cmp17 kernel: CPU 6: hi: 186, btch: 31 usd: 123
Jul 31 04:06:43 prod-cmp17 kernel: CPU 7: hi: 186, btch: 31 usd: 163
Jul 31 04:06:43 prod-cmp17 kernel: CPU 8: hi: 186, btch: 31 usd: 164
Jul 31 04:06:43 prod-cmp17 kernel: CPU 9: hi: 186, btch: 31 usd: 92
Jul 31 04:06:43 prod-cmp17 kernel: CPU 10: hi: 186, btch: 31 usd: 1
Jul 31 04:06:43 prod-cmp17 kernel: CPU 11: hi: 186, btch: 31 usd: 50
Jul 31 04:06:43 prod-cmp17 kernel: CPU 12: hi: 186, btch: 31 usd: 117
Jul 31 04:06:43 prod-cmp17 kernel: CPU 13: hi: 186, btch: 31 usd: 9
Jul 31 04:06:43 prod-cmp17 kernel: CPU 14: hi: 186, btch: 31 usd: 75
Jul 31 04:06:43 prod-cmp17 kernel: CPU 15: hi: 186, btch: 31 usd: 99
Jul 31 04:06:43 prod-cmp17 kernel: CPU 16: hi: 186, btch: 31 usd: 34
Jul 31 04:06:43 prod-cmp17 kernel: CPU 17: hi: 186, btch: 31 usd: 89
Jul 31 04:06:43 prod-cmp17 kernel: CPU 18: hi: 186, btch: 31 usd: 85
Jul 31 04:06:43 prod-cmp17 kernel: CPU 19: hi: 186, btch: 31 usd: 37
Jul 31 04:06:43 prod-cmp17 kernel: CPU 20: hi: 186, btch: 31 usd: 179
Jul 31 04:06:43 prod-cmp17 kernel: CPU 21: hi: 186, btch: 31 usd: 87
Jul 31 04:06:43 prod-cmp17 kernel: CPU 22: hi: 186, btch: 31 usd: 181
Jul 31 04:06:43 prod-cmp17 kernel: CPU 23: hi: 186, btch: 31 usd: 180
Jul 31 04:06:43 prod-cmp17 kernel: active_anon:15442420 inactive_anon:1708947 isolated_anon:0
Jul 31 04:06:43 prod-cmp17 kernel: active_file:9781 inactive_file:6255 isolated_file:0
Jul 31 04:06:43 prod-cmp17 kernel: unevictable:6356 dirty:3 writeback:0 unstable:0
Jul 31 04:06:43 prod-cmp17 kernel: free:6716678 slab_reclaimable:12259 slab_unreclaimable:470254
Jul 31 04:06:43 prod-cmp17 kernel: mapped:6596 shmem:84 pagetables:45604 bounce:0
Jul 31 04:06:43 prod-cmp17 kernel: Node 0 DMA free:15696kB min:8kB low:8kB high:12kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15304kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Jul 31 04:06:43 prod-cmp17 kernel: lowmem_reserve[]: 0 2980 96910 96910
Jul 31 04:06:44 prod-cmp17 kernel: Node 0 DMA32 free:761244kB min:2076kB low:2592kB high:3112kB active_anon:697808kB inactive_anon:15212kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:3051888kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:220kB slab_unreclaimable:13408kB kernel_stack:24kB pagetables:1844kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul 31 04:06:44 prod-cmp17 kernel: lowmem_reserve[]: 0 0 93930 93930
Jul 31 04:06:44 prod-cmp17 kernel: Node 0 Normal free:26089772kB min:65492kB low:81864kB high:98236kB active_anon:61071872kB inactive_anon:6820576kB active_file:39124kB inactive_file:25020kB unevictable:25424kB isolated(anon):0kB isolated(file):0kB present:96184320kB mlocked:9072kB dirty:12kB writeback:0kB mapped:26384kB shmem:336kB slab_reclaimable:48816kB slab_unreclaimable:1867608kB kernel_stack:7504kB pagetables:180572kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Jul 31 04:06:44 prod-cmp17 kernel: lowmem_reserve[]: 0 0 0 0
Jul 31 04:06:44 prod-cmp17 kernel: Node 0 DMA: 0*4kB 2*8kB 2*16kB 1*32kB 2*64kB 1*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15696kB
Jul 31 04:06:44 prod-cmp17 kernel: Node 0 DMA32: 5207*4kB 2670*8kB 1677*16kB 1036*32kB 748*64kB 547*128kB 418*256kB 280*512kB 224*1024kB 24*2048kB 3*4096kB = 761244kB
Jul 31 04:06:44 prod-cmp17 kernel: Node 0 Normal: 7640*4kB 504424*8kB 200459*16kB 66357*32kB 46855*64kB 32460*128kB 17805*256kB 6615*512kB 1557*1024kB 0*2048kB 0*4096kB = 26089648kB
Jul 31 04:06:44 prod-cmp17 kernel: 1721282 total pagecache pages
Jul 31 04:06:44 prod-cmp17 kernel: 1704554 pages in swap cache
Jul 31 04:06:44 prod-cmp17 kernel: Swap cache stats: add 43398730, delete 41694176, find 10407108/13258331
Jul 31 04:06:44 prod-cmp17 kernel: Free swap = 0kB
Jul 31 04:06:44 prod-cmp17 kernel: Total swap = 14678008kB
Jul 31 04:06:45 prod-cmp17 kernel: 25165808 pages RAM
Jul 31 04:06:45 prod-cmp17 kernel: 404624 pages reserved
Jul 31 04:06:45 prod-cmp17 kernel: 2912097 pages shared
Jul 31 04:06:45 prod-cmp17 kernel: 16687393 pages non-shared
Jul 31 04:06:45 prod-cmp17 kernel: [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
Jul 31 04:06:45 prod-cmp17 kernel: [ 6422] 107 6422 6830369 4723771 5 0 0 qemu-kvm
Jul 31 04:06:45 prod-cmp17 kernel: Memory cgroup out of memory: Kill process 6422 (qemu-kvm) score 666 or sacrifice child
Jul 31 04:06:45 prod-cmp17 kernel: Killed process 6422, UID 107, (qemu-kvm) total-vm:27321476kB, anon-rss:18890492kB, file-rss:4592kB
Jul 31 04:06:45 prod-cmp17 kernel: Kill process 6424 (vhost-6422) sharing same memory
Jul 31 04:06:48 prod-cmp17 tgtd: conn_close(101) connection closed, 0xdc5808 1
-------------- next part --------------
[root at prod-cmp17 ~]# cgget libvirt/qemu/i-000010c5
libvirt/qemu/i-000010c5:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 0
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-23
cpu.rt_period_us: 1000000
cpu.rt_runtime_us: 0
cpu.stat: nr_periods 0
nr_throttled 0
throttled_time 0
cpu.cfs_period_us: 100000
cpu.cfs_quota_us: -1
cpu.shares: 1024
cpuacct.stat: user 91373
system 8156091
cpuacct.usage_percpu: 41508567422657 43522389859855 41007220117797 38171498052525 28157228685390 28451779832726 19457777703452 19481644594782 13827320158119 13638887599693 15905338936204 15921013620247 22610645227540 23256431187951 19124192319370 17617504293692 14831317576657 14963573355789 10429663169917 10396672139884 7580644354161 7541682767919 10060737621193 9969093934859
cpuacct.usage: 487432824532379
memory.memsw.failcnt: 0
memory.memsw.limit_in_bytes: 9223372036854775807
memory.memsw.max_usage_in_bytes: 24061014016
memory.memsw.usage_in_bytes: 11714752512
memory.oom_control: oom_kill_disable 0
under_oom 0
memory.move_charge_at_immigrate: 0
memory.swappiness: 10
memory.use_hierarchy: 1
memory.force_empty:
memory.stat: cache 2957312
rss 11710672896
mapped_file 24576
pgpgin 154682126
pgpgout 153134090
swap 0
inactive_anon 49152
active_anon 11710623744
inactive_file 2748416
active_file 188416
unevictable 0
hierarchical_memory_limit 37368627200
hierarchical_memsw_limit 9223372036854775807
total_cache 2957312
total_rss 11710672896
total_mapped_file 24576
total_pgpgin 154682126
total_pgpgout 153134090
total_swap 0
total_inactive_anon 49152
total_active_anon 11710623744
total_inactive_file 2748416
total_active_file 188416
total_unevictable 0
memory.failcnt: 0
memory.soft_limit_in_bytes: 9223372036854775807
memory.limit_in_bytes: 37368627200
memory.max_usage_in_bytes: 24061014016
memory.usage_in_bytes: 11714752512
devices.list: b 253:9 rw
b 253:8 rw
c 136:* rw
c 1:3 rw
c 1:7 rw
c 1:5 rw
c 1:8 rw
c 1:9 rw
c 5:2 rw
c 10:232 rw
c 254:0 rw
c 10:228 rw
devices.deny:
devices.allow:
freezer.state: THAWED
blkio.throttle.io_serviced: 253:8 Read 13566965
253:8 Write 11272645
253:8 Sync 0
253:8 Async 24839610
253:8 Total 24839610
253:7 Read 13794024
253:7 Write 14681932
253:7 Sync 0
253:7 Async 28475956
253:7 Total 28475956
253:9 Read 227059
253:9 Write 3409287
253:9 Sync 0
253:9 Async 3636346
253:9 Total 3636346
253:5 Read 16969
253:5 Write 122446
253:5 Sync 0
253:5 Async 139415
253:5 Total 139415
253:6 Read 16969
253:6 Write 122446
253:6 Sync 0
253:6 Async 139415
253:6 Total 139415
253:1 Read 108
253:1 Write 0
253:1 Sync 0
253:1 Async 108
253:1 Total 108
Total 57230850
blkio.throttle.io_service_bytes: 253:8 Read 211119605760
253:8 Write 1614540419072
253:8 Sync 0
253:8 Async 1825660024832
253:8 Total 1825660024832
253:7 Read 224238285824
253:7 Write 1774989701120
253:7 Sync 0
253:7 Async 1999227986944
253:7 Total 1999227986944
253:9 Read 13118680064
253:9 Write 160449282048
253:9 Sync 0
253:9 Async 173567962112
253:9 Total 173567962112
253:5 Read 1635995648
253:5 Write 2157363200
253:5 Sync 0
253:5 Async 3793358848
253:5 Total 3793358848
253:6 Read 1635995648
253:6 Write 2157363200
253:6 Sync 0
253:6 Async 3793358848
253:6 Total 3793358848
253:1 Read 2871296
253:1 Write 0
253:1 Sync 0
253:1 Async 2871296
253:1 Total 2871296
Total 4006045562880
blkio.throttle.write_iops_device:
blkio.throttle.read_iops_device:
blkio.throttle.write_bps_device:
blkio.throttle.read_bps_device:
blkio.reset_stats:
blkio.io_queued: Total 0
blkio.io_merged: Total 0
blkio.io_wait_time: Total 0
blkio.io_service_time: Total 0
blkio.io_serviced: Total 0
blkio.io_service_bytes: Total 0
blkio.sectors:
blkio.time:
blkio.weight: 500
blkio.weight_device: