Personally I use the sound of the graphics card directly, without setting anything special. For my USB headset I bought a USB controller card which I pass through to the VM; this way I can use the Windows driver, and it works just fine with everything :D

I cannot use isolated CPUs (I won't), since I also use my desktop for Linux things, like programming, and I want to be able to do things with my host when the guest is shut down; otherwise I would have gone with a native install ;)

I wanted to give cset a try, to temporarily reserve a core for the host and move all its processes there, but that didn't work out well with libvirt, so...

What is your use of the pulseaudio server?

2016-04-15 16:34 GMT+02:00 Ivan Volosyuk <ivan volosyuk gmail com>:

Optimization is my favorite topic.

I use qcow2 on bcache (HDD with SSD cache).
- In Windows I disabled: disk indexing, boot optimizations, and periodic antivirus scans (they are all bad for bcache). For a pure SSD these might be OK.

Another Windows optimization is using MSI for interrupt handling; see Alex's blog.

I use a virtio SCSI device for my root device; I'm not sure if this is the best configuration for qcow2. For a raw partition there should be completely different flags.

STORAGE+=" -drive file=$HDD,id=disk,if=none,format=qcow2,cache=writeback,aio=threads"
STORAGE+=" -device scsi-hd,bus=scsi0.0,drive=disk"

Even with MSI I still have issues with audio crackles; these are the last optimizations I tried to reduce them. Not sure if this counts as an optimization:

SND=" -soundhw ac97 -rtc base=utc,driftfix=slew -no-hpet -global kvm-pit.lost_tick_policy=discard"
SND_DRIVER_OPTS="QEMU_AUDIO_DRV=pa QEMU_PA_SAMPLES=1024"

Use isolated CPUs. This makes them unavailable to the rest of the system, but we are talking about a gaming machine, right?
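Pulling Ivan's fragments together, a launch script might look like the sketch below. The `$HDD` path and memory size are placeholders, and the virtio-scsi-pci controller line is my addition: the `scsi-hd` device refers to a bus named `scsi0`, so a controller with that id has to exist somewhere in the command line.

```shell
#!/bin/sh
# Sketch: assemble a qemu command line from the STORAGE and SND
# fragments quoted above. Paths and sizes are placeholders; the
# virtio-scsi-pci controller is an assumed addition (the snippet
# above references bus scsi0.0 but does not define the controller).
HDD=/var/lib/libvirt/images/win10.qcow2

STORAGE=" -drive file=$HDD,id=disk,if=none,format=qcow2,cache=writeback,aio=threads"
STORAGE="$STORAGE -device virtio-scsi-pci,id=scsi0"
STORAGE="$STORAGE -device scsi-hd,bus=scsi0.0,drive=disk"

SND=" -soundhw ac97 -rtc base=utc,driftfix=slew -no-hpet"
SND="$SND -global kvm-pit.lost_tick_policy=discard"

CMD="QEMU_AUDIO_DRV=pa QEMU_PA_SAMPLES=1024 qemu-system-x86_64 -enable-kvm -cpu host -m 8192$STORAGE$SND"
# Dry run: print the command instead of executing it.
echo "$CMD"
```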
This should unload those CPUs from normal kernel and userspace tasks.

kernel option: isolcpus=4-7

Use realtime priority on the pulseaudio server.
TODO: make sure it uses shared memory as the communication channel with qemu.

--
Regards,
Ivan
_______________________________________________

2016-04-15 6:13 GMT+02:00 Okky Hendriansyah <okky htf gmail com>:

I think Alex has mentioned this before, and if I recall correctly pc-i440fx is preferable since it is simpler, and going to pc-q35 won't bring any performance benefit. Currently I only use pc-q35 for my Hackintosh guest specifically. I haven't benchmarked these two machine types recently though, so the result might have changed.
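To make the isolcpus suggestion concrete: the parameter goes on the kernel command line, and the QEMU vCPU threads are then pinned onto the isolated cores, e.g. with taskset (libvirt's vcpupin does the same job). A sketch with placeholder core numbers and thread PIDs; it prints the pinning commands rather than running them:

```shell
#!/bin/sh
# Sketch: pin vCPU threads onto cores isolated with isolcpus=4-7.
# Kernel command line (e.g. in /etc/default/grub): isolcpus=4-7
# The PIDs below are placeholders for the qemu vCPU thread ids
# (normally found via /proc/<qemu-pid>/task or libvirt).
ISOLATED_FIRST=4
VCPU_PIDS="1001 1002 1003 1004"

i=0
CMDS=""
for pid in $VCPU_PIDS; do
    core=$((ISOLATED_FIRST + i))
    # Dry run: collect the taskset invocations instead of executing them.
    CMDS="$CMDS taskset -pc $core $pid;"
    i=$((i + 1))
done
echo "$CMDS"
```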
I just read Alex's mail below yours; indeed you are right, nothing changes, so much fuss for nothing :/ Thanks for the clarification, Alex, btw.

According to one of the reddit users at /r/vfio, avoiding hv_vapic and hv_synic on newer Intel CPUs (Ivy Bridge-E onwards, which have built-in Intel APICv) will generally improve performance by reducing VM exits. Currently I'm using these options:
-cpu host,kvm=off,hv_time,hv_relaxed,hv_spinlocks=0x1fff,hv_vpindex,hv_reset,hv_runtime,hv_crash,hv_vendor_id=freyja

I read that post too, though I don't have enough knowledge about virtualization to really understand what this guy is talking about. I bumped into this: https://software.intel.com/en-us/blogs/2009/06/25/virtualization-and-performance-understanding-vm-exits but no. I will try your additional options while waiting for the latest libvirt version.

Those two kernel configurations (1000 Hz timer and Voluntary preemption) turned my stuttery Garret into a butter-smooth Garret ;). Another plus is that ZFS, which I use extensively for the guest OS images, also prefers Voluntary preemption.

I personally have my VM image in a qcow2 container on an SSD. It would be nice to hear from someone with I/O knowledge, since there are tons of knobs; optimizing the storage part would be great, especially for people running on SSDs. I found a thread on the vfio reddit about it.

Speaking of drives, from what I read it seems possible for qemu/kvm to boot a native (non-virtualized) install of Windows from a passed-through drive. If so, is there something special to do? I might go through the hassle of reinstalling my whole Windows system, but I prefer to be sure before touching anything (though I might just back up the image, boot my VM from it and clone Windows, which is much easier than reinstalling). That would be glorious for comparative benchmarks, since otherwise one would need two installs on the same type of drive to have the same configuration; 3DMark only runs on Windows, and the Heaven benchmark only loads the GPU, so it is kind of useless in our case, IMO.

It seems like a container is not a great idea after all, and that it would be better to reserve a full disk for the VM; it might be worth formatting, but I'm not sure about that.

I think MADVISE hugepages don't directly affect guest performance.
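On the whole-disk question: QEMU can indeed be pointed at a raw block device instead of an image file, and a native Windows install on that disk can be booted inside the VM (Windows activation may complain about the changed hardware). A hedged sketch of the drive options, matching Ivan's remark that a raw device wants different flags than qcow2; the by-id path is a placeholder:

```shell
#!/bin/sh
# Sketch: hand a whole physical disk to the guest as a raw virtio-scsi
# disk. The by-id path is a placeholder; prefer a stable by-id name
# over /dev/sdX, which can change between boots.
DISK=/dev/disk/by-id/ata-EXAMPLE_DISK
RAWSTORAGE=" -drive file=$DISK,id=disk,if=none,format=raw,cache=none,aio=native"
RAWSTORAGE="$RAWSTORAGE -device virtio-scsi-pci,id=scsi0"
RAWSTORAGE="$RAWSTORAGE -device scsi-hd,bus=scsi0.0,drive=disk"
echo "$RAWSTORAGE"
```

cache=none with aio=native is a common pairing for raw block devices, since the host page cache adds little when the guest manages its own caching.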
Though I find that using this option can help eliminate unneeded hugepage requests from applications that do not benefit from hugepages. So this option is more about efficient memory usage on the host than about guest performance, since the guest is already using dedicated hugepages (hugetlbfs).

I was under the impression that classic (transparent) hugepages could reserve memory for themselves, thus interfering with hugetlb. You mean that by mounting hugetlbfs the memory is hidden from the host?

Don't forget to keep Windows paging enabled if your guest memory is below the requirement. I got a low-memory warning in The Witcher 3 (I had set the guest memory to 8 GB, and it still had 50%+ free memory) before I re-enabled Windows paging on C: again. The other alternative is to increase the guest memory: when I set it to 16 GB without Windows paging, The Witcher 3 didn't complain anymore.

Windows paging? What is this? I gave 8 GB of RAM to the guest, which should be enough; I'm closely monitoring resource consumption with RivaTuner Statistics Server, and I never go beyond 6 GB even when benchmarking.

Since we are using the virtio drivers from Red Hat, I wonder if updating them frequently (I don't know whether there are frequent updates, but still) might result in better performance. Speaking of which, if one breaks things while trying to update the drivers, I assume adding the bootmenu option in libvirt allows booting Windows in safe mode, right?
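For the hugetlbfs side of that exchange: pages reserved via vm.nr_hugepages are taken out of the host's normal memory pool (which is the "hidden from the host" effect), and QEMU consumes them through -mem-path. A sketch that computes the 2 MiB page count needed for an 8 GiB guest and prints (not executes) the reservation commands; paths and sizes are placeholders:

```shell
#!/bin/sh
# Sketch: size the hugepage pool for an 8 GiB guest using the default
# x86_64 hugepage size of 2 MiB (2048 KiB).
GUEST_MB=8192
PAGE_KB=2048
PAGES=$((GUEST_MB * 1024 / PAGE_KB))
echo "$PAGES"    # pages needed for the whole guest: 8192 MiB / 2 MiB

# Dry run: print the host-side commands instead of running them.
echo "sysctl vm.nr_hugepages=$PAGES"
echo "mount -t hugetlbfs hugetlbfs /dev/hugepages"
echo "qemu-system-x86_64 -m ${GUEST_MB} -mem-path /dev/hugepages ..."
```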
vfio-users mailing list
vfio-users redhat com