[vfio-users] Why does the audio from VM crackle?
Milos Kaurin
milos.kaurin at gmail.com
Thu Jun 9 17:56:50 UTC 2016
model name : Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
On Thu, Jun 9, 2016 at 6:56 PM, Spam House <kthxplzdie at gmail.com> wrote:
> Milos -
>
> What CPU are you using?
>
> On Thu, Jun 9, 2016 at 11:43 AM, Milos Kaurin <milos.kaurin at gmail.com>
> wrote:
>
>> Yes, this keeps coming up now and again on this list.
>>
>> My solution was:
>>
>> 1. Buy a dedicated PCIE sound card
>> 2. Pass it through to the guest
>> 3. Route audio: mobo soundcard > guest soundcard
>>
>> In essence, the always-on guest would handle all audio, but I would
>> still get occasional crackles, especially from web browsers (host &
>> guest); games much less so.
>>
>> To fix crackles, I did the following:
>> 1. In the kernel cmdline: Use hugepages, 16x1GB for the guest (leaving
>> the other 16GB of the 32GB total for the host)
>> 2. In the kernel cmdline: Use isolcpus (6 cores for the guest, 2 for the
>> host)
>> 3. In the kernel cmdline: Use nohz_full and rcu_nocbs for the guest cores
>> 4. In the virt XML: Use cputune, hugepages, cpu mode=host-passthrough
>> with the correct topology
>> 5. In the virt XML: iothreadpin and emulatorpin for the 2 host CPUs
>> 6. In the virt XML: Use virtio wherever possible (net, block devices)
>>
>> I think I covered everything. If anyone else has any other tips, please
>> let me know.
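[One quick sanity check worth doing before rebooting into a setup like the one in the appendix below: make sure the hugepage count actually covers the guest's memory. A minimal sketch, using the sizes from the config below (16777216 KiB guest memory, 1 GiB pages):]

```shell
# Sketch: derive the hugepages= count from the guest memory size.
# 16777216 KiB and the 1 GiB page size are taken from the XML/grub below.
guest_kib=16777216
page_kib=$((1024 * 1024))   # one 1 GiB hugepage, expressed in KiB
echo "hugepages=$((guest_kib / page_kib))"
# prints "hugepages=16", matching hugepages=16 in the grub line
```

[If the division doesn't come out even, round up, or the guest will fail to start with hugepage-backed memory.]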
>>
>>
>>
>> Appendix
>>
>>
>> My /etc/default/grub:
>>
>> GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on iommu=pt kvm.ignore_msrs=1
>> rd.driver.pre=vfio-pci hugepagesz=1G default_hugepagesz=1G hugepages=16
>> nohz_full=1,2,3,5,6,7 rcu_nocbs=1,2,3,5,6,7 isolcpus=1,2,3,5,6,7"
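[A common slip with this kind of cmdline is letting the three core lists drift apart after a CPU or config change. A throwaway check, assuming the cmdline string above (on a live system you could read /proc/cmdline instead):]

```shell
# Sketch: verify isolcpus, nohz_full and rcu_nocbs name the same cores.
cmdline="intel_iommu=on iommu=pt hugepagesz=1G hugepages=16 nohz_full=1,2,3,5,6,7 rcu_nocbs=1,2,3,5,6,7 isolcpus=1,2,3,5,6,7"
iso=$(echo "$cmdline"  | grep -o 'isolcpus=[0-9,]*'  | cut -d= -f2)
nohz=$(echo "$cmdline" | grep -o 'nohz_full=[0-9,]*' | cut -d= -f2)
rcu=$(echo "$cmdline"  | grep -o 'rcu_nocbs=[0-9,]*' | cut -d= -f2)
[ "$iso" = "$nohz" ] && [ "$iso" = "$rcu" ] && echo "core lists agree: $iso"
```

[Also remember that editing /etc/default/grub does nothing by itself; on a Fedora-style system you still need to regenerate the config, e.g. with grub2-mkconfig -o /boot/grub2/grub.cfg, and reboot.]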
>>
>>
>>
>> My XML:
>>
>> <domain type='kvm'>
>> <name>Win10</name>
>> <uuid>4bb72e5b-c886-425f-8280-b69755ebf054</uuid>
>> <memory unit='KiB'>16777216</memory>
>> <currentMemory unit='KiB'>16777216</currentMemory>
>> <memoryBacking>
>> <hugepages/>
>> </memoryBacking>
>> <vcpu placement='static'>6</vcpu>
>> <iothreads>2</iothreads>
>> <cputune>
>> <vcpupin vcpu='0' cpuset='1'/>
>> <vcpupin vcpu='1' cpuset='5'/>
>> <vcpupin vcpu='2' cpuset='2'/>
>> <vcpupin vcpu='3' cpuset='6'/>
>> <vcpupin vcpu='4' cpuset='3'/>
>> <vcpupin vcpu='5' cpuset='7'/>
>> <emulatorpin cpuset='0,4'/>
>> <iothreadpin iothread='1' cpuset='0'/>
>> <iothreadpin iothread='2' cpuset='4'/>
>> </cputune>
>> <os>
>> <type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type>
>> <loader readonly='yes'
>> type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
>> <nvram>/var/lib/libvirt/qemu/nvram/Win10_VARS.fd</nvram>
>> </os>
>> <features>
>> <acpi/>
>> <apic/>
>> <kvm>
>> <hidden state='on'/>
>> </kvm>
>> <vmport state='off'/>
>> </features>
>> <cpu mode='host-passthrough'>
>> <topology sockets='1' cores='3' threads='2'/>
>> </cpu>
>> <clock offset='localtime'>
>> <timer name='rtc' tickpolicy='catchup'/>
>> <timer name='pit' tickpolicy='delay'/>
>> <timer name='hpet' present='no'/>
>> </clock>
>> <on_poweroff>destroy</on_poweroff>
>> <on_reboot>restart</on_reboot>
>> <on_crash>restart</on_crash>
>> <pm>
>> <suspend-to-mem enabled='no'/>
>> <suspend-to-disk enabled='no'/>
>> </pm>
>> <devices>
>> <emulator>/usr/bin/qemu-kvm</emulator>
>> <disk type='block' device='disk'>
>> <driver name='qemu' type='raw' cache='none' io='native'/>
>> <source dev='/dev/sdb'/>
>> <target dev='vda' bus='virtio'/>
>> <boot order='2'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x09'
>> function='0x0'/>
>> </disk>
>> <controller type='pci' index='0' model='pci-root'/>
>> <controller type='scsi' index='0' model='virtio-scsi'>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
>> function='0x0'/>
>> </controller>
>> <controller type='usb' index='0' model='none'/>
>> <interface type='bridge'>
>> <mac address='52:54:00:3e:c8:03'/>
>> <source bridge='br0'/>
>> <model type='virtio'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
>> function='0x0'/>
>> </interface>
>> <hostdev mode='subsystem' type='pci' managed='yes'>
>> <source>
>> <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
>> </source>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a'
>> function='0x0'/>
>> </hostdev>
>> <hostdev mode='subsystem' type='pci' managed='yes'>
>> <source>
>> <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
>> </source>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x0b'
>> function='0x0'/>
>> </hostdev>
>> <hostdev mode='subsystem' type='pci' managed='yes'>
>> <source>
>> <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
>> </source>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
>> function='0x0'/>
>> </hostdev>
>> <hostdev mode='subsystem' type='pci' managed='yes'>
>> <source>
>> <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
>> </source>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
>> function='0x0'/>
>> </hostdev>
>> <memballoon model='none'/>
>> </devices>
>> </domain>
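[One pitfall with host-passthrough plus a manual <topology>: the sockets x cores x threads product has to equal the vcpu count, and each vcpupin should hit a distinct host CPU. A quick sketch with the values copied from the XML above (the hyperthread-sibling note is an assumption based on the usual Linux numbering for a 4-core/8-thread part):]

```shell
# Sketch: <topology> product must match <vcpu>, values from the XML above.
sockets=1; cores=3; threads=2; vcpus=6
[ $((sockets * cores * threads)) -eq $vcpus ] && echo "topology matches vcpu count"

# Pins from the XML: vcpu 0..5 -> host cpus 1,5,2,6,3,7. On a 4c/8t i7,
# siblings are typically cpu N and N+4, so (1,5) (2,6) (3,7) pair up real
# hyperthread siblings, which is what threads='2' implies to the guest.
pins="1 5 2 6 3 7"
[ "$(echo $pins | tr ' ' '\n' | sort -u | wc -l)" -eq $vcpus ] && echo "all pins distinct"
```

[If the product and the vcpu count disagree, or two vcpus share a host CPU, the guest scheduler's picture of the CPU no longer matches reality, which is exactly the kind of jitter that shows up as audio crackle.]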
>>
>>
>>
>>
>>
>> _______________________________________________
>> vfio-users mailing list
>> vfio-users at redhat.com
>> https://www.redhat.com/mailman/listinfo/vfio-users
>>
>>
>