[vfio-users] Brutal DPC Latency - how is yours? check it please and report back

Quentin Deldycke quentindeldycke at gmail.com
Sat Jan 9 15:14:55 UTC 2016


I use virsh:

===SNIP===
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <emulatorpin cpuset='6-7'/>
  </cputune>
===SNAP===
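
Once the guest is running, the pinning can be double-checked from the host. A
quick sanity check (just a sketch, assuming the libvirt domain is called
"win10" - substitute your own domain name):

===SNIP===
# list the vCPU-to-host-CPU pinning of the running domain
virsh vcpupin win10
# list where the emulator threads are allowed to run
virsh emulatorpin win10
===SNAP===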

I have a prepare script running:

===SNIP===
# mount the cpuset pseudo-filesystem
sudo mkdir /cpuset
sudo mount -t cpuset none /cpuset/
cd /cpuset
echo 0 | sudo tee -a cpuset.cpu_exclusive
echo 0 | sudo tee -a cpuset.mem_exclusive

sudo mkdir sys
echo 'Building shield for core system... threads 0 and 4, and we place all running tasks there'
/bin/echo 0,4 | sudo tee -a sys/cpuset.cpus
/bin/echo 0 | sudo tee -a sys/cpuset.mems
/bin/echo 0 | sudo tee -a sys/cpuset.cpu_exclusive
/bin/echo 0 | sudo tee -a sys/cpuset.mem_exclusive
for T in $(cat tasks); do sudo bash -c "/bin/echo $T > sys/tasks" >/dev/null 2>&1; done
cd -
===SNAP===
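
To confirm the shield took effect, something like this should work (a sketch
based on the mount point and emulator pin above - adjust the process match to
your qemu binary name):

===SNIP===
# host tasks moved into the "sys" set should now be confined to threads 0 and 4
grep Cpus_allowed_list /proc/1/status
# the qemu emulator process should report its emulatorpin (6-7 here)
grep Cpus_allowed_list /proc/$(pgrep -f qemu-system | head -n1)/status
===SNAP===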

Note that I use this kernel command line:
nohz_full=1,2,3,4,5,6,7 rcu_nocbs=1,2,3,4,5,6,7 default_hugepagesz=1G hugepagesz=1G hugepages=12
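
In case it helps, one way to make such a command line persistent is via GRUB
(a sketch, assuming a GRUB-based setup; the file and the update command vary
by distro):

===SNIP===
# /etc/default/grub - append the options to the existing line
GRUB_CMDLINE_LINUX_DEFAULT="... nohz_full=1,2,3,4,5,6,7 rcu_nocbs=1,2,3,4,5,6,7 default_hugepagesz=1G hugepagesz=1G hugepages=12"

# regenerate the grub config and reboot
sudo update-grub   # Debian/Ubuntu
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora/RHEL
===SNAP===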


--
Deldycke Quentin


On 9 January 2016 at 15:40, rndbit <rndbit at sysret.net> wrote:

> Mind posting the actual commands for how you achieved this?
>
> All I'm doing now is this:
>
> cset set -c 0-3 system
> cset proc -m -f root -t system -k
>
>   <vcpu placement='static'>4</vcpu>
>   <cputune>
>     <vcpupin vcpu='0' cpuset='4'/>
>     <vcpupin vcpu='1' cpuset='5'/>
>     <vcpupin vcpu='2' cpuset='6'/>
>     <vcpupin vcpu='3' cpuset='7'/>
>     <emulatorpin cpuset='0-3'/>
>   </cputune>
>
> Basically this puts most of the threads on cores 0-3, including the emulator
> threads. Some threads can't be moved though, so they remain on cores 4-7. The
> VM is given cores 4-7. It works better, but there is still much to be desired.
>
>
>
> On 2016.01.09 15:59, Quentin Deldycke wrote:
>
> Hello,
>
> Using cpuset, I was running the VM with:
>
> Core 0: threads 0 & 4: linux + emulator pin
> Core 1,2,3: threads 1,2,3,5,6,7: windows
>
> I tested with:
> Core 0: threads 0 & 4: linux
> Core 1,2,3: threads 1,2,3: windows
> Core 1,2,3: threads 5,6,7: emulator
>
> The difference between the two is huge (DPC latency is much more stable):
> Single-core performance went up by 50% (Cinebench per-core score from
> 100 to 150 points)
> GPU performance went up by 20% (Cinebench from 80 fps to 100+ fps)
> Performance in "Heroes of the Storm" went from 20-30 fps to a stable 60 (and
> often well above 100)
>
> (performance of Unigine Heaven went from 2700 points to 3100 points)
>
> The only sad thing is that I have 3 threads that sit almost idle... Is there
> any way to give them back to Windows?
>
> --
> Deldycke Quentin
>
>
> On 29 December 2015 at 17:38, Michael Bauer <michael at m-bauer.org> wrote:
>
>> I noticed that attaching a DVD drive from the host leads to HUGE delays.
>> I had attached my /dev/sr0 to the guest, and even without a DVD in the drive
>> this was causing huge lag about once per second.
>>
>> Best regards
>> Michael
>>
>>
>> On 28.12.2015 at 19:30, rndbit wrote:
>>
>> 4000μs-16000μs here, it's terrible.
>> Tried what's suggested on
>> https://lime-technology.com/forum/index.php?topic=43126.15
>> It's a bit better with this:
>>
>>   <vcpu placement='static'>4</vcpu>
>>   <cputune>
>>     <vcpupin vcpu='0' cpuset='4'/>
>>     <vcpupin vcpu='1' cpuset='5'/>
>>     <vcpupin vcpu='2' cpuset='6'/>
>>     <vcpupin vcpu='3' cpuset='7'/>
>>     <emulatorpin cpuset='0-3'/>
>>   </cputune>
>>
>> I tried *isolcpus* but it did not yield visible benefits. *ndis.sys* is
>> the big offender here, but I don't really understand why. Removing the network
>> interface from the VM makes *usbport.sys* take over as the biggest offender. All
>> this happens with the *performance* governor on all CPU cores:
>>
>> echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null
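>>
>> To confirm it sticks, the governor and the resulting clocks can be read back
>> like this (just a sketch; cpupower frequency-info would also show this, if
>> installed):
>>
>> cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
>> grep 'cpu MHz' /proc/cpuinfo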
>>
>> Cores remain clocked at 4 GHz. I don't know what else I could try. Does
>> anyone have any ideas?
>>
>> On 2015.10.29 08:03, Eddie Yen wrote:
>>
>> I tested again after a VM reboot, and found that this time it is about
>> 1000~1500 μs.
>> I also found that it easily gets high while the hard drive is loading, but
>> only a few times.
>>
>> Which specs are you using? Maybe it depends on the CPU or patches.
>>
>> 2015-10-29 13:44 GMT+08:00 Blank Field <ihatethisfield at gmail.com>:
>>
>>> If I understand it right, this software has a fixed latency error of
>>> 1 ms (1000 μs) on Windows 8-10 due to a different kernel timer implementation.
>>> So I guess your latency is very good.
>>> On Oct 29, 2015 8:40 AM, "Eddie Yen" <missile0407 at gmail.com> wrote:
>>>
>>>> Thanks for the information! And sorry, I didn't read the beginning of the
>>>> message carefully.
>>>>
>>>> For my result, I got about 1000μs or below, and only a few times above
>>>> 1000μs when idling.
>>>>
>>>> I'm using a 4820K and gave 4 threads to the VM; I also set these 4 threads
>>>> as 4 cores in the VM settings.
>>>> The OS is Windows 10.
>>>>
>>>> 2015-10-29 13:21 GMT+08:00 Blank Field <ihatethisfield at gmail.com>:
>>>>
>>>>> I think they're using this:
>>>>> www.thesycon.de/deu/latency_check.shtml
>>>>> On Oct 29, 2015 6:11 AM, "Eddie Yen" <missile0407 at gmail.com> wrote:
>>>>>
>>>>>> Sorry, but how do I check DPC latency?
>>>>>>
>>>>>> 2015-10-29 10:08 GMT+08:00 Nick Sukharev <nicksukharev at gmail.com>:
>>>>>>
>>>>>>> I just checked on W7 and I get 3000μs-4000μs on one of the guests
>>>>>>> when 3 guests are running.
>>>>>>>
>>>>>>>> On Wed, Oct 28, 2015 at 4:52 AM, Sergey Vlasov <sergey at vlasov.me> wrote:
>>>>>>>
>>>>>>>> On 27 October 2015 at 18:38, LordZiru <lordziru at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> I have brutal DPC latency on QEMU, no matter whether I use pci-assign
>>>>>>>>> or vfio-pci, or even without any passthrough.
>>>>>>>>>
>>>>>>>>> My DPC latency is like:
>>>>>>>>> 10000,500,8000,6000,800,300,12000,9000,700,2000,9000
>>>>>>>>> and on native Windows 7 it is like:
>>>>>>>>> 20,30,20,50,20,30,20,20,30
>>>>>>>>>
>>>>>>>>
>>>>>>>> In a Windows 10 guest I constantly have red bars around 3000μs
>>>>>>>> (microseconds), sometimes spiking up to 10000μs.
>>>>>>>>
>>>>>>>>
>>>>>>>>> I don't know how to fix it.
>>>>>>>>> This matters to me because I am using a USB sound card for my VMs,
>>>>>>>>> and I get sound drop-outs every 0-4 seconds.
>>>>>>>>>
>>>>>>>>>
>>>>>>>> That bugs me a lot too. I also use an external USB card and my DAW
>>>>>>>> periodically drops out :(
>>>>>>>>
>>>>>>>> I haven't tried CPU pinning yet though. And perhaps I should try
>>>>>>>> Windows 7.
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
>
>

