[vfio-users] Questions for all who have gotten this to work

Knut Omang knuto at ifi.uio.no
Sat Sep 5 11:10:16 UTC 2015


On Thu, 2015-09-03 at 09:54 -0400, ALG Bass wrote:
> For all who have consistently gotten VGA passthrough in KVM and
> regularly game in the Windows VM,
> 
> What distro do you use? 
Fedora (started on f18, now at f22 via fedora_upgrade)
> What kernel are you running?
Currently 4.0.4+
>  Did you have to re-compile it?
Yes, I have rebased these patches:

i915: Add module option to support VGA arbiter on HD devices
pci: Enable overrides for missing ACS capabilities
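For reference, both patches add options that have to be enabled explicitly at boot; a sketch of what that looks like in /etc/default/grub (the option names come from the out-of-tree patches, so they only exist on a kernel built with them applied, and may differ slightly between patch versions):

```shell
# /etc/default/grub - hypothetical example; options only exist on a patched kernel.
# i915.enable_hd_vgaarb=1   : make the i915 driver participate in VGA arbitration
# pcie_acs_override=downstream : pretend downstream ports have ACS, splitting IOMMU groups
GRUB_CMDLINE_LINUX="rhgb quiet i915.enable_hd_vgaarb=1 pcie_acs_override=downstream"
```

Remember to regenerate grub.cfg (e.g. grub2-mkconfig on Fedora) afterwards. Note that pcie_acs_override weakens isolation guarantees, which is exactly why the patch is not upstream.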
> What CPU and GPUs do you use?
I have two systems running the same software stack:
1) Gigabyte Z77X-UD5H motherboard
   Intel Core i7 3770 (Ivy Bridge w/integrated graphics)
   GPUs:
   * Integrated graphics used for host access.
   * GPU1: Bonaire XTX [Radeon R7 260X]
   * GPU2: Cape Verde PRO [Radeon HD 7750/8740 / R7 250E]
2) Gigabyte Z97X-SOC Force motherboard
   Intel Core i7 4790K (Haswell w/integrated graphics)
   GPUs:
   * Integrated graphics used for host access.
   * GPU1: Bonaire XTX [Radeon R7 260X]
   * GPU2: Oland XT [Radeon HD 8670 / R5 340X / R7 250/350X]
Both systems have 32GB RAM.

I have two VMs on each host. (I have tried with 3 guests and another older GPU in each system, and that seems to work with no additional issues; the main limiting factor is the number of screens and the physical space in the basement for enough seats.)
System #2 actually has four x16 PCIe slots, so if demand becomes very high I suppose I could theoretically have 4 seats + host, but then memory would become an issue, at least, due to the requirement for pinning all the guest memory. 32GB is the max supported by the motherboard.
> What online tutorial did you use?
A combination of qemu-devel discussions, Alex's blog, the infamous Arch Linux thread, some face-to-face discussions with Alex at KVM Forum, and the code itself.
> How long have you had it going in a stable fashion?
For almost 2 years.
It has evolved from a single host (#1) with two R7 250s for the most demanding users, and a 3rd guest seat (with an inexpensive HD 6450) which was only in use occasionally.
That system quickly became a success with the younger generation (you know, friends coming by etc.).
(At that time Minecraft was the most important game in use, which also allowed the host console to be used by an additional friend.)
The problem then was that it became impossible for me to get any time slots for maintenance/upgrades/own experiments,
so I bought system #2, got it running with a newer kernel and Fedora, and moved the users over to be able to upgrade system #1.
Unfortunately this almost coincided with a breakdown of the last desktop system we had running Windows natively, so I suddenly had an extra customer...
And of course, performance for some of the CPU-intensive games (Europa Universalis) was better on the Haswell, so one of my customers didn't want to move back.
So this resulted in the 2x2 setup, which ran impressively stably for months at a time using a 3.16 kernel with similar patches.
For a long time there was only one game they wanted to play that did not run at all on the platform, Sims 4; now even that works as well as on bare metal.

More recently my users had acquired some new games, their friends had started to get newer computers, and they started complaining about those games running faster on their friends' computers,
in particular Witcher 3, some Minecraft mods with high graphics requirements, and the prospect of GTA 5.
So the motivation for getting the R7 260Xs (which were dead capital in a drawer during this period..) running became high enough for my customers to accept the necessary downtime, after Alex had mitigated the reset issues with the 260X.
That has been a success. As Alex has noted in his blog, it is slightly less stable than the setup with the 250s was:
since the upgrade of system #2 with the R7 260X, and a following upgrade of the second host with another R7 260X in June, these two hosts have been running 24x7,
and we have had to reboot each of the hosts one time due to inability to reset the R7 260X after Windows crashes. But my users are generally happy with it (at least from what they tell me :-))
Interestingly, during the debugging of graphics card performance we played a lot with different settings, and the 250s I have are slightly different.
(Part of the problem of hosting such a solution is that each time something is wrong, I have the doubtful challenge of trying to convince the affected user that it is most likely not an issue with GPU passthrough ;-) )
One of these cards has 1GB of GDDR5 memory, the other has 2GB of DDR3; they were in the same price range.
Subtle differences: some games/settings worked better with the GDDR5 card (better memory latency, I suppose) and some with the 2GB version. Both the 260Xs are 2GB, GDDR5.
Generally I prefer to avoid the very latest generation, which usually comes at a higher cost/benefit ratio, and rather use the saved capital to upgrade more often.
I wanted to try out the new OVMF stuff, but that would (again) require downtime and unhappy users,
and I don't think I can justify buying a 3rd system - but who knows what future demands will bring :-)
I also have it on my todo list to experiment with the CPU affinity settings; at the moment I use just a plain dual-core, 2-threads-per-core, 8GB RAM setup for each VM.
There have been occasional hiccups: if I compiled a Linux kernel on the host with -j8, or the host was up to something else, sound stuttered a bit ;-)
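For anyone wanting to try the affinity experiment, vCPU pinning can be expressed in the libvirt domain XML; a minimal sketch, assuming libvirt is used to define the guests (the cpuset values are hypothetical and depend on the host's core/thread numbering, visible in e.g. /proc/cpuinfo):

```xml
<!-- Hypothetical pinning for a 2-core, 2-threads-per-core guest on a
     4-core/8-thread i7. Each vCPU is pinned to one host hyperthread;
     host core 0 (threads 0 and 4) is left free for the host itself. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
</cputune>
```

Pinning sibling hyperthreads to the same guest mirrors the physical topology, which should help exactly the "host compiling with -j8" interference described above.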
but other than that no major issues.
Since I use qcow2 with snapshots/backing files for backing up (I have turned off the Windows snapshotting) and an ordinary 2TB disk, cold start can be relatively slow,
but as I keep the systems running, warm restart is very fast, and that's the more important case.
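As a sketch of the backing-file approach (filenames are hypothetical; this is just one way to use qemu-img for the pattern described above, not necessarily the exact setup here):

```shell
# Create a base image, then an overlay whose backing file is the base.
# The guest runs on the overlay; all new writes land there, so the
# read-only base can be copied to backup storage at any time.
qemu-img create -f qcow2 win-guest-base.qcow2 40G
qemu-img create -f qcow2 \
    -o backing_file=win-guest-base.qcow2,backing_fmt=qcow2 \
    win-guest.qcow2

# Inspect the chain; "backing file" should point at the base image.
qemu-img info win-guest.qcow2

# Later, with the guest shut down, the overlay's changes can be merged
# back into the base to keep the chain (and cold-start time) short:
qemu-img commit win-guest.qcow2
```

The tradeoff matches the observation above: long backing chains on a spinning disk make cold starts slow, while a running guest is unaffected.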
A big thanks to everyone who has contributed to this, of course Alex in
particular!
And let's keep up the good exchange as new challenges arise!
Thanks,
Knut