[vfio-users] What to put in virt-manager so GeForce Experience will "Optimize" my CPU?

Hristo Iliev hristo at hiliev.eu
Tue Jan 26 23:01:46 UTC 2016


On Tue, 26 Jan 2016 11:26:06 +0700 Okky Hendriansyah <okky at nostratech.com> wrote:

> > On Jan 26, 2016, at 11:09, Will Marler <will at wmarler.com> wrote:
> > 
> > Found this on the Arch wiki, but setting the kvm option didn't change
> > anything:
> > https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Make_Nvidia.27s_GeForce_Experience_work  
> 
> Hi Will,
> 
> That ignore_msrs config just tells the host to ignore the guest's requests
> for CPU frequency scaling information, as they are not relevant. Some games
> need this config and I'm pretty sure it's safe. I had mine ignored as well.
> 

Hi,

Some programs are not written to gracefully handle the #GPs (general
protection faults) triggered by reading unimplemented MSRs. CPU-Z, for one,
doesn't show most of the CPU-related information if this option is not set,
even when the CPU mode is set to host-passthrough.
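
(For reference, by host-passthrough I mean the libvirt CPU mode, which in the
domain XML is simply

    <cpu mode='host-passthrough'/>

and which you can also type into virt-manager's CPU model field.)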

As a matter of fact, I wrote that section of the Arch Wiki after some
experiments with NVIDIA GeForce Experience and after combing through the
system logs. Back then (September 30th, 2015), setting the ignore_msrs option
to "1" was a necessary, although probably not sufficient, condition to make
the program work. The failing rdmsr's were for MSR 0x606
(MSR_RAPL_POWER_UNIT) and MSR 0x641 (MSR_PP1_ENERGY_STATUS).
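
For anyone who wants to reproduce this, ignore_msrs is a parameter of the kvm
module. A minimal way to set it persistently (the file name is just a
convention, any .conf under /etc/modprobe.d/ will do):

    # /etc/modprobe.d/kvm.conf
    options kvm ignore_msrs=1

It should also be settable at runtime, before the guest is started:

    echo 1 > /sys/module/kvm/parameters/ignore_msrs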

Strangely enough, I just started GeForce Experience again and now I don't see
any failing rdmsr's in the kernel log (4.1.15-2-vfio-lts). Either the kvm
module has acquired new capabilities, or the latest GeForce Experience /
driver no longer queries the CPU for its energy status. Perhaps the wiki
section could use some editing.
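
If you want to check whether your driver / GeForce Experience combination
still trips over this, the unhandled reads end up in the kernel log, so
something along the lines of

    dmesg | grep -i rdmsr

right after starting the program should show them (or nothing, as in my case
now).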

> > And I'm not sure what the unintended consequences might be that the Wiki
> > warns about, so I'll be setting that back. Pretty sure I can optimize
> > manually.  
> 

I still haven't noticed any, but left the warning there nonetheless.

> The NVIDIA GeForce Experience optimization basically auto-selects the most
> decent performance/picture-quality settings from the presets available in
> NVIDIA's database for your hardware combination. It doesn't actually do any
> optimization magic.
> 

I doubt that the program functions properly in virtual environments where the
CPU presents different properties than the "official" specs of the host CPU.
In my case, for example, Windows enjoys a 3-core, hyperthreaded i7-5820K.
NVIDIA gathers performance information from the systems of its users and
applies some machine-learning magic to derive optimal configurations. If they
base the prediction on the CPU model and, e.g., core frequency alone (i.e.
assuming 6 hyperthreaded cores), the result might end up being wrong for my
VM config.
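
Just to illustrate what Windows actually sees: a topology like mine is
expressed in the domain XML roughly as follows (6 vCPUs laid out as 3 cores
with 2 threads each, rather than the host's 6 cores / 12 threads):

    <vcpu placement='static'>6</vcpu>
    <cpu mode='host-passthrough'>
      <topology sockets='1' cores='3' threads='2'/>
    </cpu>

That is obviously not the topology NVIDIA has on file for an i7-5820K.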

In that respect, using GeForce Experience in a VM probably results in outliers
in their data :) But I really like the option to push a button and get a
reasonable set of initial settings that I could fine tune later.

Regards,
Hristo

> > And yea ... I did notice that shutting down the VM and powering it back on
> > was not the same as a reboot, according to the Windows 10 system. I had
> > changed the hostname, powered off, powered on ... Windows said "the name
> > will be changed when the computer reboots." O.o  
> 
> I noticed that reboots are often slower than powering off and on again in
> Windows 10. I also had this CPU recognition issue in both my physical and
> virtual machines. On the physical machine I had to do a couple of
> reboots/cold power-offs to force Windows 10 to rescan the underlying CPU;
> the other alternative is to use DDU (Display Driver Uninstaller) to
> completely uninstall the graphics driver and reinstall it fresh, which is
> what I did on my guest.
> 
> Best regards,
> Okky Hendriansyah



