[vfio-users] Dynamically allocating hugepages?

Jesse Kennedy freebullets0 at gmail.com
Sun Jul 10 06:03:06 UTC 2016


I'm also interested in this. Running your 3 commands only got me 3650
hugepages on a 32 GB system. I wonder if there is a way to have qemu use
the available hugepages first and then fall back to normal memory once
they are depleted.

On Wed, Jul 6, 2016 at 2:48 PM, Thomas Lindroth <thomas.lindroth at gmail.com>
wrote:

> Hugetlbfs requires reserving huge pages before files can be put on it.
> The easiest way of doing that is adding a hugepages= argument to the
> kernel command line, but that permanently reserves the pages and I don't
> want to waste 8G of RAM all the time. Another way is to allocate them
> dynamically by echoing 4096 into /proc/sys/vm/nr_hugepages, but if the
> computer has been running for more than an hour I'll be lucky to get 20
> pages that way.
>
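The two approaches above, for reference (4096 2M pages = 8G; the echo needs root):

```shell
# 1) Boot-time reservation: add to the kernel command line in your
#    bootloader config (the pages are held even while no guest runs):
#      hugepages=4096
# 2) Runtime allocation (often fails once memory is fragmented):
echo 4096 > /proc/sys/vm/nr_hugepages
# Check how many pages the kernel actually managed to reserve:
grep -i hugepages /proc/meminfo
```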
> The physical RAM is too fragmented for dynamic allocation of huge pages,
> but there is a workaround. The quickest way to defragment a hard drive
> is to delete all its files, and the fastest way to defragment RAM is to
> drop caches. Running echo 3 > /proc/sys/vm/drop_caches before echo 4096 >
> /proc/sys/vm/nr_hugepages makes the allocation much more likely to
> succeed, but it's not guaranteed: application memory could still be too
> fragmented. For that case I would echo 1 > /proc/sys/vm/compact_memory,
> which should compact the free space into contiguous areas. I've never
> needed to compact memory because dropping caches is usually enough when
> using 2M huge pages.
>
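Putting that sequence together as a root shell sketch (4096 2M pages = 8G; the sync is my addition, so dirty pages can actually be dropped):

```shell
sync                                   # write out dirty pages so the cache can be dropped
echo 3 > /proc/sys/vm/drop_caches      # drop page cache plus dentries and inodes
echo 1 > /proc/sys/vm/compact_memory   # compact free memory into contiguous blocks
echo 4096 > /proc/sys/vm/nr_hugepages  # now try to reserve the 2M pages
grep -i hugepages /proc/meminfo        # see how many we actually got
```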
> Is there no better way of doing this? The kernel could selectively drop
> cache pages to make room for huge pages without dropping all caches, and
> if that is not enough it could compact only the memory needed. I've
> looked for an option like that but haven't found anything. The closest
> thing I've seen is echo always > /sys/kernel/mm/transparent_hugepage/defrag,
> but that only affects transparent huge pages.
>
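For completeness, the THP knobs can be inspected like this (the bracketed word is the active policy; as noted, this only affects transparent huge pages, not hugetlbfs):

```shell
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag   # e.g. "always defer [madvise] never"
# Extract just the active value:
grep -o '\[[a-z]*\]' /sys/kernel/mm/transparent_hugepage/defrag | tr -d '[]'
```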
> _______________________________________________
> vfio-users mailing list
> vfio-users at redhat.com
> https://www.redhat.com/mailman/listinfo/vfio-users
>

