
Re: [vfio-users] vfio fails Guest FreeBSD9.3 host Fedora 23



1G hugepages are allocated with grub at boot time.

[root@localhost vcr]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
After running grub2-mkconfig, the generated boot entry looks like:

      if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  92f30a97-3f52-4352-a28d-ce0ba665377d
        else
          search --no-floppy --fs-uuid --set=root 92f30a97-3f52-4352-a28d-ce0ba665377d
        fi
        linux16 /vmlinuz-4.4.6-300.fc23.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on rhgb quiet

Since there's no default_hugepagesz=1G on the kernel command line, the default hugepage size falls back to 2M:

[root@localhost vcr]# cat /proc/meminfo
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:    131072 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:   65536
HugePages_Free:    32768
HugePages_Rsvd:       32768
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:     1601216 kB
DirectMap2M:    72767488 kB
DirectMap1G:    330301440 kB

[root@localhost vcr]# cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
32768
[root@localhost vcr]# cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
16
[root@localhost vcr]# cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
16
[root@localhost vcr]# cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
32768
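Assuming the goal is for 1G to be the default page size rather than 2M, the usual fix is to add default_hugepagesz=1G to the kernel command line; a hedged sketch of the GRUB_CMDLINE_LINUX from above with only that one parameter added:

GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap default_hugepagesz=1G hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on rhgb quiet"

then regenerate the config with grub2-mkconfig and reboot.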

I set the 2M hugepages on each node with the following commands:

echo 32768 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
echo 32768 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
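To confirm the per-node counts actually took (the kernel may hand back fewer pages than requested once memory is fragmented), a small hedged sketch that just walks the standard sysfs locations:

```shell
#!/bin/sh
# Hedged sketch: print how many hugepages of each size each NUMA node holds.
# The sysfs layout is the standard one on Linux; nodes or page sizes that
# don't exist on a given host are simply skipped.
for f in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    [ -r "$f" ] || continue
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```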

I'll update here after trying your steps.

-chintu-




On Wed, Jun 1, 2016 at 6:56 PM, Alex Williamson <alex williamson redhat com> wrote:
On Wed, 1 Jun 2016 15:46:10 -0400
chintu hetam <rometoroam gmail com> wrote:

> hugepages doesn't work with VFIO it works with vhost.

TBH I don't have a lot of faith in your setup since you haven't really
shown how you're allocating 1G hugepages, or even 2M, which many people
here are using successfully.  Here's a simple test:

# cd /sys/kernel/mm/hugepages/hugepages-1048576kB/
# cat nr_hugepages

How many do you have available?  If less than 32, then:

# echo 32 > nr_hugepages

Recheck how many are available; you may not be able to get 32 except
via boot options.  If you get fewer, adjust the VM memory size
accordingly in the command below.

Mount your 1G hugepages:

# mkdir /hugepages1G
# mount -t hugetlbfs -o pagesize=1G hugetlbfs /hugepages1G
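Before starting the guest it can also help to check that enough free 1G pages exist to back it; a hedged sketch (the count of 32 matches the -m 32G example below, adjust to taste):

```shell
#!/bin/sh
# Hedged sketch: warn if fewer free 1G hugepages exist than the guest needs.
# free_hugepages is the standard sysfs counter; if the 1G size is absent
# entirely, treat it as zero.
NEED=32
FREE=$(cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages 2>/dev/null || echo 0)
if [ "$FREE" -lt "$NEED" ]; then
    echo "only $FREE free 1G hugepages, need $NEED; shrink -m accordingly" >&2
else
    echo "ok: $FREE free 1G hugepages available"
fi
```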

Bind your device to vfio-pci:

# virsh nodedev-detach pci_0000_aa_06_0

Start a simple VM:

# /usr/bin/qemu-kvm -m 32G -mem-path /hugepages1G -nodefaults \
    -monitor stdio -nographic -device vfio-pci,host=aa:06.0

This will give you a (qemu) prompt where you can type 'quit' or ^C to
kill it.  If this works, then your problem is with configuring your
system to use 1G hugepages or with libvirt.  Please test.  Thanks,

Alex



--
-chintu-
