[vfio-users] Application examples using vfio ?

Ran Shalit ranshalit at gmail.com
Tue Feb 6 17:41:44 UTC 2018


On Tue, Feb 6, 2018 at 7:31 PM, Alex Williamson
<alex.williamson at redhat.com> wrote:
> On Tue, 6 Feb 2018 18:59:43 +0200
> Ran Shalit <ranshalit at gmail.com> wrote:
>
>> On Tue, Feb 6, 2018 at 5:41 PM, Alex Williamson
>> <alex.williamson at redhat.com> wrote:
>> > On Tue, 6 Feb 2018 17:20:08 +0200
>> > Ran Shalit <ranshalit at gmail.com> wrote:
>> >
>> >> On Tue, Feb 6, 2018 at 4:56 PM, Alex Williamson
>> >> <alex.williamson at redhat.com> wrote:
>> >> > On Tue, 6 Feb 2018 16:24:06 +0200
>> >> > Ran Shalit <ranshalit at gmail.com> wrote:
>> >> >
>> >> >> Hello,
>> >> >>
>> >> >> I need to write vfio userspace driver for pci express.
>> >> >> I've been searching for some examples of using vfio, but did not find any,
>> >> >> Is there a git/examples of userspace application using vfio ?
>> >> >
>> >> > git://git.qemu.org/qemu.git
>> >> > git://dpdk.org/dpdk
>> >> > https://github.com/andre-richter/rVFIO
>> >> > https://github.com/MicronSSD/unvme
>> >> > https://github.com/awilliam/tests
>> >>
>> >> Thank you.
>> >> I am trying to understand how to use it with DMA.
>> >> I've been going through the links, but have not yet found in the
>> >> above examples how to both map DMA memory and read/write from
>> >> that memory.
>> >> I see that the above test git contains tests which map/unmap DMA,
>> >> but do not read/write from that DMA memory afterwards (there is a
>> >> separate test which reads/writes from memory, but without DMA).
>> >> Is that correct, or did I miss something? Is there any example
>> >> which shows how to both map and then read/write?
>> >
>> > Initiating DMA is device specific, you need to understand the
>> > programming model of the device for that.  The vfio DMA ioctls only map
>> > the buffers through the IOMMU to allow that DMA, they don't control the
>> > device.  The unit tests in the last link don't have any device
>> > specific code, they're only performing the mapping without actually
>> > initiating device DMA.  The QEMU examples also mostly configure DMA but
>> > rely on native drivers within the guest VM to program the device to
>> > initiate DMA.  The rVFIO project is also just a wrapper to make vfio
>> > more accessible to scripting languages, so I don't expect actual device
>> > driver code there.  DPDK and the NVME driver should have actual driver
>> > code for the device though.  In fact there's also an NVME project
>> > within QEMU that makes use of vfio that might have driver code.  The
>> > simple logic though is that the DMA map ioctl takes a user provided
>> > virtual buffer and maps it through the IOMMU at the user provided IOVA
>> > (I/O virtual address).  It's up to the user to make the device perform
>> > DMA using that IOVA.  Note that the IOMMU address width is typically
>> > less than the user virtual address space, so identity mapping the IOVA
>> > to the virtual address is not a good strategy.
>> >
>>
>> I am trying to check that I understand the test code.
>> It seems I understand most of it, except for the following:
>> in vfio-pci-device-open-igd.c
>> there is:
>> ...
>> struct vfio_region_info *region;
>> struct vfio_info_cap_header *header;
>> buf = region = malloc(region_info.argsz);
>> ioctl(device, VFIO_DEVICE_GET_REGION_INFO, region);
>> printf("First cap @%x\n", region->cap_offset);
>> header = buf + region->cap_offset;
>> printf("header ID %x version %x next %x\n",
>>        header->id, header->version, header->next);
>> ....
>> I don't understand some things here:
>> region and buf point to the same place (the vfio_region_info
>> struct). Is it that (buf + region->cap_offset) is a pointer to a
>> virtual address which can be accessed just like that (without any
>> separate allocation)?
>
> FWIW, this has nothing to do with DMA, this is just the semantics of
> the vfio API and these ioctls are only reading capability information
> provided by the kernel interface.  The malloc is for a structure which
> is filled by the ioctl.  Part of the return data in that structure is
> the cap_offset, which tells us the offset into that structure for the
> head of a chain of capabilities, each of which has a
> vfio_info_cap_header.  The buf variable is simply a void* pointer to
> make indexing that capability chain easier.  We're only indexing within
> the allocated buffer.
>
>> In the DPDK pci_vfio code I see that it is accessed in another way,
>> through vfio_dev_fd:
>> ret = pread64(fd, &reg, sizeof(reg),
>>               VFIO_GET_REGION_ADDR(VFIO_PCI_CONFIG_REGION_INDEX) +
>>               cap_offset);
>
> This is an entirely different operation, also not doing DMA.  This is
> simply reading from PCI config space of the device via the region
> designated for this access.  Perhaps you might want to start with an
> overview of vfio to help you get situated, hopefully this will help:
>
> https://www.youtube.com/watch?v=WFkdTFTOTpA
>

I understand that these last questions were not related to DMA.
I wanted to make sure I understand the access to the BARs, as I see it
in the test code. Sorry for switching between subjects.
What seems to confuse me is that struct vfio_region_info has no
vfio_info_cap_header* field, yet we access a vfio_info_cap_header in
that struct (struct vfio_region_info), i.e. header points somewhere
inside struct vfio_region_info to a vfio_info_cap_header.
Best Regards,
ran


> Thanks,
> Alex




More information about the vfio-users mailing list