vDPA device in userspace (VDUSE) is an emerging approach for providing software-defined storage and networking services to virtual machine (VM) and container workloads. The vDPA (virtio data path acceleration) kernel subsystem is the engine behind VDUSE. This blog assumes familiarity with concepts such as the vDPA bus, vDPA bus drivers, and vDPA devices; if you're new to the vDPA kernel framework, see our earlier blogs, Introduction to vDPA kernel framework and vDPA bus drivers for kernel subsystem interactions.
In a nutshell, VDUSE enables you to easily implement a software-emulated vDPA device in userspace to serve both VM and container workloads.
vDPA was originally developed to help implement virtio (an open standard for the control plane and dataplane) in dedicated hardware, such as smartNICs. Vendors only needed to support the virtio dataplane in hardware; vDPA translates the vendor-specific control plane to the standard virtio control plane, significantly simplifying the vendor's work.
VDUSE has evolved to provide a software-based vDPA device (versus the previous hardware vDPA device) that can leverage the vDPA kernel subsystem to provide standard interfaces for both VM and container workloads. This is useful for optimized userspace applications, such as Storage Performance Development Kit (SPDK) and Data Plane Development Kit (DPDK) apps that require an efficient interface to connect to all workloads (VMs and containers) running on a machine.
Compared to a hardware vDPA implementation, a vDPA userspace device has the following advantages:
Fast and flexible development: You can make use of many userspace libraries and reuse device emulation code from QEMU and rust-vmm.
Improved maintainability: For example, it’s easier to perform a live upgrade for a userspace application than for a kernel module or hardware.
Ease of deployment: There are no hardware limitations, and the userspace application can be integrated easily into cloud-native infrastructure.
Powerful ecosystem: It’s possible to leverage an existing userspace dataplane, such as SPDK and DPDK, for both VMs and containers.
This blog presents the VDUSE architecture and reviews several use cases demonstrating its usage.
VDUSE's infrastructure includes two key blocks: a VDUSE daemon located in the userspace and a VDUSE module located in the kernel.
Figure 1: VDUSE kernel module and userspace daemon
The VDUSE daemon is responsible for implementing a userspace vDPA device. It contains device emulation and the virtio dataplane.
Device emulation is responsible for emulating a vDPA device. It contains two main functions:
Device initialization and configuration are done via the ioctl() interface provided by the VDUSE module.
Handling runtime control messages, such as setting device status, is implemented through the read()/write() interfaces.
The virtio dataplane is responsible for processing the request from the virtio device driver. The request’s data buffers should be mapped into the userspace through the mmap() interface in advance.
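The calls involved here (ioctl(), read()/write(), mmap()) are ordinary POSIX interfaces. As a rough illustration of the mapping step only, here is a minimal Python sketch: an ordinary temporary file stands in for the buffer region that a real VDUSE daemon would map through the VDUSE char device, so this shows the mechanism, not the actual VDUSE API.

```python
import mmap
import os
import tempfile

PAGE = mmap.PAGESIZE

# Stand-in for the buffer region a real VDUSE daemon would map via the
# VDUSE char device; here an ordinary temporary file plays that role.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, PAGE)

# Map the region into our address space, as a daemon does with mmap()
# before touching request data buffers.
buf = mmap.mmap(fd, PAGE)

buf[0:5] = b"hello"          # write through the mapping
data = os.pread(fd, 5, 0)    # the backing file sees the same bytes

buf.close()
os.close(fd)
os.unlink(path)
print(data)                  # b'hello'
```

Once such a mapping is established, the daemon can read and write request data directly through its own address space, without extra copy syscalls per request.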
The VDUSE module in the kernel is responsible for bridging the VDUSE daemon and the vDPA framework so that the userspace vDPA device can work under the vDPA framework. It contains three functional modules:
VDUSE uses the char device interface to relay vDPA config operations and memory-mapping information to userspace. It does this through standard userspace interfaces: ioctl(), read(), write(), and mmap().
The vDPA device connects the VDUSE module to the vDPA framework. By attaching to the vDPA bus (implementing the common vDPA bus operations), the VDUSE module can receive control messages from the vDPA framework and either handle them in place or forward them to the VDUSE daemon.
The memory management unit (MMU)-based software input/output translation lookaside buffer (IOTLB) enables the VDUSE daemon to access the data buffer in kernel space. It implements a bounce-buffering mechanism so that the data can be safely accessed by userspace.
VDUSE support for containers
The key point in VDUSE container support is the vDPA bus driver that the userspace vDPA device is attached to. Currently, the vDPA kernel framework supports two types of vDPA bus drivers: virtio-vdpa (for containers) and vhost-vdpa (for VMs).
If you want to provide an interface to container workloads via VDUSE, the vDPA device should be bound to the virtio-vdpa bus driver (as shown below).
Figure 2: Serving container workloads via VDUSE
In this case, the virtio-vdpa bus driver presents a virtio device. Various kernel subsystems can connect to this virtio device for userspace applications to consume.
As mentioned before, to enable the userspace VDUSE daemon to access the data buffer in the virtio device driver, an MMU-based software IOTLB with a bounce-buffering mechanism is introduced in the VDUSE kernel module for the dataplane.
The data is copied from the original data buffer in kernel space to the bounce buffer and back, depending on the direction of the transfer. Then the userspace daemon just needs to map the bounce buffer to its address space instead of the original one, which might contain other private kernel data in the same page.
VDUSE support for VMs
If the vDPA device is bound to the vhost-vdpa bus driver, the VDUSE daemon can provide service to VM workloads, as shown below.
Figure 3: Serving VM workloads via VDUSE
In this case, a virtual host (vhost) device is presented by the vhost-vdpa bus driver, so it can be used as a vhost backend for the virtio drivers running inside the VM.
In the dataplane, the VM’s memory will be shared with the VDUSE daemon. This way, the VDUSE daemon can access the data buffer residing in the userspace memory region directly without relying on the bounce-buffering mechanism.
VDUSE end-to-end solution
Now that you're familiar with how VDUSE connects to container and VM workloads, take a look at the overall solution serving both workload types.
Figure 4: VDUSE architecture overview
In Figure 4, the core components, a VDUSE daemon (userspace) and a VDUSE module (kernel), are outlined with a red line.
The VDUSE userspace daemon can provide software-defined storage and networking services to container or VM workloads by binding the vDPA device created by the VDUSE kernel module to a different vDPA bus driver.
VDUSE use cases
The following use cases demonstrate two ways to gain value from VDUSE.
Access remote storage with VDUSE
In architectures that separate compute and storage, VMs and containers on compute nodes usually need a way to access a remote storage service. VDUSE provides a reliable solution for this case.
Figure 5: VDUSE solution for remote storage access
The VDUSE storage daemon is the core component of the whole solution. It uses the VDUSE framework to emulate a vDPA block device, then forwards the I/O request from VMs or containers to remote storage through the network.
Compared with other solutions, the VDUSE approach:
Provides a unified storage stack serving both VM and container workloads.
Offers better performance than existing solutions, such as network block device (NBD), for container workloads. This is because the VDUSE dataplane is based on shared memory communication, which requires fewer syscalls and data copies.
Enable SPDK apps to serve containers
You can also use VDUSE to enable existing SPDK applications focused on VM workloads (using the vhost-user interface) to provide the same services to container workloads. Figure 6 shows how this works.
Figure 6: Reusing a vhost-user solution for containers
Above, a VDUSE agent is introduced to bridge the container and the SPDK daemon. On one hand, it uses the VDUSE framework to emulate a vDPA block device bound to the virtio-vdpa bus driver; on the other hand, it acts as a vhost-user client communicating with the vhost-user server in the SPDK daemon.
With the VDUSE framework, the VDUSE agent can fetch the memory regions (including the available ring, used ring, descriptor tables, and bounce buffer containing virtio request data) used in the virtio-blk device driver's dataplane.
Then, through the vhost-user protocol, the VDUSE agent can transfer them to the SPDK daemon. Thus, when the existing SPDK dataplane follows the virtio spec to access those memory regions, it is actually accessing data from the kernel virtio-blk device driver. The VDUSE module is responsible for copying data to and from the bounce buffer in this flow.
VDUSE is a new kernel framework based on vDPA that enables you to emulate a software vDPA device in userspace. It aims to provide a new userspace approach to delivering storage and networking services to both container and VM workloads.
About the authors
Experienced senior software engineer at Red Hat with a demonstrated history of working in the computer software industry. Maintainer of the QEMU networking subsystem. Co-maintainer of the Linux virtio, vhost, and vDPA drivers.