If we’ve learned one thing about IT at scale over the past several years, it’s that there is no “silver bullet” when it comes to choosing deployment environments. Virtualization, private cloud, public cloud, and Kubernetes have all entered the arena, but there is no clear winner—yet. Instead, IT organizations face layers of complex infrastructure technologies, each with its own facets of abstraction and its own “rules,” along with the added challenge of making these disparate stacks play nicely together for the benefit of the business at large.
The virtualization layer is frequently a fulcrum for deploying emerging technologies like Linux containers and Kubernetes, but the ultimate connector across all layers of a technology stack, both cloud-native and existing, is networking. All workloads, be they VMs, containers, or bare-metal applications, need to traverse these technology stacks efficiently and reach the server’s NIC in order to communicate with other nodes, servers, or the outside world.
To move this traffic at wire speed, network equipment providers have developed a variety of bespoke solutions. However, these solutions have evolved in non-standard ways, sometimes propagating all the way into the workloads themselves. This is where the virtio-networking community intends to help.
Led by Red Hat, Intel, Mellanox, and many other software and hardware vendors, the virtio-networking community is built around virtio, a standardized open interface for virtual machines (VMs) to access simplified devices such as block storage and networking adaptors. The virtio-networking community focuses on the networking device of virtio.
The emerging use cases around virtio-networking
While the virtio networking device was originally developed as a network virtualization interface between physical hosts and guests in virtual environments, a number of open source communities have adopted this networking device as a means of addressing emerging networking challenges. The Linux kernel community, the Data Plane Development Kit (DPDK) community, QEMU, and OASIS, among others, all lean on these specifications, broadly forming the virtio-networking community. The problems that this community aims to solve include:
VM network acceleration through an open standard interface that uses kernel tools, DPDK tools, and hardware acceleration techniques to offload traffic directly onto physical network interface cards (NICs).
Pod network acceleration to speed up networking in Kubernetes by adding dedicated layer 2 (L2) high-speed interfaces to Kubernetes pods.
Mixed virtual/cloud-native environment acceleration through an open standard interface for running virtual machines and Kubernetes pods side by side, efficiently and at speed.
Hybrid cloud acceleration via an interface to abstract away the different public and private clouds from the VMs and Kubernetes pods running on them, especially when network acceleration is required.
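To make the VM network acceleration use case above a little more concrete, here is a minimal sketch of how a guest is commonly given a virtio network device with QEMU/KVM. The disk image path, tap interface name, and MAC address are placeholders; the `virtio-net-pci` device and `vhost=on` option follow standard QEMU usage, with `vhost=on` handing the data path to the kernel’s vhost-net module rather than processing packets in QEMU userspace.

```shell
# Illustrative sketch only -- paths and names are placeholders.
# Boot a KVM guest whose NIC is a virtio-net device backed by a host
# tap interface; vhost=on accelerates the data path in the kernel.
qemu-system-x86_64 \
  -machine q35,accel=kvm \
  -m 2048 \
  -drive file=guest.qcow2,if=virtio \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```

Inside the guest, the device shows up as an ordinary NIC driven by the standard virtio-net driver, which is exactly the decoupling of guest-facing interface from host-side implementation that the use cases above rely on.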
Exploring virtio and the community further
The challenges listed above are no small matters. They impact businesses today and, if successfully solved, will help to shape the interconnected enterprise IT world of the future. We believe that virtio is part of the solution to address these challenges, and we want to explain how and why this effort can make it a reality in the near future.
The next blog posts will include solution overviews for audiences interested in the big picture, technical deep dives for architects interested in the nuts and bolts, and hands-on sessions for developers who want to experiment with these technologies firsthand.
So stay tuned for future blog posts as we further explore how virtio works and how it can help address these emerging use cases!