I attended the KVM Forum in August of this year, and as a new Red Hatter with a lot of VMware experience, it was eye-opening. I am seeing a lot of interest in Red Hat Virtualization from my customers, so I wanted to understand the platform at a much deeper level. KVM is the technology that underpins the Red Hat Virtualization platform. A number of themes emerged for me as I attended sessions and enjoyed the hallway track. This was a forum for developers, by developers, so infrastructure types like myself were few and far between, but that did not diminish my enjoyment of the conference. In fact, as a technical guy coming from a server background, I learned far more than I would have at a typical infrastructure-focused conference. Below, I will highlight some topics that stood out to me.

Containers and Virtualization

One of the overarching themes I noticed at the Forum was how developers can make virtualization more attractive for users who are considering containers as a fundamental layer of abstraction. Developers are intrigued by the possibility of deploying infrastructure in minutes rather than weeks. Containers also allow developers to define the infrastructure needed for their applications and ensure that each time an application is deployed, it gets the same resources no matter the environment. However, containers do not provide the same level of isolation as virtual machines. A VM has an entire operating system to itself that believes it is installed on its own (virtual) hardware. It must share resources with the other VMs running on the same host, but the hypervisor proxies access to physical resources. A container, by contrast, has available to it only the binaries and libraries it declares as requirements, and it runs on the same (Linux) kernel as other containers. The kernel proxies access to physical resources just as it does for any other user-mode process, and cgroups and namespaces provide the isolation. For environments that require high security or guaranteed performance, this is not a sufficient level of isolation.
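The isolation primitives mentioned above are directly observable on any Linux host, no container runtime required. A quick sketch using standard procfs paths:

```shell
# Sketch: inspect the kernel's isolation primitives for the current process.
# A container is "just" a process whose namespace and cgroup memberships
# differ from the host's; a VM would appear here as a single QEMU process.
cat /proc/self/cgroup   # the cgroup(s) governing this process's resource limits
ls /proc/self/ns        # the namespaces (pid, net, mnt, uts, ...) it belongs to
```

Two processes in the same container share these entries; processes in different containers do not. That is the whole of the "wall" between them, which is the crux of the isolation argument above.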

Speed of deployment is one of the main drivers for container adoption. Because of the lighter level of isolation, containers can be deployed very quickly. Rather than provisioning an entire virtual server and installing an operating system, when deploying a container we need only define the container parameters and download any libraries we might be missing. KVM developers want to help users enjoy the benefits of both technologies by making VM deployments as speedy as container deployments. That means doing as much work ahead of time as possible, so a responsive operating system is available as soon as possible after provisioning.

I/O Performance

By far, the bulk of the conversation about KVM development in 2016/2017 was about improving I/O performance. As I mentioned, I'm new to Red Hat but not new to virtualization, and a lot of my conversations with customers about virtualization inevitably lead to a discussion of I/O performance. There are a few different tracks of development related to I/O performance, including KVM-RT, the realtime KVM work. Some notable sessions that addressed this topic include:

  • Real Time KVM by Red Hat’s Rik van Riel. Rik goes into detail about the challenges of realtime workloads in the context of KVM virtualization.
  • Alex Williamson’s presentation on PCI Device Assignment with VFIO was awesome. His GitHub presentation goes into even more detail. He spends a good deal of time explaining how VFIO works, starting with basic device assignment and building up to VFIO. I’ve seen benchmark results from VFIO testing and have worked with some customers testing out VFIO. As we see more implementations of KVM for applications like NFVI, I expect we will need to take advantage of VFIO to supply direct device access to virtual machines in a safe way.
  • Wei Wang of Intel proposed a new means of communication between VMs: vhost-pci. His testing was an interesting experiment in inter-VM connectivity. He is working to speed up communication between VMs, with a focus on NFVI. VNFs can be sensitive to latency, and his approach could improve performance by shortening the path from VM to VM. There was a great deal of feedback from the developers in attendance, and you can find more details about the proposal on the qemu-devel mailing list.
  • The conversation around virtualizing GPUs continues to get more interesting. Neo Jia and Kirti Wankhede presented Nvidia’s recommended approach to a mediated I/O device framework. This framework is VFIO-based and built to allow virtualization of devices without SR-IOV. It would allow for full discovery of virtual devices, and for full lifecycle management, because the device itself becomes virtual! The duo from Nvidia detailed their approach to the problem and demonstrated a functional environment leveraging an Nvidia Tesla M60. Virtual devices were created, then passed to QEMU as vfio-pci devices.
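For readers who haven't worked with VFIO, the basic device-assignment flow is roughly the following sketch. The PCI address 0000:01:00.0 and the vendor/device IDs are placeholders, not values from any of the talks:

```shell
# Hypothetical sketch of VFIO device assignment (requires root and real
# hardware; all addresses and IDs below are placeholders).
modprobe vfio-pci

# Detach the device from its current host driver...
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# ...and tell vfio-pci to claim devices with this vendor/device ID pair.
echo 8086 1533 > /sys/bus/pci/drivers/vfio-pci/new_id

# Hand the device straight through to a guest.
qemu-system-x86_64 -enable-kvm -m 4096 \
    -device vfio-pci,host=0000:01:00.0 \
    -drive file=guest.qcow2,if=virtio
```

The guest then drives the hardware directly, with the IOMMU (rather than the host kernel) enforcing safety, which is what makes VFIO attractive for latency-sensitive NFVI workloads.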

Continuing Security Efforts

Security is a theme woven into the fabric of the KVM project.

Jun Nakajima presented the results of an Intel proof of concept to secure the kernel in a virtualized environment. The focus of the conversation was hardening the VMM (virtual machine monitor, or hypervisor) to ensure guest VMs are isolated from the host kernel even more than in today’s standard KVM deployment. The Intel team is also testing enhancements to the VMM that could be offloaded to hardware. It’s fascinating work, and you can find the deck here.

Steve Rutherford from Google walked through his team’s approach to hardening virtualization hosts while preserving performance. Their focus is on reducing the attack surface available to guest VMs. Moving more KVM code to userspace helps, but risks an impact to performance. His presentation dives into how they’ve balanced performance against security. Steve and team spent a good deal of time testing performance and shared those results with the group; you can find them in the linked presentation.

AMD is looking at ways to secure access to memory. Thomas Lendacky presented their approach, Secure Encrypted Virtualization (SEV). They propose offloading inline memory encryption and encryption key management to on-die engines they have built. It’s an interesting approach to isolating VMs and containers from each other, as well as isolating workloads from the underlying hypervisor. AMD is still developing the (proprietary) firmware, the Linux drivers, kernel support, and KVM/QEMU support.

The Future of KVM

A number of sessions throughout the three-day forum focused on where KVM is going. For a community-driven project like KVM, the roadmap isn’t just about features and bug fixes; it’s also about how the community can work together better. How can we communicate, review, and accept patches more efficiently?

A good introduction to the topic is the panel discussion with some key figures in KVM development. Daniel Berrange and Paolo Bonzini from Red Hat, Andrew Honig from Google, and Den Lunev from Virtuozzo discussed where KVM has been and where it’s going, working through pre-canned questions as well as questions from the assembled developers.

Stefan Hajnoczi spoke in detail about how open source internships can help the QEMU community grow in his presentation. The QEMU project has participated in the Google Summer of Code since 2010, and in 2014 it also began working with Outreachy. Stefan went on to outline how these programs have benefited the community, and offered some guidelines for other projects considering similar mentoring programs.

Lessons Learned

The KVM developer community is very welcoming. Perhaps a few folks reading this will chuckle at that statement, and I’m sure submitting patches can be frustrating at times, as with any large project. But in my short experience interacting with folks, everyone was quick to share information and very open to explaining development concepts to a server guy.

The only accurate documentation in a project like KVM is the code. I’ve since gone back to the C programming language, refreshing my development skills so I can better understand what’s happening. When looking for details about new developments in KVM, the best place to go is the code. The beautiful thing about KVM as a virtualization platform is that everything is out in the open. That gives users some powerful abilities when it comes to understanding performance and troubleshooting issues.

There continues to be a vibrant community focused on virtualization. There are of course some very large organizations contributing to KVM, but the lion’s share of code comes from independent developers. That makes for a lively community producing great code!

This was a sampling of the themes and amazing presentations from the 2016 KVM Forum. You can find the entire agenda here and see many more sessions!
