
How to balance virtual machine traffic with Kubernetes services

Kubernetes service constructs create highly available services in mixed container and VM environments without any external components.

Photo by Piret Ilver on Unsplash

Red Hat OpenShift supports the best of both virtualization worlds: virtual machines (VMs) and containers. In the containers and Kubernetes world, the "services" model permits external access to and consumption of applications that are deployed as containers within the pods. This configuration allows you to define simple ingress points to applications with load balancing. However, in the VM world, external load balancers are traditionally used to group services residing in VMs.

This article explains how to balance incoming traffic across multiple VMs using a Kubernetes service. This method provides a consistent way of handling both workload types and avoids the need to deploy, or pay extra for, an external load balancer.

[ Learn the differences between VMs and containers. ]

How OpenShift creates VMs

When you create a VM with OpenShift, the VM is assigned an Internet Protocol (IP) address from a pool of service addresses. A launcher pod accompanies the VM to allow remote Secure Shell (SSH) access to it, and the launcher pod is exposed through a Kubernetes service NodePort so that the VM can be accessed from outside of the cluster.

Figure 1. How a KubeVirt VM gets SSH access (Fatih Nar, Rimma Iontel, CC BY-SA 4.0)

In Figure 1, although the KubeVirt VM appears to expose its IP address as a reachable address (Box A), that address is actually owned by the launcher pod (see Box X in Figure 2). The launcher pod acts as a sidecar proxy to the VM, which has only a pod namespace-level address (see Box Y in Figure 2).

Figure 2. Chain of network pod and VM interaction (Fatih Nar, Rimma Iontel, CC BY-SA 4.0)

From an SSH perspective, when a request is sent either to the service's cluster IP with the service port or to a Kubernetes node with the NodePort, the request is passed through the launcher pod to the VM listening on the pod port.
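As a sketch of this mechanism, a NodePort service for SSH access to a single VM might look like the following. The names, namespace, and label value here are illustrative assumptions (not taken from the article's environment); KubeVirt propagates VM identity labels such as `kubevirt.io/domain` to the virt-launcher pod, which is what the selector matches:

```yaml
# Hypothetical NodePort service exposing SSH on a single KubeVirt VM.
# The selector matches a label on the virt-launcher pod, which proxies
# traffic to the VM's pod namespace-level address.
apiVersion: v1
kind: Service
metadata:
  name: fenar-centos-vm01-ssh
  namespace: vm-demo
spec:
  type: NodePort
  selector:
    kubevirt.io/domain: fenar-centos-vm01
  ports:
    - name: ssh
      protocol: TCP
      port: 22        # service port inside the cluster
      targetPort: 22  # sshd listening in the VM, reached via the launcher pod
```

With a service like this in place, `ssh -p <assigned-node-port> user@<node-ip>` should reach the VM from outside the cluster; the `virtctl expose` command can generate a similar service for you.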

[ Learn more about cloud-native development in the eBook Kubernetes Patterns: Reusable elements for designing cloud-native applications. ]

Implement load balancing with multiple VMs

You can take advantage of this internal design to implement load balancing with multiple VMs by using a Kubernetes service with VM launcher pods. There are two steps to accomplishing this:

1. Assign a common pod selector label across all launcher pods.

Figure 3. Launcher pod labeling (Fatih Nar, Rimma Iontel, CC BY-SA 4.0)
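One way to apply the common label durably is to set it in the VM template metadata, since labels defined there are carried over to the virt-launcher pod each time the VM starts (labeling the launcher pods directly would be lost on restart). The excerpt below is a hedged sketch; the VM name, the `app: vm-web-portal` label, and the resource values are illustrative assumptions:

```yaml
# Hypothetical VirtualMachine excerpt: labels set under
# spec.template.metadata are applied to the virt-launcher pod,
# so every VM in the group ends up with the same selector label.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: fenar-centos-vm01
spec:
  running: true
  template:
    metadata:
      labels:
        app: vm-web-portal   # common label shared by all VMs in the group
    spec:
      domain:
        devices: {}
        resources:
          requests:
            memory: 2Gi
```

Repeat the same label on each VM in the group (vm01, vm02, and so on) so one service selector can target all of their launcher pods.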

2. Create a service that targets the service running inside the VMs (in this example, a simple web portal).

Figure 4. VM load balancing Kubernetes service (Fatih Nar, Rimma Iontel, CC BY-SA 4.0)
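A minimal service manifest for this step might look like the sketch below, assuming the common `app: vm-web-portal` label from step 1 and a web portal listening on port 80 inside the VMs (both are assumptions for illustration; the NodePort value matches the port used in the curl test later in the article):

```yaml
# Hypothetical Service that load balances web traffic across the VM group
# by selecting the common label shared by all the launcher pods.
apiVersion: v1
kind: Service
metadata:
  name: vm-web-portal
  namespace: vm-demo
spec:
  type: NodePort
  selector:
    app: vm-web-portal   # matches the common launcher pod label from step 1
  ports:
    - name: http
      protocol: TCP
      port: 80           # cluster-internal service port
      targetPort: 80     # web portal listening inside each VM
      nodePort: 31923    # external port on every cluster node
```

Kubernetes then distributes incoming connections across the matching launcher pods, and through them to the VMs, exactly as it would for container endpoints.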

Regarding the Kubernetes service type:NodePort in the example above:

  1. You can swap the service type:NodePort with type:LoadBalancer; either way, you are still doing Kubernetes-native load balancing with VMs.
  2. Not all IP+NodePort combinations are internet-reachable, because node IP addresses may be routable only within local enterprise networks. Consumers of the VM-based service can still reach it through the service type:NodePort. In fact, most enterprise IT traffic stays within private enterprise networks. Similarly, 5G user equipment (UE) traffic reaches the internet only after it breaks out from the 5G user plane function (UPF) cloud-native network functions (CNFs); everything before that (that is, all local 5G network fabric traffic) is not internet accessible.

Test VM traffic

You can test whether the web service running on the grouped VMs is reachable from within the cluster, but from a different namespace.

Figure 5. A traffic test within the same cluster but a different namespace using the service's fully qualified domain name (FQDN) (Fatih Nar, Rimma Iontel, CC BY-SA 4.0)
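From a pod in another namespace, the test in Figure 5 amounts to resolving the service's FQDN through cluster DNS. As a sketch, with the service name, namespace, and port as placeholders:

```shell
# Inside the cluster, from a different namespace:
curl http://<service-name>.<vm-namespace>.svc.cluster.local:<service-port>
```

Each request should return a greeting from one of the VMs in the group, showing that the service is spreading traffic across them.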

That was successful, so now test outside the cluster:

$ curl http://api.acmhub2.narlabs.io:31923
Welcome to fenar-centos-vm02!
$ curl http://api.acmhub2.narlabs.io:31923
Welcome to fenar-centos-vm02!
$ curl http://api.acmhub2.narlabs.io:31923
Welcome to fenar-centos-vm01!
$ curl http://api.acmhub2.narlabs.io:31923
Welcome to fenar-centos-vm02!

[ Build a flexible foundation for your organization. Download An architect's guide to multicloud infrastructure. ]

Create highly available services

You can leverage Kubernetes service constructs to create highly available services in a mixed container and VM environment, and you can do it without the need for any external components. This approach can be very handy in small-footprint and edge deployments where container and VM workloads coexist.


Fatih Nar

Fatih (aka The Cloudified Turk) has been involved for several years in the Linux, OpenStack, and Kubernetes communities, influencing development and ecosystem cultivation, including for workloads specific to telecom, media, and entertainment.


Rimma Iontel

Rimma Iontel is a chief architect in Red Hat's Telecommunications, Entertainment, and Media (TME) Technology, Strategy, and Execution office. She is responsible for supporting Red Hat's global ecosystem of customers and partners in the telecommunications industry.
