
Kubernetes is a well-established platform for hosting microservices. It facilitates a cloud-native approach to application development. Coupled with DevOps and GitOps tooling, it has essentially become a standard platform for containerized services across multiple industries.

However, Kubernetes alone is unlikely to address all of your needs for application development and for the post-deployment operational tasks that keep those applications running in a mature, reliable, and predictable way.


Complementary solutions fill gaps and address weaknesses on platforms where Kubernetes is the underlying engine. They come as Kubernetes-native packages known as Kubernetes Operators, available on the open source OperatorHub, and include GitOps and DevOps pipelines, service mesh, performance-monitoring tools, and multicluster management.
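
These packages are typically installed through the Operator Lifecycle Manager (OLM). As a minimal sketch, assuming OLM is present and using the Service Mesh operator's catalog entry purely as an illustration, a Subscription resource pulls an operator into the cluster:

$ cat servicemesh-subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: servicemeshoperator
  namespace: openshift-operators
spec:
  channel: stable           # update channel; varies by operator
  name: servicemeshoperator
  source: redhat-operators  # catalog source providing the package
  sourceNamespace: openshift-marketplace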

An end-to-end technology stack is a good starting point. But the goal is a deployment model that reaches consumers and supports backend systems wherever they are while delivering high performance at a reasonable cost. This is how you convert a technology stack into a successful business solution. When those service targets sit at the network edge, as they do for many media and communications offerings, edge computing becomes crucial.

This article examines Kubernetes deployment models for edge applications. It addresses enabling north-south (external consumers) and east-west (backend systems) communication between different infrastructure types hosting the same application platform for developer and operational consistency.

The need and possible solutions

Placing certain services in close proximity to the consumers has great benefits, including low-latency response, bandwidth consumption savings, and data locality. However, there are also multiple challenges. One of the key challenges with the Kubernetes deployment model is the placement of the Kubernetes control plane that manages the workers that comprise the resource pools consumed by the applications and services. The two main options for control plane placement are:

  • Deploying full-fledged cluster(s), complete with control nodes and worker nodes, everywhere you need your applications to be accessible
  • Deploying worker nodes at the edge and connecting them to the central location hosting the control plane
Option 1: Full cluster model

You can simplify option one (the full-cluster model shown above) with innovative deployment models:

  • A compact high availability (HA) cluster with a minimum of three nodes accommodating both control plane and worker node roles
  • An all-in-one, single-node standalone cluster

Both compact deployment models still require running a control plane at each location.

Option two (the remote worker approach shown below) eliminates the overhead of having a dedicated control plane at each location. Still, it may not be feasible if significant latency, intermittent connectivity, or insufficient bandwidth between the Kubernetes control plane and the worker locations prevents the cluster's internal services and operations from functioning correctly.

Option 2: Remote worker approach

Suppose the network connectivity between the core cluster hosting the Kubernetes control plane and the remote worker nodes meets performance requirements (for example, when round-trip latency stays well within the kubelet's node-status-update-frequency interval). In that case, you can use remote worker nodes (RWNs) to cost-optimize the distributed application platform solution. We refer to this approach as "grid-platform": the central site performs control and management tasks while remote sites deliver a platform with consumable resources.
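
For a rough sense of the timers involved, the kubelet's status-reporting interval is set in its configuration, while the kube-controller-manager's --node-monitor-grace-period flag (40 seconds by default in many releases) controls how long a silent node stays Ready. The sketch below shows the upstream kubelet configuration field with its default value; actual tuning depends on your WAN characteristics:

$ cat kubelet-config-excerpt.yaml
# Excerpt from an upstream kubelet configuration file
# (default value shown; tune for the link between sites)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: 10s  # how often the kubelet reports node status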


What is grid-platform?

While making the application platform available wherever necessary, you also need to secure the traffic between applications hosted on a Kubernetes cluster and ensure that breakout traffic to and from consumers is optimally placed for performance, low cost, and a secure communication path. The diagram below shows a high-level view of RWNs.

High-level view of remote worker nodes

Central Kubernetes clusters get deployed in selected geolocations to serve nearby consumers. The remote workers expand the reach of the cluster to remote sites without affecting the integrity of the cluster control plane, maintaining its high availability and scalability. The diagram below shows a solution topology using a central cluster expanded with RWN.

Solution topology: Central cluster expanded with RWN

In the distributed deployment model, remote workers need access to the relevant cluster-internal communications so that the cluster control plane can monitor and manage them and schedule workloads onto them through the cluster scheduler. The remote workers also need to participate in the cluster domain name service (cluster-dns) hosted on the control plane nodes, which enables service discovery, including for service mesh solutions, across the whole cluster.
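
One quick sanity check, sketched here with a hypothetical node name, is to pin a throwaway pod to a remote worker and resolve a cluster service through cluster-dns:

$ oc run rwn-dns-check --rm -it --restart=Never --image=busybox \
    --overrides='{"spec":{"nodeName":"<remote-worker-node>"}}' \
    -- nslookup kubernetes.default.svc.cluster.local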

Networking under the hood

Networking is the key functionality in every distributed computing solution, so it is a critical part of Kubernetes clusters. Central cluster nodes run on the same infrastructure and therefore share similar primary networking configurations, including network interface definitions, network bridges, routes, and DNS server settings. Remote workers, however, are typically deployed on different infrastructure, so they normally need site-specific networking configurations.
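
One way to express that per-site configuration declaratively, assuming the Kubernetes NMState operator is available and using illustrative interface names and labels, is a NodeNetworkConfigurationPolicy scoped to the remote workers:

$ cat rwn-site-network.yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: rwn-site-a-uplink
spec:
  nodeSelector:
    site: edge-a            # illustrative label on the site's RWNs
  desiredState:
    interfaces:
    - name: eth1            # site-specific uplink interface
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: true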

The Kubernetes community is increasingly adopting Open Virtual Network (OVN) fabric combined with Internet Protocol security (IPSec). This combination lets you assign egress traffic from tenant namespaces to desired worker nodes through node labeling, breaking traffic out on premises at the RWNs.


You should consider the RWN approach mainly for workloads where a short-term loss of the control plane would not cause critical service outages. The latency between remote workers and the control plane nodes must stay low enough that keepalive timers do not expire, so the control plane doesn't mark the RWNs as unhealthy or unreachable. With that in place, egress placement is configured declaratively. For example, the following EgressIP resource assigns egress IPs to namespaces labeled env: prod, and the label command marks a node as eligible to host them:

$ cat egress-ip.yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: edge-test-egressip
spec:
  # Egress source IPs; OVN assigns them to egress-assignable nodes
  egressIPs:
  - 172.27.200.5
  - 172.27.200.6
  # Select the tenant namespaces whose traffic uses these egress IPs
  namespaceSelector:
    matchLabels:
      env: prod
$ oc label nodes ip-172-27-201-49.ec2.internal k8s.ovn.org/egress-assignable=""

OVN-IPSec cluster networking allows cluster traffic (north-south and east-west) to exit the cluster at the desired location through remote worker nodes performing the networking breakout. You can apply this per tenant using namespace label selectors while controlling exactly which remote worker node the traffic exits through.
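
To tie the pieces together, label a tenant namespace so the EgressIP's namespaceSelector matches it, then check which egress-assignable node received the assignment (the namespace name here is hypothetical):

$ oc label namespace media-app env=prod    # matches the selector in egress-ip.yaml
$ oc get egressip edge-test-egressip       # reports the assigned node and egress IPs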

Allowing network breakouts on remote worker nodes enables secure, low-latency access to consumers and backend systems.

Expand Kubernetes clusters

Telecommunications and media solutions use widely distributed systems over multiple geolocations, allowing them to reach a greater consumer base, be it human subscribers or machine-to-machine systems.

Kubernetes, with its origins in an enterprise datacenter, was not intended for deployment across distributed locations. But that doesn't mean it can't grow and adjust. This article offers some possible solutions to expand the scale of a Kubernetes cluster while constraining the failure domain.

It also detailed distributed Kubernetes cluster networking and discussed how it allows Kubernetes clusters to provide reliable, low-latency access across wide geographical areas. This can be of significant value for many modern services, including 5G.


This article is adapted from Episode-II The Grid on Medium and is republished with permission.


About the author

Rimma Iontel is a Chief Architect responsible for supporting Red Hat’s global ecosystem of customers and partners in the telecommunications industry. Since joining Red Hat in 2014, she’s been assisting customers and partners in their network transformation journey, helping them leverage Red Hat open source solutions to build telecommunication networks capable of providing modern, advanced services to consumers in an efficient, cost-effective way.

Iontel has more than 20 years of experience working in the telecommunications industry. Prior to joining Red Hat, she spent 14 years at Verizon working on Next Generation Networks initiatives and contributing to the first steps of the company’s transition from legacy networks to the software-defined cloud infrastructure.
