As your organization scales its Red Hat OpenShift platform to support mission-critical workloads, your networking requirements often extend beyond a single load balancing solution. Many environments adopt a hybrid approach: Use software-defined load balancers (such as MetalLB) for internal, east-west traffic, and rely on enterprise-grade appliances like F5 BIG-IP to handle public-facing ingress at the network edge. However, operating multiple load balancer controllers within the same OpenShift cluster requires careful governance. Without clear boundaries, controllers can attempt to manage the same Kubernetes service resources, leading to unpredictable behavior and operational risk.

Why governance matters in multi-controller environments

In a large-scale OpenShift deployment, networking is a foundational component of platform reliability and security. A governed approach to load balancing helps organizations meet several critical objectives:

  • Operational stability for production workloads: Prevents race conditions and inconsistent IP assignments when multiple controllers attempt to reconcile the same service.
  • Clear separation of internal and external traffic: Helps ensure internal application endpoints are not inadvertently exposed through external-facing infrastructure.
  • Flexibility without platform lock-in: Allows teams to use software-defined and hardware-based load balancers together, selecting an appropriate solution for each use case.
  • Low operational overhead: Reduces manual intervention and troubleshooting by allowing controllers to act only on services they are explicitly responsible for.

The challenges of controller contention

By default, any controller capable of handling a Kubernetes service of type LoadBalancer can attempt to do so. In clusters where multiple controllers are active, this can result in contention, with each controller independently trying to assign an address or configure networking for the same service.

The consequences include IP reassignment, configuration drift, and increased operational noise. In regulated or security-sensitive environments, this lack of determinism can introduce unacceptable risk.

Intent-based control with loadBalancerClass

OpenShift addresses this challenge by supporting the Kubernetes loadBalancerClass field. This field allows platform teams to explicitly associate a service with a specific load balancer implementation. Controllers that do not recognize or own the specified class ignore the service, eliminating contention and enforcing clear responsibility boundaries.

Implementing a governed, multi-tier load balancing model

With OpenShift, you can take a governed approach to load balancing. It's a two-step process:

1. Internal services use MetalLB

For internal services, MetalLB can be configured as the default load balancer by omitting the loadBalancerClass field. MetalLB reconciles these services automatically.

apiVersion: v1
kind: Service
metadata:
  name: svc-internal-metallb
spec:
  type: LoadBalancer
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
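Before MetalLB can assign an address to such a service, it needs an address pool and an advertisement resource. The sketch below shows a minimal Layer 2 configuration; the pool name, namespace, and address range are illustrative assumptions and must match your environment:

```yaml
# Illustrative MetalLB configuration (names and range are assumptions).
# IPAddressPool defines the addresses MetalLB may hand out.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: internal-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.10.100-192.168.10.150
---
# L2Advertisement announces pool addresses via ARP on the local network.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: internal-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - internal-pool
```

With this in place, MetalLB assigns the service an address from internal-pool and advertises it from one of the speaker nodes.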

2. External services use an enterprise load balancer

For externally exposed services, the loadBalancerClass field is explicitly set. MetalLB detects that it is not the intended provider and ignores the service, allowing the designated external controller to manage it.

apiVersion: v1
kind: Service
metadata:
  name: svc-external-edge
spec:
  type: LoadBalancer
  loadBalancerClass: f5.com/cis
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
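You can confirm which controller owns each service by inspecting the class and assigned address together. A quick check, run against your cluster (the service names match the examples above):

```shell
# List services with their load balancer class and assigned external IP.
# Services with an empty CLASS column are reconciled by MetalLB (the default);
# those showing f5.com/cis are ignored by MetalLB and handled by the F5 controller.
oc get svc -o custom-columns='NAME:.metadata.name,TYPE:.spec.type,CLASS:.spec.loadBalancerClass,EXTERNAL-IP:.status.loadBalancer.ingress[0].ip'
```

If an external service stays in a pending state with no address, verify that the controller owning its class is running and watching the correct namespaces.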

Hardening MetalLB operation on OpenShift

MetalLB relies on "speaker" pods to advertise service addresses using ARP or BGP. On OpenShift, these speaker pods require elevated permissions to interact with the host network. Specifically, the speaker service account must be granted the hostnetwork and privileged security context constraints (SCCs):

oc adm policy add-scc-to-user hostnetwork -z speaker -n metallb-system
oc adm policy add-scc-to-user privileged -z speaker -n metallb-system

Restart the pods to apply the new security context:

oc delete pod -l component=speaker -n metallb-system
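After the restart, it is worth confirming that the speaker pods came back healthy and picked up the new security context. A quick verification, assuming the default metallb-system namespace:

```shell
# Verify all speaker pods are Running after the restart.
oc get pods -n metallb-system -l component=speaker

# Confirm the SCC actually applied to a speaker pod
# (the annotation records which SCC admitted the pod).
oc get pods -n metallb-system -l component=speaker \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
```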

Recommended practices

Here are some things to keep in mind as you consider load balancing:

  • Define load balancer classes early: Establish clear and consistent class names across environments to avoid ambiguity and configuration drift.
  • Plan for immutability: The loadBalancerClass field cannot be modified on an existing service. Changes require a delete and recreate workflow, which should be incorporated into your GitOps pipelines.
  • Monitor speaker health: Speaker pod availability directly affects address advertisement. Monitoring and alerting should be in place to detect unexpected restarts or failures.
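Because loadBalancerClass is immutable, moving a service between controllers means a delete-and-recreate cycle rather than an in-place update. A minimal sketch of that workflow (the manifest filename is an assumption; in a GitOps setup the recreate is typically driven by the reconciler rather than run by hand):

```shell
# loadBalancerClass cannot be patched on a live service:
# delete the service, then re-apply the manifest with the new class set.
oc delete service svc-external-edge

# Hypothetical manifest path; it now carries the desired loadBalancerClass.
oc apply -f svc-external-edge.yaml
```

Note that the delete briefly tears down the service's virtual IP, so schedule the change inside a maintenance window for production traffic.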

Conclusion

Supporting multiple load balancer controllers in a single OpenShift cluster is a common enterprise requirement, but it must be approached deliberately. By using loadBalancerClass to assign each service an explicit owner, this model allows organizations to balance developer agility with enterprise networking controls—without sacrificing reliability or operational clarity.


About the author

Viral Gohel is a Senior Technical Account Manager at Red Hat. Specializing in Red Hat OpenShift, middleware, and application performance, he focuses on OpenShift optimization. With over 14 years at Red Hat, Viral has extensive experience in enhancing application performance and ensuring optimal OpenShift functionality.
