In the previous article, we outlined how to connect applications that run across hybrid cloud environments. We saw how a layered approach with an enterprise Kubernetes platform, API management and service mesh can address north-south, east-west and network connectivity with the right isolation and separation of concerns.
Hybrid cloud environments and cloud-native applications are evolving, and with them the requirements for application connectivity evolve as well. Therefore, unified solutions that address network and application connectivity concerns together are required to provide abstraction and observability across the environment.
In this article, we describe the use cases and concerns driving the future hybrid cloud connectivity considerations and present a new, comprehensive application connectivity solution for hybrid cloud environments.
The evolution of hybrid cloud connectivity
The hybrid cloud experience extends beyond a single Kubernetes cluster or a single cloud provider/host. In this reality, applications can be distributed across multiple cloud providers, multiple clusters, on-premise virtual machines (VMs), bare-metal hosts or SaaS services. Relying on a cluster as the permanent boundary of an application, and binding the application's identity to that cluster, may therefore prevent organizations from realizing the benefits of multicluster and multicloud hybrid deployments.
To address hybrid cloud application connectivity needs and embrace the benefits of multicluster application deployment topologies, a next-generation connectivity solution is needed. How do you deliver seamless connectivity to applications moving across multiple cloud environments and clusters while adapting existing application access control and networking rules, resolving service dependencies and preserving auditability and observability?
To let developers abstract where their applications run and to open up applications for transparent movement, replication and failover across clusters, this solution must provide a unified architecture beyond Kubernetes clusters.
Such an architecture must have a global networking reach, and allow traffic to flow between Kubernetes clusters, within Kubernetes, between services, data and control planes. Application and business layer connectivity concerns like authorization, authentication, rate limiting, traffic management, observability and telemetry should also be addressed seamlessly across the hybrid cloud environment.
While any externally accessible application that runs on Kubernetes will require some form of ingress solution, ingress itself is not part of core Kubernetes and each provider’s implementation may differ in terms of both technology and capabilities. If your applications need to span multiple Kubernetes clusters, you need to go beyond single cluster ingress and enable a global ingress load-balancing solution that can route traffic to those clusters.
We listed the main types of application connectivity needs previously. Let’s see a few concrete hybrid cloud use cases that developers are faced with today that relate to each connectivity area.
- Moving an application across clusters: The admins have moved my application onto a different cluster as part of an environment migration. They have set up roles and namespaces on the new cluster similar to those on the old one. I need an easy way to automatically route traffic to my newly deployed application without changing the URL endpoints I provided to my consumers, and with no consumer-visible disruption (connection failures, HTTP errors, hangs or timeouts). Along with routing, the authentication checks, rate limiting rules and other policies I configured should also be applied automatically to my application in the new Kubernetes cluster.
- Transparent application dependencies: During the redeployment of my application onto a different cluster, the services my application depends on remained on the old cluster. My application needs to continue accessing these dependencies via Kubernetes service references even when the rest of my application is on a different cluster. My users cannot afford to experience any disruption or errors, and the connectivity should be reconciled automatically. A microservices application using a service mesh across services will also need to implicitly support multicluster service mesh configuration with a centralized control plane.
- Load balancing traffic to applications across clusters: I have multiple instances of a critical application running in a multicluster environment for resiliency purposes. I want a single URL and global gateway rules to control traffic and load balance across all instances of the application, without having to set up multiple URLs and access rules or use an external DNS service or load balancer. I want authorization and rate limiting for both ingress and egress to be applied globally across all instances of my application, regardless of which cluster they are deployed onto.
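As a concrete illustration of the last use case, the Kubernetes Gateway API can split traffic behind a single URL by weight; assuming an implementation that also supports the multi-cluster Services API (ServiceImport), the same route can reach instances exported from another cluster. This is a sketch under those assumptions, with illustrative names, not a definitive configuration:

```yaml
# Sketch: weighted traffic split behind a single URL.
# Assumes a Gateway named "global-gw" and an app exported from a second
# cluster via the multi-cluster Services API (ServiceImport). Whether
# ServiceImport is accepted as a backendRef depends on the Gateway API
# implementation in use.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: critical-app
spec:
  parentRefs:
    - name: global-gw
  hostnames:
    - app.example.com
  rules:
    - backendRefs:
        - name: critical-app            # local Service in this cluster
          port: 8080
          weight: 80
        - group: multicluster.x-k8s.io  # ServiceImport from another cluster
          kind: ServiceImport
          name: critical-app
          port: 8080
          weight: 20
```

Because clients only see `app.example.com`, the weights (or the set of backends) can change as instances move between clusters without touching any consumer-facing URL.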
Application connectivity considerations
The case for consolidating the control plane
While today’s technologies provide an approach to solve the concerns presented, a number of barriers remain that often require developers to understand and manage multiple cloud and cluster control and data planes—and their configurations. From the perspective of application developers and administrators, dealing with deployment and connectivity of applications in a multicloud/multicluster environment presents a number of unique challenges:
- Developer workflows target individual clusters: There is no shared abstraction between a cluster, multiple clusters, local development or cloud APIs, leading to one-off approaches for each cluster or cloud provider.
- No standard API for multicloud connectivity: Admins need to set up policies for each cloud provider and each cluster separately, on different control planes and using different APIs.
- Migration between cloud services and clusters: Moving an application to a different cluster or cloud service requires a migration flow. Because cloud vendors have different APIs and network access setups, each target may require its own workflow.
- Global policy management: One needs the ability to configure application-level policies like authorization, rate limiting, mutual trust and service connectivity globally, even when applications and their dependencies are spread across multiple environments.
Thus, in a world where multicloud deployments are preferred to prevent vendor lock-in and increase resiliency of applications, addressing these concerns through either a deployment pipeline or in a development workflow is not an ideal solution.
Today, network-level concerns are handled through edge gateways and Kubernetes Ingress, while application- and business-level concerns are handled by API gateways or service mesh ingress gateways. This split makes it harder for administrators to safely visualize and isolate traffic and access, and harder for developers to self-manage application connectivity.
Seamless or consistent cloud, multicluster and single cluster usage is key for application developers. Having a control plane at the next level up to abstract the complexity of dealing with multiple clusters across different cloud providers greatly simplifies the overall flow. This control plane can centralize policies and enforce boundaries aligned to organizational structure rather than to the physical topology of the deployment platform.
Making it easier to treat clusters as disposable and replaceable without changing developer and admin experiences is far more desirable than treating each individual cluster as indispensable and irreplaceable. Instead of reinventing the APIs and control plane for Kubernetes clusters, applying the APIs at an intermediate level and propagating them to individual clusters makes it easier to retain existing workflows and APIs while providing transparency in cluster management. In this scenario, these advantages are unlocked:
- A developer describes a service exactly once, and it runs locally, scales globally and is reachable everywhere.
- A service can be moved from the datacenter to a cloud or edge environment without change or disruption.
- An admin can add a new capability to a cluster, a cloud environment or a set of clusters in exactly the same way, with the same code.
- An admin can consolidate the data in their organization, make it multicloud-resilient, and define security policy and auditing no matter where the data lives.
- Every workload has security policies applied, and can be identified and audited wherever it runs.
Data plane ingress and policies
Running the “next layer up” of ingress/routing/identity and access management (IAM) is a continuous challenge for customers. This currently requires multiple APIs and proxies from ingress, service mesh and API gateway implementations.
A typical application connection to a service on Kubernetes is routed through a DNS load balancer, an ingress/route and an Istio Envoy or API gateway before finally reaching the service. Each proxy adds a layer of processing, increases response time and introduces one more point of failure on the way to the service. Each proxy layer has different permissions, rules and policies to manage and set, each from a different control plane.
Consolidating data plane traffic to flow through a single proxy and a standard way to enforce ingress, mesh and gateway rules into Kubernetes clusters helps ensure consistency and improve performance.
A consolidated data plane can bring together these concerns through a high-level Kubernetes API and extensions as follows:
- Kubernetes Gateway API: a collection of resources that provide connectivity to Kubernetes services, improving on Ingress by being:
  - Role-based: it models the organizational roles that use and configure Kubernetes service networking.
  - Expressive: it offers core functionality for things like header-based matching and traffic weighting.
  - Extensible: custom resources can be linked in at various layers.
- Service mesh: the Gateway API has standardized, vendor-neutral implementations, including Istio.
- API gateway: Istio's Envoy proxy and its extensions support authorization, rate limiting and business/application-layer connectivity concerns.
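To make the role-based split concrete, here is a minimal sketch of the two Gateway API resources involved: a platform or cluster admin owns the Gateway, while an application team attaches an HTTPRoute to it, using header-based matching and separate backends. The names, namespaces and gateway class are illustrative, not prescribed by any product:

```yaml
# Owned by the platform/cluster admin: where and how traffic enters.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: istio        # implementation-specific class name
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All              # let app teams attach routes
---
# Owned by the application team: header-based canary routing.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
  namespace: my-team
spec:
  parentRefs:
    - name: shared-gateway
      namespace: infra
  rules:
    - matches:
        - headers:
            - name: x-canary
              value: "true"
      backendRefs:
        - name: my-app-canary    # requests with x-canary: true
          port: 8080
    - backendRefs:
        - name: my-app           # everything else
          port: 8080
```

The separation of Gateway and HTTPRoute is what lets each persona manage only its own concern from a single API, rather than juggling ingress, mesh and gateway configurations separately.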
Thus, just as a central control plane aggregates management, an aggregated Kubernetes API for gateways makes it possible to configure network- and application-layer connectivity concerns through a single API and a single enforcement point on the cluster.
The future of cloud connectivity is application-centric networking, with network details hidden behind abstractions for end users. To help provide stronger security posture and better support modern applications, the foundations of networking must isolate workloads with greater security controls and adopt a zero-trust model between participants that is orchestrated by explicit, top-level application and organizational relationships.
Application connectivity builds on this model to extend it to individual applications, across environments while correctly isolating workloads. This includes application concerns like authorization, rate limiting, service identity and trusted communication. A gateway that can provide a single configuration control plane and unified data plane access is an essential consideration in achieving application connectivity across a hybrid cloud environment.
Hybrid cluster management
Red Hat Advanced Cluster Management for Kubernetes provides a single-pane management of multiple Kubernetes clusters spread across multicloud environments at scale. This provides unified multicluster management, governance and compliance, application lifecycle management and observability in a uniform way across the environment.
Red Hat Hybrid Cloud Console provides the control plane for advanced cluster management. An extension of this functionality with a Kubernetes-compatible control plane can provide transparent multicluster and fleet-wide APIs. Having a single control layer helps to expose all of the functionality to end users to abstract the implementation details and physical boundaries across the environment. Application connectivity concerns can be addressed in a similar fashion.
Hybrid gateway for application connectivity
The hybrid gateway is the union of network, service mesh, API gateway and global ingress capabilities. It is not just about north-south traffic into an application from the edge or outside the network; it provides a comprehensive connectivity management solution.
Application connectivity management through global configurations can be achieved using the managed hybrid control plane. These configurations can be propagated to individual clusters, so that traffic and application policies can get to applications on individual clusters and are applied consistently in the data plane. Such an approach can hide the physical boundaries of clouds and clusters and offer a true hybrid cloud connectivity experience.
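As a hypothetical sketch of what such a globally propagated configuration could look like, consider a rate-limit policy attached to a route once at the hybrid control plane and enforced by the data plane on every cluster the application lands on. The group, kind and fields below are invented for illustration (Red Hat's Kuadrant project defines policy resources in this spirit); only the attachment pattern, a policy targeting a Gateway API resource, is the point:

```yaml
# Hypothetical policy resource -- group, kind and fields are illustrative
# only. Defined once at the hybrid control plane, propagated to each
# cluster's data plane for enforcement.
apiVersion: policy.example.com/v1alpha1
kind: RateLimitPolicy
metadata:
  name: critical-app-limits
spec:
  targetRef:                 # policy attachment, as in Gateway API policies
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: critical-app
  limits:
    - requestsPerSecond: 100 # intended to apply globally, across clusters
```

Because the policy targets the route rather than a cluster-specific gateway, it can follow the application when the route is rescheduled onto a different cluster.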
Applications that span or move across clusters need to be reachable and interconnected without interruption. This requires:
- The ability for clients to connect to an application anywhere it needs to be.
- The ability for organizations to define traffic and security policies at multiple levels.
- Cloud and on-premise environments that work together as needed.
- A uniform approach spanning low-level on-premise networking up to API management.
Hybrid cloud gateway architecture
This architecture has two main components:
Data plane: delivered as an on-cluster component/agent that provides the Gateway API via Istio and Envoy extensibility. It consolidates ingress traffic to clusters and addresses application-centric networking concerns by providing a uniform way to get traffic into clusters anywhere, interconnecting apps across clusters and adding advanced service capabilities (authorization, rate limiting, monitoring and API capabilities) incrementally as developers need them.
The data plane also offers other capabilities necessary for multicluster application interconnectivity and global rate limiting by consolidating, configuring and replacing the existing cluster and service mesh components.
Hybrid control plane: The hybrid control plane (HCP) is conceived as a Kubernetes-based application control plane that spans multiple Kubernetes clusters and cloud environments. It offers unique networking capabilities like movement, capacity awareness and resilience.
Managed by Red Hat with a shared responsibility model, it represents a shared control plane and cluster-agnostic end-user APIs, and is able to gracefully degrade to single cluster operations, as well. The control plane will also orchestrate the appropriate cloud (or non-cloud) infrastructure to enable multicluster and private interconnect via DNS, load balancing, network reachability, VPC, etc., either directly or within customer accounts.
Application connectivity summary
As we have covered in these articles, the unique challenges presented by hybrid cloud environments necessitate a re-imagining of developer experiences. The growth of application deployments and dependencies spanning multiple clusters, cloud environments and on-premise environments leads to increasing complexity in managing connectivity concerns.
Additionally, providing transparency in multicluster environments, with automatic load balancing, failover across cloud environments, dependency resolution across clusters and application access rules managed across deployments, adds further complexity.
At Red Hat, we believe hybrid cloud computing dictates a new application connectivity architecture that goes beyond the north-south and east-west view of application traffic. This new approach abstracts Kubernetes cluster and cloud boundaries with global configuration capabilities and a unified management layer.
As a leader in hybrid cloud computing, Red Hat has been driving the adoption of multicloud container platforms, like Red Hat OpenShift, and multiple cloud-management solutions, like Red Hat Advanced Cluster Management, to greatly simplify deployment and management of hybrid cloud environments.
Having vast experience with management, DevOps, automation and security isolation for enterprise customers with Red Hat Enterprise Linux (RHEL), Red Hat OpenShift and Red Hat Advanced Cluster Management, Red Hat is uniquely positioned to address these challenges globally, and not just from an edge, ingress, API management or application service perspective.
The application connectivity conundrum is solvable through the approaches outlined here, using abstraction and centralized control plane to address both administrator and developer concerns. Red Hat’s cloud gateway adds application connectivity to the existing suite of cloud management solutions provided for enterprise customers.
About the authors
Satya Jayanti leads product strategy and delivery for application connectivity to containerized multicloud applications. He has over two decades of experience in enterprise integration, Java, middleware and application development. Jayanti is passionate about technology, and loves sharing his knowledge through webinars, workshops and in conferences.
Joe O'Reilly is a technology specialist with over 20 years' experience in the software industry. He heads up Cloud Application Services product management, with a focus on data streaming and eventing infrastructure, application connectivity and developer services.
Bilgin Ibryam is a product manager and a former architect at Red Hat. He is a regular blogger, open source evangelist and speaker, and the author of Camel Design Patterns and co-author of Kubernetes Patterns. He has over a decade of experience in designing and building scalable and resilient distributed systems. Follow him @bibryam for regular updates on these topics.