A successful hybrid cloud architecture is one that addresses how to build, deploy, manage and connect a hybrid mix of applications across a hybrid infrastructure environment. These applications will span multiple infrastructure footprints: cloud providers, customer datacenters and multiple Kubernetes clusters, as well as systems that run on virtual machines (VMs), bare metal and edge environments.
The requirements for application connectivity bring together concepts and technologies that have previously been considered distinct. An ideal hybrid cloud networking solution must address traffic concerns in a unified way, managing the low-level global networking infrastructure and the higher-level application connectivity concerns. Kubernetes and Linux containers provide the foundation for connecting applications that run on that platform to end users, to other application services on the same platform, and to services that run outside of the platform.
But the requirements for more seamless application connectivity in a hybrid cloud environment go beyond that. Application connectivity requires service isolation, authorization, rate limiting and traffic policies that can be configured by the application developers to protect their application.
Building, deploying and managing applications is greatly simplified with Red Hat OpenShift, an enterprise Kubernetes platform. To scale this across clusters and multiple cloud providers, Red Hat Advanced Cluster Management for Kubernetes includes capabilities that unify multicluster management, provide policy-based governance and extend application life cycle management.
In this article, we look at how Red Hat OpenShift, Red Hat OpenShift Service Mesh and Red Hat OpenShift API Management can provide a comprehensive solution for connecting applications across hybrid cloud environments. As hybrid cloud platforms continue to evolve, there are opportunities to provide next-generation application connectivity capabilities for multicluster and multicloud application deployments.
Understanding application connectivity
Applications require connectivity. Whether you’re building a front-end application that gets used directly by end users via a user interface or application programming interface (API), or one of the multitude of back-end services that support those user-facing applications, it is important to provide more reliable connectivity throughout.
Key considerations for application connectivity in a hybrid cloud environment include:
Connecting application services to users: How do you make your applications available to end users and manage access to those applications in a more secure fashion, while delivering a great user experience and meeting the needs of the business?
Connecting services to other services: How do you connect all of the back-end services that support your application and deliver higher levels of security and performance for those connections, while dealing with increasingly distributed application environments?
Connecting and consuming third-party services: How do you connect your applications to services from leading third-party cloud service providers, without limiting where your application can run or restricting innovation to a single provider?
Addressing these questions is key to delivering a great application experience.
Connectivity requires more than Kubernetes
While Kubernetes provides a platform for orchestrating and managing cloud-native applications, application connectivity requires additional capabilities.
Connecting Kubernetes applications to end users
In a Kubernetes cloud-native environment, the notion of an “application” is loosely defined. Applications may consist of one or more Kubernetes services, where each service is a proxy that fronts one or more pods running application instances in containers. Regardless of how many pods or services make up your application, ultimately you want to help users access that application.
Kubernetes Ingress exposes routes from outside the cluster to services within the cluster, supporting a north-south traffic pattern. Ingress integrates with your domain name system (DNS) to give services externally-reachable URLs, to load balance traffic, to terminate secure sockets layer/transport layer security (SSL/TLS), and to offer name-based virtual hosting. Red Hat OpenShift supports standard Kubernetes Ingress load balancing as well as Red Hat OpenShift Routes, which was an earlier implementation of the same concept.
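As a minimal sketch, a standard Kubernetes Ingress for a hypothetical storefront service might look like the following; the host name, Secret name and Service name are illustrative:

```yaml
# Hypothetical Ingress exposing the "storefront" Service outside the
# cluster, with SSL/TLS terminated at the ingress layer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront
spec:
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-example-com-tls   # TLS certificate stored in a Secret
  rules:
  - host: shop.example.com             # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: storefront           # the Kubernetes Service fronting the pods
            port:
              number: 8080
```

On OpenShift, an equivalent Route resource can be created directly, or generated from an Ingress by the platform router.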
In addition to network-layer accessibility of an application endpoint, clients also require an application-layer contract in the form of an API. This API contract needs to be discoverable by clients outside the Kubernetes cluster and to support self-service registration for authorized access and greater security functionality via OpenID Connect and OAuth.
When application APIs are part of a customer-facing product, detailed usage analytics and monetization are vital to measure business impact and charge for consumption. Red Hat OpenShift API Management, powered by Red Hat 3scale API Management, provides these capabilities as both a managed service offering and on-premise software solution.
Connecting Kubernetes services to other Kubernetes services
Typically a front-end application is a service deployed on Kubernetes and exposed to end users via Ingress. That front-end service typically needs to connect to other services running on Kubernetes—often to many of them—to do anything useful. While Kubernetes services help you connect to those pods and Kubernetes manages the health of pod instances, you will need more than Kubernetes to have reliable application interactions.
Every Kubernetes cluster will require a networking solution to manage the actual connectivity of services in this east-west traffic pattern. The Kubernetes Container Networking Interface (CNI) allows users to connect their choice of software-defined networking (SDN) options. Red Hat OpenShift includes a default Red Hat OpenShift SDN, while also helping users to take advantage of third-party certified SDN options.
But in a distributed microservices-based application architecture, developers and operations teams will often need to go further to enhance the security posture of service-to-service communications, diagnose issues and manage the rollout of new services. Red Hat OpenShift Service Mesh provides a uniform way to connect, manage and observe microservices-based applications.
Microservices architectures tend to form around business domain boundaries in an organization, which introduces an additional set of considerations beyond connecting individual microservices together. This connection across applications and domain boundaries requires many of the same capabilities used in north-south traffic, including self-service onboarding, usage tracking, rate limiting and custom policies. Unlike north-south, this traffic remains on the internal network of the Kubernetes cluster and should be transparent to the application’s API clients and providers. A merge of API management capabilities with service mesh provides the best of both worlds.
Connecting Kubernetes services to external services
Most Kubernetes applications will also need to use services that run outside of your Kubernetes cluster(s) to function. That could be a database or enterprise resource planning (ERP) system in your datacenter, a Software-as-a-Service (SaaS) application, or a native service of your public cloud provider that adds key functionality to your applications.
While Kubernetes helps you connect your application to any service running on or off cluster, it’s important to make it easier for developers to find the services they need and quickly connect them into their applications. Red Hat OpenShift Service Mesh provides an API registry that acts as a catalog of available services inside and outside the cluster. Service bindings allow endpoints and secure credentials to be mapped directly into application pods to ease the development burden. Finally, a forward API proxy applies the same controls to outbound traffic to external API endpoints as to incoming traffic, such as rate limiting and metering access to a paid external API across all internal applications to control expenses and allow chargeback.
Managing application connectivity
The standard separation of concerns for modern application platforms is generally:
Control plane: A management interface for configuring the connectivity policies. This could include the Red Hat OpenShift Management web console, Kiali service mesh manager, Red Hat OpenShift API Management admin console, and CLI/APIs. A control plane may be deployed either in the same cluster with other services, or as a managed control plane outside the cluster.
Data plane: Network access to the service for request/response traffic. This could include intermediary layers like Kubernetes Ingress, Red Hat OpenShift Routes, Proxy, API gateway and Istio ingress gateways. Data planes are deployed distinct from the control plane, and typically on the cluster to be close to all related backend services.
Today, a comprehensive application connectivity solution on Kubernetes requires a layered approach incorporating Red Hat OpenShift Service Mesh and Red Hat OpenShift API Management technologies. Kubernetes, as the foundational container management layer, enables low-level networking, ingress and routing to the deployed application containers. The management technologies layered on top rely on Kubernetes to handle ingress while providing enriched application-layer capabilities.
In the resulting architecture, Kubernetes provides the networking and ingress functionality, Red Hat OpenShift Service Mesh provides the advanced east-west and intra-service connectivity policies, and Red Hat OpenShift API Management provides north-south gateway policies. Combined, these solutions also have ingress capabilities for controlled access to external services.
Both service mesh and API gateway provide access control to service endpoints. A service mesh typically operates at layers 4 through 7, providing rate limiting, routing rules, mutual TLS, service identity, chaos engineering and ingress rules. An API gateway provides security capabilities at layer 7, with rate limiting, authentication and authorization.
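As an illustration of a mesh-level security policy, Istio (the upstream project behind OpenShift Service Mesh) can require mutual TLS for all workloads in a namespace with a single PeerAuthentication resource; the namespace name here is hypothetical:

```yaml
# Require mTLS for all service-to-service traffic in the "storefront" namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: storefront
spec:
  mtls:
    mode: STRICT   # reject plain-text connections between sidecars
```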
Network connectivity considerations
Each Kubernetes deployment comprises a distinct cluster, which consists of a control plane and a set of worker nodes for running containerized applications. Applications are packaged as container images and placed onto worker nodes in a Kubernetes cluster.
Broadly, Kubernetes provides access control at the infrastructure level (L1-L4)—aspects around user and group permissions, isolation, encryption, ingress/SSL termination, secret and key management, continuous integration/continuous deployment (CI/CD) pipeline security and build/image validation.
Service and application connectivity considerations
Red Hat OpenShift Service Mesh extends Kubernetes, establishing programmable, application-aware network policies. These policies can be configured through a common control plane without changing application code. Using the Envoy service proxy deployed as a sidecar alongside each application pod, the mesh enforces these policies in the data plane. Working with both Kubernetes and traditional workloads, OpenShift Service Mesh provides:
Traffic management: controls traffic flow and API calls between services, makes calls more reliable, and makes the network more robust in the face of adverse conditions.
Service identity and security: provides services in the mesh with a verifiable identity and provides the ability to protect service traffic as it flows over networks of varying degrees of trustworthiness.
Policy enforcement: applies organizational policy to the interaction between services, ensuring that access policies are enforced and that resources are fairly distributed among consumers. Policy changes are made by configuring the mesh, not by changing application code.
Telemetry: gains understanding of the dependencies between services and the nature and flow of traffic between them, providing the ability to quickly identify issues.
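The traffic management capability above can be sketched with Istio resources. The following hypothetical example routes 90% of requests for a reviews service to version v1 and 10% to a newly rolled-out v2, without any application code changes:

```yaml
# Subsets map to pod labels so traffic can be split by version.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
# Weighted routing: shift a small share of traffic to the new version.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Raising the v2 weight over time gives a gradual, observable rollout of the new service version.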
Business connectivity considerations
APIs provide an interface into your business capabilities, allowing consumers to access services in a controlled manner. As a business-focused construct, APIs serve as contracts between service providers and service consumers. In the context of microservices and cloud native development, an API can be defined as a Kubernetes service packaged with a formal API contract to allow access across business domains or by external consumers.
Red Hat OpenShift API Management provides business policies and security considerations to the services in a Kubernetes cluster, including:
API rate limiting: Limits the number of requests reaching the API by enforcing restrictions per API URL path and method, using configured user/account plan limits.
Authentication: Provides a mechanism to uniquely identify the requester and only allows access to authenticated accounts. OpenShift API Management supports authentication with an API (user) key, an App ID and App Key combination, or OpenID Connect (OIDC) based on OAuth 2.0.
Authorization: Provides a way to control user/account access based on role. Beyond authentication, authorization inspects the user's profile to decide whether the user or group should have access to the requested resource. This is configured in OpenShift API Management by assigning users and accounts to specific plans. More fine-grained access control can be provided for OIDC-secured services by inspecting the JSON Web Token (JWT) shared by the identity provider and applying role check policies.
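In a mesh-integrated deployment, a comparable JWT role check can be expressed with Istio resources rather than through the OpenShift API Management admin console; the issuer, JWKS URL and role claim below are illustrative assumptions, not product defaults:

```yaml
# Validate JWTs issued by a hypothetical OIDC identity provider.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: orders-jwt
spec:
  selector:
    matchLabels:
      app: orders
  jwtRules:
  - issuer: "https://sso.example.com/auth/realms/api"
    jwksUri: "https://sso.example.com/auth/realms/api/protocol/openid-connect/certs"
---
# Only allow requests whose validated JWT carries the "orders-admin" role.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-require-role
spec:
  selector:
    matchLabels:
      app: orders
  action: ALLOW
  rules:
  - when:
    - key: request.auth.claims[roles]
      values: ["orders-admin"]
```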
Application connectivity with Red Hat
Red Hat provides a comprehensive solution stack to provide application connectivity for distributed services running in a Kubernetes environment.
Red Hat OpenShift
Although Kubernetes as a standalone open source project is an effective container management tool, its full potential as a hybrid cloud platform for enterprises is only realized by integrating an ecosystem of complementary cloud-native tools.
Red Hat OpenShift is a platform-agnostic enterprise Kubernetes platform for hybrid cloud environments, focused on developer experience and application security. The OpenShift ecosystem includes powerful tools for developer environments, application services, software-defined networking, storage, monitoring, third-party integrations, virtualization, security capabilities and cluster management.
OpenShift provides a consistent application platform for the management of existing, modernized and cloud-native applications that runs on any cloud environment, and a common abstraction layer across any infrastructure.
Red Hat OpenShift Service Mesh
Red Hat includes service mesh capabilities within OpenShift, installed via an OpenShift operator for simpler deployment. Based on a set of open source projects, Red Hat OpenShift Service Mesh brings together multiple open source technologies to provide a unified control plane for configuration, observability and management. It includes:
Istio: An open source project for integrating and managing traffic between services.
Jaeger: An open, distributed tracing system that tracks requests as they move between services.
Kiali: An open source project for viewing configurations, monitoring traffic and analyzing traces.
Envoy: An open source edge and service proxy that provides a universal data plane for traffic flowing between services.
OpenShift Service Mesh supports federation with multiple service meshes across the same cluster or multiple clusters. Istio gateways provide traffic management to and from the mesh through standalone Envoy proxies. Unlike Kubernetes Ingress, Istio gateways configure layer 4-6 load balancing properties, such as ports to expose and TLS settings, and bind to a virtual service. This lets you manage gateway traffic and application-layer policies like any other data plane traffic in an Istio mesh.
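A minimal sketch of such a gateway definition, with a hypothetical host and TLS credential name:

```yaml
# Istio Gateway bound to the standalone Envoy ingress gateway pods.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway   # label on the ingress gateway deployment
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                          # terminate TLS at the gateway
      credentialName: shop-example-com-tls  # certificate stored as a Secret
    hosts:
    - "shop.example.com"
```

A VirtualService then attaches to this gateway through its gateways field, so incoming traffic is routed to mesh services under the same policies as internal traffic.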
Red Hat OpenShift API Management
Red Hat OpenShift API Management is available as a managed cloud service on OpenShift. OpenShift API Management is based on the open source 3scale API Management and Keycloak projects. Using an API business model, OpenShift API Management provides the following:
Deploys, monitors and controls APIs throughout their entire life cycle
Creates policies governing environment security and usage
Uses existing identity management systems through a declarative policy without requiring custom code
Gains insight into health and use of APIs
Discovers and shares APIs by publishing to internal or external developer portals
OpenShift API Management provides a unified control plane for configuring and managing APIs, which can be deployed on one or multiple clusters. OpenShift API Management provides a set of NGINX-based gateways that can be deployed alongside the API endpoints to provide data plane proxying and policy enforcement of consumer traffic to services.
To work better with a service mesh microservices architecture, OpenShift API Management provides a WebAssembly (Wasm) extension that injects 3scale API Management configuration into the service mesh. The 3scale API Management Wasm extension connects to the OpenShift API Management control plane, defines API services and policies, and authorizes and reports data plane traffic to OpenShift API Management.
Applications running in a Kubernetes environment require a comprehensive connectivity solution that addresses both network and application concerns. The use of microservices architectures, API-centric development, and deployments across diverse cloud and datacenter infrastructure requires both service-to-service communication within clusters and communication with services outside of Kubernetes clusters. Thus, technology stacks should manage connectivity concerns in all directions: north/south and east/west.
OpenShift API Management and OpenShift Service Mesh provide comprehensive application connectivity on the Kubernetes platform. Together with OpenShift itself, they form well-integrated platforms that provide the right separation of concerns and comprehensive connectivity for developers building cloud-native applications.
The borders between clusters and cloud environments are blurring, with application connectivity spanning multiple network and application boundaries. We need to address application connectivity in a transparent manner across multicluster and cloud environments.
Building on the foundations of current technology, how do you drive seamless connectivity to applications moving across multiple cloud environments and clusters, while adapting existing application access control and networking rules, resolving service dependencies and preserving auditability and observability? To simplify connectivity for developers, declarative policies and configuration could be applied by the platform without changing application code.
In our next article, we look at the evolution of the technology stack to allow application connectivity management in a multicloud and multicluster environment.
About the authors
Satya Jayanti leads product strategy and delivery for application connectivity to containerized multicloud applications. He has over two decades of experience in enterprise integration, Java, middleware and application development. Jayanti is passionate about technology, and loves sharing his knowledge through webinars, workshops and in conferences.
Joe O'Reilly is a technology specialist with over 20 years' experience in the software industry. He heads up Cloud Application Services Product Management with a focus on data streaming and eventing infrastructure, application connectivity and developer services.
Bilgin Ibryam is a product manager and a former architect at Red Hat. He is a regular blogger, open source evangelist, speaker, and the author of Camel Design Patterns and co-author of Kubernetes Patterns books. He has over a decade of experience in designing and building scalable and resilient distributed systems. Follow him @bibryam for regular updates on these topics.