In the previous post we took a look at the evolution of Software Defined Networking (SDN) and the role it plays for communication service providers. We explored all the way up to the virtualization of network infrastructure, OpenStack, Open vSwitch (OVS) and more. In this post we're going to look at networking, containers and container orchestration.
Containers are not new. Depending on how you define "containers," they can date back to 2008 with LXC, or even further if you consider things like Solaris Zones or good old-fashioned chroots to be containers. Containers are a way of isolating processes and applications from the rest of the system; they are "contained" by a number of mechanisms we won't go into in this post. "What's a Linux container?" is a good read if you want to know more about the history, the technologies and the state of the art.
Yes, we said paradigm
The short of it is that containers provide a different paradigm than virtual machines: a container runs on the host operating system, completely isolated and very lightweight, whereas a virtual machine runs its own guest operating system.
At first, Docker made the consumption and management of containerized applications easy for developers. The standardization around containers started with an easier way to build, consume and run applications inside containers. It did not, however, scale well for larger multi-container applications and scenarios with hundreds or thousands of containers in production.
Then Kubernetes entered the picture about five years ago. It has helped reshape how many organizations build, manage and operate software at scale.
Kubernetes made container orchestration and management "trivial." It handles the automation, scaling and management of containerized applications in a fully declarative manner: you define what you want, and Kubernetes works to make it a reality (recall the notion of state in SDN).
At a very high level, it's a platform abstracting network, compute and storage, making it easier for developers to simply deploy their application without caring about the underlying plumbing that will make it accessible, resilient, secure, and stateful when required.
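This declarative model is easiest to see in a manifest. As a minimal sketch (the application name, labels and image below are hypothetical placeholders), you declare the desired state and Kubernetes continuously reconciles the cluster toward it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                 # hypothetical application name
spec:
  replicas: 3                     # desired state: three copies, always running
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: registry.example.com/hello-web:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

If a node dies and takes a pod with it, Kubernetes notices the drift from the declared state and schedules a replacement, with no imperative intervention from the operator.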
Some in the telco industry see Kubernetes as a way to deliver on the promise of NFV, with Kubernetes as the next-gen platform that streamlines the management and life cycle of network functions. A vendor-agnostic platform with this much industry momentum is a potential way to achieve part of that promise: a common, shared infrastructure to host network functions.
But network equipment vendors now need to make their applications cloud native, which brings a whole new level of challenge: it is effectively a complete rewrite of the software stack that runs on their traditional, custom-built hardware.
In addition to its orchestration capability, Kubernetes has given rise to a "new" paradigm for managing its underlying network: the service mesh.
A service mesh is a dedicated infrastructure layer for making service-to-service communication safer, faster, and more reliable. It is built on the separation of the control plane and the forwarding plane, the same methodology used in SDN, with one main difference: SDN operates at the lower layers (L2 to L4) of the Open Systems Interconnection (OSI) model, whereas a service mesh operates at L7.
In most implementations, the service mesh deploys a sidecar proxy alongside every pod in an application namespace, handing all network control off to the mesh. A central control plane then configures those proxies, altering the behavior of the underlying network based on a declarative configuration. The service mesh also keeps a full inventory of the services within its realm, making it easier for an application to connect to a discovered service.
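In Istio, one popular service mesh implementation, even handing a namespace over to the mesh is declarative: a label on the namespace tells the control plane to inject a sidecar proxy into every pod scheduled there (the namespace name below is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: storefront                # hypothetical application namespace
  labels:
    istio-injection: enabled      # Istio injects an Envoy sidecar into every pod here
```

From that point on, all traffic in and out of those pods flows through the sidecar proxies, which the mesh control plane configures centrally.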
Service mesh has proven very convenient for developers managing application delivery (e.g., A/B testing, circuit breaking, traffic mirroring). Developers also no longer have to worry about authentication, authorization, or API security themselves.
Those concerns become part of the service mesh configuration, backed by the platform's Role-based Access Control (RBAC) and certificate management capabilities. These features can drastically reduce the complexity of application delivery, letting developers focus only on the value their application adds.
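As a sketch of what such configuration can look like in Istio (the service name, subsets and weights below are made up for illustration), an A/B or canary split is a few lines of declarative YAML sending 90% of traffic to one version and 10% to another:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-web
spec:
  hosts:
  - hello-web                  # in-mesh service name (hypothetical)
  http:
  - route:
    - destination:
        host: hello-web
        subset: v1
      weight: 90               # 90% of traffic to the current version
    - destination:
        host: hello-web
        subset: v2
      weight: 10               # 10% to the candidate version
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hello-web
spec:
  host: hello-web
  subsets:                     # map subsets to pod labels
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

To shift traffic, an operator changes a weight and the control plane pushes the new routing to every sidecar proxy; the application itself is untouched.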
What service mesh can do for telco
One of the reasons the core, mobile and access networks are so complex is the number of variations they need to support, from vendors to services, SLAs, QoS, QoE, resiliency and more. They also have a lot of legacy equipment and services to keep supporting.
Integrating a service mesh that provides an overall service registry, and abstracts the management of services, security, QoS and so on in a declarative manner at L7, is very appealing for communication service providers.
The underlying network could then be streamlined to become less feature rich: it would only have to provide connectivity. The number of features configured in the network elements could be drastically reduced, since they would instead be provided through the service mesh at the application layer. This moves complexity up the OSI model, but makes the whole simpler to manage and operate.
The need for declarative interoperable network programmability and automation is common in both SDN and service mesh deployments.
As the telecommunication industry pursues 5G and edge computing, orchestration and control of the network remains a very hot topic; the infrastructure now lives in the public cloud, in global or local telco datacenters, in telco points of presence, at radio sites, or on customer premises.
Beyond the technologies, which keep evolving, there are many challenges ahead, from both a performance and capability standpoint and in support and operations. As the overall infrastructure stack grows more complex, operations must be simplified.
One element of success is a robust overall hybrid cloud foundation and platform that can standardize and integrate connectivity, security, application development, operations and management across the overall infrastructure footprint.
At Red Hat, we strive to be the open hybrid cloud leader, and we are committed to providing the horizontal platform to integrate your overall cloud footprint. Find out more about our overall strategy here.
To help you get started building a strong hybrid cloud and Kubernetes architecture, check out our hybrid cloud and Kubernetes e-book, which arms you with industry-proven best practices.
About the author
Alexis de Talhouët is a Senior Solutions Architect at Red Hat, supporting North America telecommunication companies. He has extensive experience in the areas of software architecture and development, release engineering and deployment strategy, hybrid multi-cloud governance, and network automation and orchestration.