In this post:
Understand how outputs from existing pipelines can feed another pipeline that validates a new platform release against existing applications and CNFs.
Learn how end-to-end testing regimes generate a baseline and metrics that service providers can use to assess the performance of existing applications and CNFs on a new platform version.
Find out how to achieve the continuous adoption of new releases of the platform while maintaining the stability of deployed services.
In two previous posts, I discussed the use of pipelines for cloud-native network functions (CNFs).
In this article, I discuss how to use the outputs from the previous pipelines and combine them to achieve automation, consistency and reliability of Day 2 operations at scale.
A pipeline can be used to validate the outputs of other pipelines, proving operational readiness for use within a service provider’s production environment. I will use the two previous pipelines as examples:
A new Red Hat OpenShift version accepted by the lifecycle management (LCM) pipelines.
Deployment of various applications and CNFs within the service provider environment as a result of the onboarding pipelines.
When a new version of OpenShift has been accepted by the service provider’s lifecycle management pipelines, the end-to-end combination of applications and CNFs that have been accepted by the service provider’s onboarding pipelines needs to be tested and validated. This is achieved using multi-tenant end-to-end integration pipelines, as depicted below. This pipeline illustrates the concept and is not intended to represent any final configuration or definition of this type of pipeline.
Once a new OpenShift cluster version is identified as accepted, all the applications and CNFs that must work together are identified (A). This serves as the input for an ephemeral cluster with all the configurations validated by the lifecycle management pipeline (B). All the applications and CNFs that share a cluster or a multi-tenant cluster in the service provider’s production environment are onboarded into the ephemeral cluster. This pipeline (B) validates that there are no conflicts among the configurations or custom resource definitions (CRDs).
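The conflict check in step (B) can be sketched as a simple set comparison: each onboarded application or CNF declares the CRDs it installs, and the pipeline flags any CRD claimed by two tenants at different versions. This is an illustrative sketch only, not an actual pipeline task; the data structures, tenant names and CRD names are all hypothetical.

```python
# Hypothetical sketch of the CRD-conflict check in step (B).
# Each tenant (application/CNF) declares the CRDs it installs;
# a conflict exists when two tenants install the same CRD at
# different versions.

def find_crd_conflicts(tenants: dict[str, dict[str, str]]) -> list[str]:
    """tenants maps tenant name -> {crd_name: crd_version}."""
    seen: dict[str, tuple[str, str]] = {}  # crd_name -> (tenant, version)
    conflicts = []
    for tenant, crds in tenants.items():
        for crd, version in crds.items():
            if crd in seen and seen[crd][1] != version:
                other, other_version = seen[crd]
                conflicts.append(
                    f"{crd}: {other}={other_version} vs {tenant}={version}"
                )
            else:
                seen.setdefault(crd, (tenant, version))
    return conflicts

# Hypothetical tenants sharing one multi-tenant cluster:
tenants = {
    "cnf-a": {"upfdeployments.example.com": "v1"},
    "cnf-b": {"upfdeployments.example.com": "v2",
              "smfconfigs.example.com": "v1"},
}
print(find_crd_conflicts(tenants))
# → ['upfdeployments.example.com: cnf-a=v1 vs cnf-b=v2']
```

In a real pipeline this declaration data would come from the onboarding pipeline's manifests rather than an inline dictionary, but the gating logic is the same: an empty conflict list is a precondition for the tests that follow.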
Once validated, cross-tenant automated functional testing verifies compatibility among the applications and CNFs that are expected to work together. The pipeline (B) then executes an end-to-end scalability test and generates a baseline for the OpenShift release with the specific combination of applications and CNFs. This baseline serves as a comparison point between existing version combinations and the new version. It helps the service provider maintain a metric to compare improvement or degradation among versions and combinations of applications and CNFs.
With scalability validated, the new cluster version and combination of applications and CNFs are ready for production (C) and the service provider can set the deployment of any future OpenShift cluster to use this new version.
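Step (C) amounts to a promotion gate: the new version becomes the default for future cluster deployments only when the integration pipeline reported no conflicts and no degraded metrics. A minimal sketch, assuming the conflict list and metric verdicts produced earlier in the pipeline; the version strings and record structure are hypothetical.

```python
# Hypothetical promotion gate for step (C): promote the candidate
# OpenShift version only if the integration pipeline found no CRD
# conflicts and no degraded metrics; otherwise keep the current
# proven version as the default for future clusters.

def promote_release(current_default: str,
                    candidate: str,
                    crd_conflicts: list[str],
                    metric_verdicts: dict[str, str]) -> str:
    """Return the version future clusters should deploy."""
    degraded = [m for m, v in metric_verdicts.items()
                if v.startswith("degraded")]
    if crd_conflicts or degraded:
        return current_default  # keep serving the proven version
    return candidate

# Clean run: no conflicts, all metrics within tolerance.
print(promote_release("4.14.8", "4.15.2",
                      crd_conflicts=[],
                      metric_verdicts={"p99_latency_ms": "within tolerance"}))
# → 4.15.2
```

In a GitOps model this decision would typically be recorded as a commit to the repository that declares the default cluster version, which is what makes the promotion auditable.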
The multi-tenant end-to-end integration pipelines allow the service provider to move to the continuous adoption of new releases of the platform while maintaining the stability of deployed services. When combining the types of pipelines described in this three-part series, the service provider benefits from the automation, consistency and reliability of a modern process while maintaining the availability and stability of the services provided to its end customers.
These pipelines serve as gatekeeping processes for service provider production environments. When combined with the GitOps operational model, their benefits extend to Day 2 operations with granular auditability and control as the outputs of these pipelines are brought into production environments.
Closing remarks and where Red Hat can help
In this three-part series, I have discussed how the use of pipelines can achieve automation and greater consistency and reliability of telecommunications service provider processes. These processes include Infrastructure as Code (IaC); development and operations (DevOps); development, security and operations (DevSecOps); network operations (NetOps); and GitOps.
In part one, I discussed the use of pipelines to onboard applications and the benefit of a digital twin to mitigate the risks of software deployment and to better meet compatibility and compliance requirements of existing service provider platforms. The digital twin concept can be achieved using the OpenShift hosted control plane capability, where a dedicated cluster is used to onboard applications and CNFs.
In part two, I discussed the use of pipelines for lifecycle management and how they facilitate the frequent and more reliable deployment and upgrade of a service provider’s infrastructure or platform while checking if the software adheres to their governance policies. Red Hat Advanced Cluster Security for Kubernetes helps to safeguard both applications and the underlying infrastructure with built-in security enforcement that reduces operational risk.
In this post, I have discussed how the outputs of the pipelines described in parts one and two can be used to feed a new pipeline. This pipeline validates a particular OpenShift release against a specific set of onboarded applications and CNFs, allowing service providers to compare performance between versions.
Red Hat OpenShift simplifies and accelerates the delivery and lifecycle management of applications and CNFs consistently across any cloud environment, and supports continuous innovation and speed for application delivery at scale. With Red Hat OpenShift Pipelines and Tekton, service providers benefit from a CI/CD experience through tight integration with Red Hat OpenShift and Red Hat developer tools, with each step scaling independently to meet the demands of the pipeline.