In this post:
- Understand how pipelines can be used to automate infrastructure or platform lifecycle management within a service provider’s environment
- Learn how pipelines play a key role in delivering frequent and reliable software deployments and upgrades for smoother day 2 operations
- Find out how new versions of vendor software can be obtained and validated for use based on a service provider’s current configuration
- Read how the Red Hat OpenShift hosted control plane capability is used to facilitate software testing
In Pipelines for cloud-native network functions (CNFs) Part 1: Pipelines for onboarding CNFs, I described how modern telecommunications processes, including infrastructure as code (IaC), development and operations (DevOps), development, security and operations (DevSecOps), network operations (NetOps) and GitOps, use pipelines to achieve automation, consistency and reliability.
In this article, I discuss how pipelines can be used for lifecycle management (LCM) of software components of an infrastructure or platform. The key objective is to achieve more frequent and reliable deployments and upgrades of the associated infrastructure or platform. This helps to accelerate the adoption of new software that matches a service provider’s requirements. The lack of service provider adoption of this type of pipeline is one reason why day 2 operations of cloud-native deployments are often challenging.
Pipelines for LCM
Design begins with the definition of stages to obtain new versions of software components. The level of design complexity depends on the particular requirements of the software vendor, ranging from a simple script to an advanced automation workflow. The automated workflow would interact with APIs provided by the vendor to obtain the software and mirror it locally within the service provider’s environment.
As an example, I will describe the concept of a pipeline for identifying available Red Hat OpenShift releases. The pipeline has to determine whether a specific release is valid for deployment on the service provider’s current configuration. The pipeline’s goal is to achieve reliable upgrades of air-gapped OpenShift clusters (that is, clusters with no connection to the internet or with restrictions on accessing online vendor registries).
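The release-discovery stage of such a pipeline can be sketched as a query against the public OpenShift update service graph. The following is a minimal Python sketch, assuming the cluster's current version and update channel are already known; the channel and version values are illustrative, and in an air-gapped environment the same request would be made against a locally mirrored update service instead of the public endpoint.

```python
import requests

GRAPH_URL = "https://api.openshift.com/api/upgrades_info/v1/graph"

def upgrade_targets(channel: str, current_version: str) -> list[str]:
    """Return the release versions reachable from current_version in a channel."""
    resp = requests.get(
        GRAPH_URL,
        params={"channel": channel},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    graph = resp.json()

    nodes = graph["nodes"]   # each node: {"version": ..., "payload": <release pullspec>, ...}
    edges = graph["edges"]   # each edge: [from_index, to_index]

    current = next(i for i, n in enumerate(nodes) if n["version"] == current_version)
    return sorted(nodes[to]["version"] for frm, to in edges if frm == current)

if __name__ == "__main__":
    # Example: list valid upgrade targets for a cluster currently on 4.7.13
    for version in upgrade_targets("stable-4.7", "4.7.13"):
        print(version)
```

Each version returned this way becomes a candidate for the mirroring pipeline described next.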
As depicted below, using OpenShift 4.7 as a baseline, an OpenShift release is supported for up to 18 months. During that time, micro versions or "z-releases" are provided with security patches and bug fixes.
Pipeline design has to account for every upgrade combination the service provider chooses to support. For this example, and as depicted below, I am only considering a simple pipeline used for micro updates or for upgrades from one OpenShift release to the next.
There is considerable overlap between these pipelines and the pipelines for onboarding CNFs that were identified in part 1. The initial pipeline (A) is responsible for mirroring a new OpenShift version to the service provider’s registry. Once the new version is mirrored, the pipeline creates a new candidate version to be used by the subsequent pipelines.
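A minimal sketch of the mirroring stage in pipeline A is shown below. It shells out to `oc adm release mirror`; the internal registry hostname and repository path are assumptions for illustration, and a real pipeline would read them from the service provider's configuration (or use the `oc-mirror` plugin instead).

```python
import subprocess

# Assumed internal registry for this sketch; a real pipeline would take this
# from the service provider's configuration.
LOCAL_REGISTRY = "registry.example.internal:5000/ocp4/openshift-release"

def mirror_release(version: str, arch: str = "x86_64") -> str:
    """Mirror an OpenShift release payload into the local registry and
    return the local pullspec used as the new candidate version."""
    source = f"quay.io/openshift-release-dev/ocp-release:{version}-{arch}"
    target = f"{LOCAL_REGISTRY}:{version}-{arch}"

    subprocess.run(
        [
            "oc", "adm", "release", "mirror",
            f"--from={source}",
            f"--to={LOCAL_REGISTRY}",
            f"--to-release-image={target}",
        ],
        check=True,  # fail this pipeline stage if the mirror operation fails
    )
    return target

if __name__ == "__main__":
    candidate = mirror_release("4.7.13")
    print(f"New candidate release mirrored to {candidate}")
```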
Best practice dictates that software artifacts provided by any vendor, or generated outside of the service provider’s environment, should be scanned and security-checked. The next pipeline (B) provides a baseline for comparison and can also be used for audit purposes. The service provider’s vetting process is where the actual validation and verification work happens (C and D). To understand these steps, a different representation of these pipelines is depicted below:
First, an ephemeral cluster is created. As I discussed in part 1, the OpenShift hosted control plane capability is an ideal approach for instantiating an ephemeral cluster for this particular purpose.
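As a rough illustration of this stage, the sketch below creates and later removes an ephemeral hosted cluster by shelling out to the hosted control planes (`hcp`) CLI. The cluster name, pull secret path, node count, and the exact subcommand and flags are assumptions for illustration only; they vary by `hcp` version and by the platform hosting the control plane, so treat this as a sketch rather than a working recipe.

```python
import subprocess

# Assumed values for this sketch; a real pipeline would derive them from the
# candidate version under test and the service provider's configuration.
CLUSTER_NAME = "lcm-ephemeral"
PULL_SECRET = "/var/run/secrets/pull-secret.json"
CANDIDATE_RELEASE = "registry.example.internal:5000/ocp4/openshift-release:4.7.13-x86_64"

def create_ephemeral_cluster() -> None:
    """Create a short-lived hosted cluster used only to validate a candidate release.
    The subcommand and flags below are illustrative; check `hcp create cluster --help`
    for the options supported by your hcp version and platform."""
    subprocess.run(
        [
            "hcp", "create", "cluster", "kubevirt",
            "--name", CLUSTER_NAME,
            "--pull-secret", PULL_SECRET,
            "--release-image", CANDIDATE_RELEASE,
            "--node-pool-replicas", "2",
        ],
        check=True,
    )

def destroy_ephemeral_cluster() -> None:
    """Tear the ephemeral cluster down once all validation stages have passed,
    freeing its resources for future pipeline runs."""
    subprocess.run(
        ["hcp", "destroy", "cluster", "kubevirt", "--name", CLUSTER_NAME],
        check=True,
    )
```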
The initial stages of these two pipelines differ. One pipeline (C) is responsible for deploying the new OpenShift candidate version onto the ephemeral cluster. The other pipeline (D) executes an upgrade procedure on an ephemeral cluster running the previously accepted version (a sketch of this upgrade trigger follows the list below). After these initial stages, the remaining pipeline stages execute the following steps:
- Validation of adherence to the service provider’s policies
- A defined functional and integration test
- A load test to create a baseline associated with the new OpenShift candidate version
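For pipeline D, the upgrade itself can be triggered against the ephemeral cluster with `oc adm upgrade`. The sketch below assumes the pipeline stage has a kubeconfig for the ephemeral cluster and points it at the mirrored candidate pullspec; `--allow-explicit-upgrade` is used because the image is referenced directly rather than through an update channel, and depending on how release signatures are mirrored the cluster may need additional signature configuration (or `--force`).

```python
import os
import subprocess

# Assumed inputs for this sketch: the ephemeral cluster's kubeconfig and the
# mirrored candidate release produced by pipeline A.
KUBECONFIG = "/workspace/ephemeral-cluster/kubeconfig"
CANDIDATE_RELEASE = "registry.example.internal:5000/ocp4/openshift-release:4.7.13-x86_64"

ENV = {**os.environ, "KUBECONFIG": KUBECONFIG}

def trigger_upgrade() -> None:
    """Point the ephemeral cluster at the mirrored candidate release."""
    subprocess.run(
        [
            "oc", "adm", "upgrade",
            "--to-image", CANDIDATE_RELEASE,
            "--allow-explicit-upgrade",
        ],
        check=True,
        env=ENV,
    )

def wait_for_upgrade(timeout: str = "90m") -> None:
    """Block until the ClusterVersion stops progressing, i.e. the upgrade is complete."""
    subprocess.run(
        [
            "oc", "wait", "clusterversion/version",
            "--for=condition=Progressing=False",
            f"--timeout={timeout}",
        ],
        check=True,
        env=ENV,
    )
```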
When all the tests and validations are successful, the ephemeral cluster is removed and resources are made available for future use.
The next pipeline (E) tags and labels the software components as a baseline for new clusters defined in the environment. In many service provider environments, this final stage mirrors the software artifact into internal repositories where it can be consumed by other specialized pipelines.
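As one possible implementation of this final stage, the sketch below promotes the validated candidate by copying it to an accepted tag with `skopeo copy`. The registry path and tag naming convention are assumptions for illustration; `oc image mirror` or a registry API call would work equally well.

```python
import subprocess

# Assumed internal registry layout for this sketch.
LOCAL_REGISTRY = "registry.example.internal:5000/ocp4/openshift-release"

def promote_candidate(version: str, arch: str = "x86_64") -> None:
    """Re-tag a validated candidate release as the accepted baseline so that
    cluster deployment and upgrade pipelines pick it up."""
    source = f"docker://{LOCAL_REGISTRY}:{version}-{arch}"
    target = f"docker://{LOCAL_REGISTRY}:accepted-{arch}"
    subprocess.run(["skopeo", "copy", source, target], check=True)

if __name__ == "__main__":
    promote_candidate("4.7.13")
```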
How Red Hat can help
Lifecycle management enables more frequent and reliable deployments and upgrades of the associated infrastructure or platforms. Using pipelines to deploy and validate software releases and any incremental versions, and to check that the software adheres to a service provider’s policies, is important for consistency, reliability and faster time to market.
Service providers that use Red Hat OpenShift will benefit from simplified workflows that can be used to optimize their operational model and reduce their total cost of ownership (TCO).
The adoption of pipelines can be achieved with Red Hat OpenShift Pipelines, a Kubernetes-native CI/CD solution based on Tekton. It builds on Tekton to provide a CI/CD experience through tight integration with Red Hat OpenShift and Red Hat developer tools. Red Hat OpenShift Pipelines is designed to run each step of the CI/CD pipeline in its own container, allowing each step to scale independently to meet the demands of the pipeline.
To implement best practices and adherence to a service provider’s own security requirements, Red Hat Advanced Cluster Security for Kubernetes provides built-in controls for security policy enforcement across the entire software development cycle, helping reduce operational risk and increase developer productivity. Safeguarding cloud-native applications and their underlying infrastructure prevents operational and scalability issues and helps organizations keep pace with increasingly rapid release schedules.
In my next post I will cover pipelines for multitenant end-to-end integrations and how they are used in conjunction with the pipelines described here to capture incompatibility and other issues before they are adopted into a service provider’s production environment.
About the author
William is a Product Manager in Red Hat's AI Business Unit and is a seasoned professional and inventor at the forefront of artificial intelligence. With expertise spanning high-performance computing, enterprise platforms, data science, and machine learning, William has a track record of introducing cutting-edge technologies across diverse markets. He now leverages this comprehensive background to drive innovative solutions in generative AI, addressing complex customer challenges in this emerging field. Beyond his professional role, William volunteers as a mentor to social entrepreneurs, guiding them in developing responsible AI-enabled products and services. He is also an active participant in the Cloud Native Computing Foundation (CNCF) community, contributing to the advancement of cloud native technologies.