Typically, when a new technology or pattern emerges, different approaches compete to determine how it best fits into a transformed model. Proponents of traditional methods often meet resistance from eager early adopters, who may discard lessons learned from foundational practices.
The tension between established orchestration approaches and GitOps methodologies illustrates this inflection point.
While it would be an oversimplification to assume that one approach can entirely replace the other, in this post we’ll examine the strengths and weaknesses of each and find a balance between the two.
Closed-loop orchestration
Traditional closed-loop orchestration refers to automating the complete life cycle of infrastructure, platforms, and applications: deploying them, monitoring them, and correcting issues as they are detected throughout the lifetime of the managed element.
This orchestration solution continuously receives events, alarms, metrics, and logs from the orchestrated elements. Specialized controller elements analyze these data sources and compare them to the predefined policies. The policy controller identifies policy violations and targets them for remediation, triggering the corrective cycle; the remedial action, driven by variance from the desired state, completes the feedback loop.
These concepts map closely to capabilities native to Kubernetes, which implements closed-loop orchestration for containers. Kubernetes controllers (replication, endpoint, and so on) are control loops aware of both the “desired state” and the “current state,” with the knowledge needed to remediate deviations.
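To make the loop concrete, here is a minimal sketch in Go of the “desired state” versus “current state” reconciliation described above. The `ManagedElement` interface, its methods, and the polling interval are illustrative assumptions for this post, not an actual orchestrator or Kubernetes API:

```go
package closedloop

import (
	"fmt"
	"time"
)

// State is a simplified representation of an artifact's configuration.
type State map[string]string

// ManagedElement is a hypothetical interface for anything the closed loop
// watches and acts on (an application, a platform component, an
// infrastructure element, and so on).
type ManagedElement interface {
	DesiredState() State        // policy: what the element should look like
	CurrentState() State        // observed: events, metrics, and status rolled up
	Remediate(diff State) error // corrective action for the detected deviation
}

// reconcile compares desired and current state and triggers remediation,
// mirroring the feedback loop a closed-loop orchestrator (or a Kubernetes
// controller) runs continuously.
func reconcile(e ManagedElement) {
	desired, current := e.DesiredState(), e.CurrentState()
	diff := State{}
	for key, want := range desired {
		if current[key] != want {
			diff[key] = want // policy violation: current state deviates from desired
		}
	}
	if len(diff) == 0 {
		return // no deviation, nothing to do
	}
	if err := e.Remediate(diff); err != nil {
		fmt.Println("remediation failed, will retry on the next cycle:", err)
	}
}

// RunLoop keeps the feedback loop going for the lifetime of the managed element.
func RunLoop(e ManagedElement, interval time.Duration) {
	for {
		reconcile(e)
		time.Sleep(interval)
	}
}
```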
Policies and configurations
Traditional orchestrators assume they are the source of truth, governance, and enforcement for the configuration policies of managed artifacts (e.g., application components, infrastructure elements, and platform stacks).
In contrast, Kubernetes challenges the assumptions and role of the traditional orchestrator. The etcd database is the source of truth for the configuration and desired state of any artifact defined on or for the platform. Kubernetes controllers maintain a closed-loop cycle in which events and metrics are analyzed to drive tasks that remedy deviations.
These two models might not seem compatible. There cannot be two different sources of truth, nor two closed-loop remediation cycles acting over the same artifacts. However, these two models complement each other, as illustrated:
When examining the closed-loop “incompatibilities” between a traditional orchestrator and Kubernetes, the principal differentiators are the source of truth for configurations and policies, and which component acts on the managed artifacts.
Kubernetes is a domain-specific closed-loop orchestrator for containers, pods, microservices, and artifact configurations, but it still depends on external ancillary services (e.g., DNS, CA, and DHCP servers, artifact repositories, container registries, and network infrastructure configurations).
Traditional closed-loop orchestration handles the preparation of infrastructure and ancillary services (and the definition of the Kubernetes cluster). The role the orchestrator performs in the Kubernetes platform deployment depends directly on the Kubernetes platform and its associated tools.
Kubernetes-native capabilities replace many tasks traditionally expected from the external orchestrator. When Kubernetes Operators are used for the platform and its plugins, the platform, including the operating system (OS), is managed through specialized Kubernetes controllers.
The same is the case for OpenShift with the OpenShift Cluster Operators, Kubernetes Operators for Applications, and Kubernetes Operators for Plugins and platform extensions. In OpenShift, the source of truth for the desired state and configuration of the OS, platform, applications, and Kubernetes artifacts is maintained in the platform etcd database.
In this instance, a valid deployment could have the external closed-loop orchestrator interact directly with the Kubernetes and OpenShift application programming interfaces (APIs) to set new desired states or modify artifact configurations. However, in a scenario where artifacts or configurations can be modified by the orchestrator, or even by intelligence in the Kubernetes Operators, what holds the single source of truth if the cluster needs to be rebuilt?
This is where the GitOps model comes in.
The GitOps model
The GitOps model uses a Git repository as the single source of truth representing the desired state of the infrastructure, platform, and applications. This definitive repository is used by continuous delivery pipelines and closed-loop orchestration and automation flows to automatically synchronize the “current state” with the “desired state.”
GitOps controllers are tools that receive or detect notifications of changes in Git, interpret them as the policies for the “desired state,” and act on the managed artifacts. The GitOps model is highly flexible and applicable in many contexts and configurations. For example:
- Infrastructure: orchestrator-driven triggers of continuous delivery (CD) pipelines, or workflows invoked in a tool like Red Hat Ansible Automation Platform.
- Within the Red Hat OpenShift Container Platform: the GitOps controller can be a tool like ArgoCD (part of the OpenShift GitOps Operator). When multiple OpenShift clusters are managed with the GitOps pattern, Red Hat Advanced Cluster Management for Kubernetes provides native GitOps capabilities, integrating with GitOps channels from Git, Helm release registries, or object storage repositories.
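At their core, all of these GitOps controllers run the same synchronization pass: compare the revision recorded in Git with what the cluster last applied, and push the desired-state manifests when they differ. The following Go snippet is a rough sketch of that pass; the `GitRepo` and `Cluster` interfaces are hypothetical simplifications, not the API of ArgoCD or any other real tool:

```go
package gitops

import "log"

// GitRepo and Cluster are stand-ins for a Git client and a cluster API;
// real GitOps controllers add health checks, pruning, sync waves, and more.
type GitRepo interface {
	HeadRevision() (string, error)               // current commit of the tracked branch
	Manifests(revision string) ([]string, error) // rendered desired-state manifests at that revision
}

type Cluster interface {
	AppliedRevision() string        // revision the cluster was last synced to
	Apply(manifests []string) error // push the desired state to the cluster API
}

// Sync is one pass of the GitOps loop: Git is the single source of truth,
// and the controller's only job is to make the cluster match it.
func Sync(repo GitRepo, cluster Cluster) error {
	rev, err := repo.HeadRevision()
	if err != nil {
		return err
	}
	if rev == cluster.AppliedRevision() {
		return nil // current state already matches desired state
	}
	manifests, err := repo.Manifests(rev)
	if err != nil {
		return err
	}
	log.Printf("drift detected, syncing cluster to revision %s", rev)
	return cluster.Apply(manifests)
}
```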
With the domain-specific specialization of GitOps controllers, the need for coordination between multiple GitOps controllers became apparent. This evolved into the GitOps app-of-apps pattern, where a parent GitOps controller deploys and manages multiple instances of GitOps controllers, each focusing on a particular task or domain.
With this GitOps app-of-apps pattern, supported by tools like ArgoCD and Red Hat Advanced Cluster Management, powerful combinations are possible. For example, Red Hat Advanced Cluster Management Application resources (RHACM Apps) can be used in app-of-apps configurations consisting of a mix of an RHACM App of RHACM Apps, an RHACM App of ArgoCD Apps, or other nested setups such as an RHACM App of Helm Apps. A more recent evolution of the app-of-apps pattern is the ApplicationSet, which extends the concept to multi-cluster configurations.
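Conceptually, the app-of-apps pattern reduces to a parent controller that makes sure a set of child GitOps applications exists and points at the right Git sources. The sketch below illustrates that idea in Go; the `ChildApp` fields and the `AppDeployer` interface are illustrative only and do not mirror the actual RHACM or ArgoCD resource schemas:

```go
package gitops

// ChildApp describes one domain-specific GitOps application managed by a
// parent ("app-of-apps") controller. Field names are illustrative.
type ChildApp struct {
	Name     string // e.g. "infrastructure", "networking", "workloads"
	RepoURL  string // Git repository holding this domain's desired state
	Path     string // path within the repository
	Revision string // branch, tag, or commit to track
}

// AppDeployer abstracts "create or update this application" against whichever
// GitOps controller is in use.
type AppDeployer interface {
	EnsureApp(app ChildApp) error
}

// ReconcileParent walks the parent application's child definitions and makes
// sure each child GitOps application exists and points at the right source.
func ReconcileParent(children []ChildApp, deployer AppDeployer) error {
	for _, child := range children {
		if err := deployer.EnsureApp(child); err != nil {
			return err
		}
	}
	return nil
}
```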
The evolving role of the orchestrator
What role does the traditional closed-loop orchestrator play in a GitOps-driven operation? It needs to become a composable GitOps controller for closed-loop automation applied to infrastructure and ancillary services. For platform configurations, the orchestrator simply updates the Git repository, and the corresponding GitOps controller takes care of reconciling the “current state” with the “desired state” and remediating any deviation.
In an ideal scenario, instead of using proprietary formats in a database, the external closed-loop orchestrator also uses the Git repository as the versioned single source of truth for configuration and artifact definitions.
In practice, reaching this ideal state requires integrating the traditional orchestrator’s database into GitOps-driven operations. The external orchestrator writes artifact definitions and configurations into the Git repository, which the corresponding GitOps controller consumes to execute the needed reconciliation cycles. To track “desired state” progress or status, the orchestrator must consult the API exposed by the GitOps controller (or by the parent GitOps controller when the app-of-apps pattern or ApplicationSets are in use).
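A sketch of that handshake might look like the following: the orchestrator commits the artifact definition to Git and then polls the GitOps controller’s status API until the new revision is reported as synced. `GitWriter`, `StatusAPI`, and the timing values are assumptions made for illustration, not a real orchestrator or controller API:

```go
package gitops

import (
	"errors"
	"time"
)

// GitWriter and StatusAPI stand in for the orchestrator's Git integration and
// the status API exposed by a GitOps controller (or by the parent controller
// when app-of-apps or ApplicationSets are in use).
type GitWriter interface {
	CommitFile(path string, content []byte, message string) (revision string, err error)
}

type StatusAPI interface {
	SyncedRevision(app string) (string, error) // revision the app has been reconciled to
}

// PushAndWait is the orchestrator's side of the handshake: write the artifact
// definition to Git (the single source of truth), then poll the GitOps
// controller until the cluster reports that revision as synced.
func PushAndWait(git GitWriter, status StatusAPI, app, path string, definition []byte) error {
	rev, err := git.CommitFile(path, definition, "update desired state for "+app)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		if synced, err := status.SyncedRevision(app); err == nil && synced == rev {
			return nil // desired state reached
		}
		time.Sleep(15 * time.Second)
	}
	return errors.New("timed out waiting for " + app + " to reach revision " + rev)
}
```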
As illustrated, the adoption of the GitOps model evolves the role of the traditional closed-loop orchestrator. The orchestrator uses Git to communicate the desired state, and Git becomes the versioned single source of truth for artifact definitions and desired state. The orchestrator then consumes APIs from the GitOps controllers to track the status and progress of an artifact’s desired state.
In the event of catastrophic failure, a new cluster can be created and the GitOps controllers will bring it to the last known good state of the previous cluster. Because artifact definitions and configurations are kept under version control, there is an exact record of the previously used (and working) configurations. Restoration or “rollback” requires only importing the correct version of the configuration or artifact definition from Git.
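In the GitOps model, that rollback amounts to re-pointing the application at an earlier, known-good Git revision and letting the controller reconcile the cluster back to it. A minimal illustrative sketch follows, where `TargetSetter` is a hypothetical hook rather than a real controller API:

```go
package gitops

// TargetSetter is a hypothetical hook into the GitOps controller that changes
// which Git revision an application tracks.
type TargetSetter interface {
	SetTargetRevision(app, revision string) error
}

// RollbackApp re-points an application at an earlier, known-good revision;
// the GitOps controller's normal reconciliation then restores that state.
func RollbackApp(ctrl TargetSetter, app, goodRevision string) error {
	return ctrl.SetTargetRevision(app, goodRevision)
}
```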
Closing statements and looking ahead
As with any new technology, there will be cultural resistance from advocates of traditional approaches, and a tendency among enthusiastic adopters of novel approaches to discard past lessons. It is up to the platform and orchestration providers to find the right balance between the traditional and new patterns.
The operational models of Kubernetes and Kubernetes Operators promote the use of granular and specialized controllers. With this in mind, a natural evolution is to use the GitOps methodology to decouple artifact definitions and configurations from the cluster hosting them and to keep them under version control. If a cluster needs to be replaced, a single source of truth exists to seamlessly replicate a cluster with configurations and artifact definitions identical to the one it is replacing.
GitOps methodologies and principles are evolving as customers apply the pattern to new fields. The industry and communities are actively working to consolidate principles, develop tools, and document best practices. The Cloud Native Computing Foundation (CNCF) GitOps working group is one community fostering collaboration among companies and organizations. Within the telecom industry, the Telecom Infra Project is exploring similar principles within closed-loop automation and orchestration for Core networks and Open RAN deployments.
Red Hat is an active member of these communities and collaborates with the ecosystem towards consolidation and cross-pollination of related work.
Learn more about the Red Hat products supporting GitOps methodologies, including Red Hat Advanced Cluster Management for Kubernetes, Red Hat OpenShift GitOps, and the Red Hat OpenShift Container Platform.
About the author
William is a Product Manager in Red Hat's AI Business Unit and is a seasoned professional and inventor at the forefront of artificial intelligence. With expertise spanning high-performance computing, enterprise platforms, data science, and machine learning, William has a track record of introducing cutting-edge technologies across diverse markets. He now leverages this comprehensive background to drive innovative solutions in generative AI, addressing complex customer challenges in this emerging field. Beyond his professional role, William volunteers as a mentor to social entrepreneurs, guiding them in developing responsible AI-enabled products and services. He is also an active participant in the Cloud Native Computing Foundation (CNCF) community, contributing to the advancement of cloud native technologies.