Whether your OpenShift cluster(s) are hosted on-premises or in the cloud, downtime happens. It could be a temporary outage, or it could be an extended outage with no resolution in sight. This article explains how GitOps can be used for the rapid redeployment of your Kubernetes objects. One important thing to note is that GitOps can only restore Kubernetes objects; any persistent data required for an application to function correctly must be restored separately for stateful applications, such as databases, to be back in service.
Continuing with our usage of Argo CD, we will discuss two different ways to start the process of restoring these objects. For both procedures we assume that a new cluster has been deployed and that Argo CD has been deployed on it. We also assume that the same OpenShift routes and DNS zones will be used, because the OpenShift routes should be stored within git as well.
Using the Argo CD binary, you will need to manually define the repositories and Argo CD applications. This process requires a list of the repositories and the commands to define them within Argo CD. For example, we could run the following to restore our simple-app project.
argocd repo add https://github.com/cooktheryan/blogpost
argocd app create --project default \
--name simple-app --repo https://github.com/cooktheryan/blogpost.git \
--path . --dest-server https://kubernetes.default.svc \
--dest-namespace simple-app --revision master
This process works as long as you have all of the repositories, git branches, and namespaces documented. Once these items are defined and loaded into Argo CD, the objects will begin to deploy within the cluster and sync with Argo CD.
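If that documentation is kept in simple text files, the manual steps above can be scripted. The sketch below assumes a file named repos.txt with one repository URL per line and a pipe-delimited apps.txt; both file names and the field layout are our own conventions for this example, not part of Argo CD.

```shell
# restore_argocd: re-register every repository and application from two
# inventory files. repos.txt holds one repository URL per line; apps.txt
# holds "name|repo|path|namespace|revision" per line.
restore_argocd() {
  while IFS= read -r repo; do
    argocd repo add "$repo"
  done < repos.txt

  while IFS='|' read -r name repo path ns rev; do
    argocd app create --project default \
      --name "$name" --repo "$repo" \
      --path "$path" --dest-server https://kubernetes.default.svc \
      --dest-namespace "$ns" --revision "$rev"
  done < apps.txt
}
```

Keeping the inventory files in git alongside everything else means the script itself survives the outage.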
With some planning, we can make this process better by using git to manage our GitOps resources. Storing a copy of the configmap and the various Argo CD applications within git, or even something as simple as a file or object share that exists outside of the data center hosting the OpenShift cluster, allows us to rapidly redefine the objects managed by Argo CD.
First, let's take a look at the configmap in YAML format. We will see the repositories currently defined within Argo CD.
oc get configmap -n argocd argocd-cm -o yaml
apiVersion: v1
data:
  repositories: |
    - url: https://github.com/cooktheryan/blogpost
    - url: http://github.com/openshift/federation-dev.git
    - sshPrivateKeySecret:
        key: sshPrivateKey
        name: repo-federation-dev-3296805493
      url: git@github.com:openshift/federation-dev.git
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"argocd-cm","app.kubernetes.io/part-of":"argocd"},"name":"argocd-cm","namespace":"argocd"}}
  creationTimestamp: "2019-09-26T18:46:47Z"
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
  name: argocd-cm
  namespace: argocd
  resourceVersion: "474704"
  selfLink: /api/v1/namespaces/argocd/configmaps/argocd-cm
  uid: fe331084-e08d-11e9-a49a-52fdfc072182
We will next save the configmap in YAML format.
oc get configmap -n argocd argocd-cm -o yaml --export > argocd-cm.yaml
But what if my repository requires an SSH key? In that case we will need to export the secret as well. If your repositories do not require an SSH key or other authentication, skip this step.
The configmap identifies the name of the secret that is used by the repository.
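Because the configmap holds the secret names, they can be extracted rather than tracked by hand. Below is a small text-processing sketch; a YAML-aware tool such as yq would be more robust.

```shell
# list_repo_secrets: print the name of each secret referenced by an
# sshPrivateKeySecret entry in the argocd-cm configmap
list_repo_secrets() {
  oc get configmap -n argocd argocd-cm -o yaml |
    awk '/sshPrivateKeySecret:/ { insec = 1; next }
         insec && /name:/       { print $2; insec = 0 }'
}
```

The printed names are the secrets to export alongside the configmap.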
oc get secrets -n argocd repo-federation-dev-3296805493 -o yaml > repo-federation-dev-secret.yaml
We will now need to back up the Argo CD applications. This can be done per individual application or by exporting all of the applications to a single YAML file; for this example, we only have one application within Argo CD. We suggest storing the applications individually, in the same git repository where the Kubernetes objects are defined, so that they are under revision control and available in the event of a disaster.
oc get applications -o yaml --export > simple-app-backup.yaml
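To follow the per-application suggestion above, each application can be exported to its own file and committed. A sketch, where the argocd-backup directory name is our assumption:

```shell
# backup_argocd_apps: write each Argo CD application to its own YAML file
# so the files can be committed to git individually
backup_argocd_apps() {
  mkdir -p argocd-backup
  for app in $(oc get applications -n argocd -o name); do
    # "$app" looks like application.argoproj.io/simple-app; keep the short name
    oc get "$app" -n argocd -o yaml --export > "argocd-backup/${app#*/}.yaml"
  done
}
```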
Now that we have all of the required Argo CD objects, we can import them into the Argo CD server that was deployed when the new environment was brought online.
First, we will update the configmap to include our previously defined repositories.
oc apply -f argocd-cm.yaml -n argocd
OPTIONAL: If credentials were used for any of the repositories, the secret containing them must be imported before listing the repositories.
oc apply -f repo-federation-dev-secret.yaml -n argocd
Next, we will restore our Argo CD applications, which will cause our Kubernetes objects, such as namespaces, services, deployments, and routes, to deploy onto the cluster.
oc create -f simple-app-backup.yaml -n argocd
At this point, all of the objects should begin to deploy and the applications within Argo CD should report a healthy state. As with all backup solutions, it makes sense to test this DR procedure frequently. This could be done on a per-application basis on another cluster or with CodeReady Containers.
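Those DR tests are easier to automate with a quick health check. The sketch below flags any application that is not both Synced and Healthy, reading the sync and health fields that Argo CD reports on each Application's status:

```shell
# verify_argocd_apps: print "<name> is not ready" for any application that
# is not both Synced and Healthy, and return non-zero if one is found
verify_argocd_apps() {
  oc get applications -n argocd \
    -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.sync.status}{" "}{.status.health.status}{"\n"}{end}' |
    awk '$2 != "Synced" || $3 != "Healthy" { print $1 " is not ready"; bad = 1 }
         END { exit bad }'
}
```

Running this at the end of a restore drill gives a simple pass/fail signal for the whole cluster.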
In the coming weeks we will publish another disaster recovery post containing information on what to do if your cluster fails and how
a Global Load Balancer can keep the lights on.