Is it too late to integrate GitOps?
By: Ryan Cook (rcook@redhat.com)
The idiom “missed the boat” describes a lost opportunity or a chance to do something that has passed. With OpenShift, the excitement to start using this new and cool product immediately may create your own “missed the boat” moment with regard to managing and maintaining deployments, routes, and other OpenShift objects. But what if the opportunity isn’t completely gone?
Continuing our series on GitOps, this article walks through the process of migrating an application and its resources that were created manually to a process in which a GitOps tool manages the assets. To illustrate the process, we will first deploy an httpd application manually. Using the steps below we will create a namespace, deployment, and service, and then expose the service, which creates a route.
oc create -f https://raw.githubusercontent.com/openshift/federation-dev/master/labs/lab-4-assets/namespace.yaml
oc create -f https://raw.githubusercontent.com/openshift/federation-dev/master/labs/lab-4-assets/deployment.yaml
oc create -f https://raw.githubusercontent.com/openshift/federation-dev/master/labs/lab-4-assets/service.yaml
oc expose svc/httpd -n simple-app
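Before starting the migration, it is worth a quick (optional) check that the manually created objects are in place:
oc get deployment,svc,route -n simple-app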
We start with our sample application managed manually, then bring it under GitOps control in a way that ensures the application remains available. The steps are:
- Define a repository for the code
- Export our current objects and load into git
- Select and deploy a GitOps tool
- Add the repository to our GitOps tool
- Define the application in our GitOps tool
- Perform a dry run of the object using the GitOps tool
- Perform sync of the objects using the GitOps tool
- Enable pruning and auto-syncing of the objects
As stated in the previous GitOps articles, when using GitOps the git repository is the source of truth for all of the objects within your Kubernetes cluster(s). We will assume that a git repository service is already in use within your organization. The repository can be public or private, but it must be accessible by the Kubernetes clusters. It can be the same repository where the application code lives, or a separate repository used specifically for deployments. It is suggested that the repository have strict permissions, since secrets, routes, and other objects need to be stored in it.
For this exercise, create a new public repository on GitHub. The repository can be named whatever you like, but for this example we will use the name blogpost.
If the YAML files for the objects have not previously been stored in git or locally, the oc or kubectl binary can help us out. Below we will request the YAML for our namespace, deployment, service, and route. Clone the newly created repository and cd into the directory.
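Assuming the repository created above (substitute your own GitHub account for cooktheryan if it differs), cloning and entering the directory looks like this:
git clone https://github.com/cooktheryan/blogpost.git
cd blogpost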
oc get namespace simple-app -o yaml --export > namespace.yaml
oc get deployment httpd -o yaml -n simple-app --export > deployment.yaml
oc get service httpd -o yaml -n simple-app --export > service.yaml
oc get route httpd -o yaml -n simple-app --export > route.yaml
Make the following modification to deployment.yaml to remove a field that Argo CD cannot sync properly.
sed -i '/\sgeneration: .*/d' deployment.yaml
We must also modify the route. First we set a multiline variable, and then we replace ingress: null with the contents of that variable.
export ROUTE=" ingress:\\
- conditions:\\
- status: 'True'\\
type: Admitted"
sed -i "s/ ingress: null/$ROUTE/g" route.yaml
Once we have these files, it is time to commit them to the git repository. From this point forward, the repository is the source of truth for these objects, and manual changes to any of them should be prohibited.
git add .
git commit -m 'initial commit of objects'
git push origin master
We will assume that Argo CD has already been deployed based on this blog post. Next, we add the newly created repository containing the simple-app objects to Argo CD. Ensure that the repository below matches the one created in the previous steps.
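If the argocd CLI is not yet logged in to the Argo CD server, log in first. The hostname below is a placeholder matching the example route shown later in this post; replace it with your own Argo CD server route:
argocd login argocd-server-route-argocd.apps.example.com --username admin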
argocd repo add https://github.com/cooktheryan/blogpost
Next, create the app. The app definition provides the values the GitOps tool needs: the repository and path to use, the OpenShift cluster where the objects are managed, the specific branch of the repository, and whether or not to sync assets automatically.
argocd app create --project default \
--name simple-app --repo https://github.com/cooktheryan/blogpost.git \
--path . --dest-server https://kubernetes.default.svc \
--dest-namespace simple-app --revision master --sync-policy none
Once the application has been defined in Argo CD, the tool will begin comparing the objects currently deployed against those defined in the repository. Because the sync policy is currently disabled and pruning is not enabled, no items will be changed at this point. One thing you will notice is that the application in the Argo CD UI shows as “Out of Sync”; this is due to a missing label that Argo CD supplies. This label will not cause any assets to be redeployed when we run the sync.
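To inspect exactly which fields Argo CD considers out of sync before changing anything, the diff subcommand can be used as an optional step:
argocd app diff simple-app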
Now let’s run a dry run to ensure no errors exist within our files.
argocd app sync simple-app --dry-run
If no errors show up during the dry run we can move forward with the sync.
argocd app sync simple-app
When running the command argocd app get on our simple-app application, we should see that the application is “Healthy” and “Synced”. This means that all of the resources in our git repository match those that are deployed.
argocd app get simple-app
Name: simple-app
Project: default
Server: https://kubernetes.default.svc
Namespace: simple-app
URL: https://argocd-server-route-argocd.apps.example.com/applications/simple-app
Repo: https://github.com/cooktheryan/blogpost.git
Target: master
Path: .
Sync Policy: <none>
Sync Status: Synced to master (60e1678)
Health Status: Healthy
...
At this point we can enable “auto-sync” and “pruning” to ensure that nothing is created manually and that any time an object is updated and pushed to the repository it will be deployed.
argocd app set simple-app --sync-policy automated --auto-prune
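With auto-sync and pruning enabled, a change pushed to the repository now rolls out without any manual intervention. As a rough illustration (this assumes the exported deployment specifies replicas: 1), scale the deployment through git and wait for Argo CD to converge:
sed -i 's/replicas: 1/replicas: 2/' deployment.yaml
git commit -am 'scale httpd to 2 replicas'
git push origin master
argocd app wait simple-app --sync --health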
Now you have successfully migrated an application that did not initially use GitOps to a GitOps managed application.