Argo Workflows is a container-native, open source project that uses Kubernetes to run its workflow steps. Argo enables users to create multi-step workflows that orchestrate parallel jobs and capture the dependencies between tasks. The framework supports parameterization, conditional execution, passing values between steps, timeouts, retry logic, recursion, flow control, and looping.
HashiCorp Vault is a secrets management tool specifically designed to control access to sensitive credentials in a low-trust environment. Vault provides a unified interface to any secret while providing tight access control and recording a detailed audit log.
Argo gives you a convenient way to access Red Hat OpenShift secrets, but what if your company uses Vault instead? I'll walk you through how to pull credentials from Vault inside an Argo workflow and package it all up into a Helm chart for easy installation and reuse.
Installation
Installing Argo and Vault is fairly simple. Argo publishes an install YAML file that includes the resources necessary to run Argo; check out the project's GitHub repository for additional information.
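At the time of writing, a quick install looks something like the following (the release URL is illustrative; check the argoproj/argo-workflows releases page for the current version and asset name):

# Create a namespace for Argo and apply the published install manifest
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/latest/download/install.yaml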
Vault has a Helm chart for installation. HashiCorp's guide walks you through installing via the Helm chart and setting up what's needed to access Vault from Red Hat OpenShift. It's also a good idea to look at the tutorial that walks you through injecting secrets via the Vault Agent sidecar.
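A minimal Vault installation with the official Helm chart looks roughly like this (the release name and namespace are your choice; the chart deploys the Agent Injector by default):

# Add HashiCorp's chart repository and install Vault
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault --namespace vault --create-namespace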
The Vault Agent Sidecar Injector allows containers within a pod to consume Vault secrets without being Vault-aware. It does so by mutating pod specifications to include Vault Agent containers that render Vault secrets to a shared memory volume. This approach works especially well for secrets that are dynamic and frequently updated.
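Before the injector can do its work, Vault needs to trust the workflow pod's service account. A minimal sketch using Vault's Kubernetes auth method, assuming a service account named argo-workflow in the argo namespace, and a policy and role both named git-creds to match the annotations used later (all of these names are illustrative):

# Enable the Kubernetes auth method (auth/kubernetes/config must also be
# configured with your cluster's API server address and CA)
vault auth enable kubernetes

# Policy granting read access to the Git credentials (KV v2 data path)
vault policy write git-creds - <<EOF
path "secret/data/git-creds" {
  capabilities = ["read"]
}
EOF

# Role binding the workflow's service account to the policy
vault write auth/kubernetes/role/git-creds \
    bound_service_account_names=argo-workflow \
    bound_service_account_namespaces=argo \
    policies=git-creds \
    ttl=24h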
Vault Integration
Git Input Artifact
Argo Workflows has a very convenient feature for fetching source code from Git, called the GitArtifact. The GitArtifact allows for basic auth or SSH private key access. If your credentials are stored in an OpenShift secret, you could do the following:
templates:
- name: git-clone
  inputs:
    artifacts:
    - name: argo-source
      path: "{{workflow.parameters.git-repo-path}}"
      git:
        repo: "{{workflow.parameters.git-repo-url}}"
        revision: "{{workflow.parameters.git-repo-revision}}"
        usernameSecret:
          name: "{{workflow.parameters.git-secret-name}}"
          key: username
        passwordSecret:
          name: "{{workflow.parameters.git-secret-name}}"
          key: password
  container:
    image: alpine/git
    command: [sh, -c]
    args: ["git status && ls && cat VERSION"]
    workingDir: "{{workflow.parameters.git-repo-path}}"
However, what if your credentials are stored in Vault rather than in an OpenShift secret?
Argo Git Step
Since Argo does not have built-in support for Vault, we cannot use the GitArtifact described above. Luckily, we can take advantage of the Vault Agent's ability to render secrets as files on the pod's filesystem. If your Vault secret is just a Git token, you could do the following:
templates:
- name: git-clone
  metadata:
    annotations:
      vault.hashicorp.com/agent-inject: "true"
      vault.hashicorp.com/role: "git-creds"
      vault.hashicorp.com/secret-volume-path: "/home"
      vault.hashicorp.com/agent-inject-secret-token: "secret/git-creds"
      vault.hashicorp.com/agent-inject-template-token: |
        {{- with secret "secret/git-creds" -}}
        {{ .Data.data.token }}
        {{- end }}
  container:
    image: alpine/git
    command: [sh, -c]
    args: ["git clone -b {{workflow.parameters.git-repo-revision}} https://`cat /home/token`@{{workflow.parameters.git-repo-url}} . && git status && ls && cat README.md"]
    workingDir: /src
Here the Vault Agent writes the Git token to the `/home/token` file, and the backtick command substitution cats the file into the clone URL. This provides Git with the necessary token to access the private repository.
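For reference, the `secret/git-creds` entry read above could have been seeded with Vault's KV secrets engine like this (the token value is a placeholder):

# Store the Git token at the path the agent template reads from
vault kv put secret/git-creds token=<YOUR_GIT_TOKEN>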
VolumeClaimTemplates
Argo workflows can define `volumeClaimTemplates`, a list of claims that the workflow's containers are allowed to reference. The workflow controller creates the claims at the beginning of the workflow and deletes them upon its completion. If you need multiple steps to access your source code without having to clone it each time, you can use a volume claim template. We define our claim in the `Workflow.spec` like this:
volumeClaimTemplates:
- metadata:
    name: workdir
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 500Mi
Our Argo step to clone the repo needs to add the `volumeMounts` section like this:
templates:
- name: git-clone
  metadata:
    annotations:
      vault.hashicorp.com/agent-inject: "true"
      vault.hashicorp.com/role: "git-creds"
      vault.hashicorp.com/secret-volume-path: "/home"
      vault.hashicorp.com/agent-inject-secret-token: "secret/git-creds"
      vault.hashicorp.com/agent-inject-template-token: |
        {{- with secret "secret/git-creds" -}}
        {{ .Data.data.token }}
        {{- end }}
  container:
    image: alpine/git
    command: [sh, -c]
    args: ["git clone -b {{workflow.parameters.git-repo-revision}} https://`cat /home/token`@{{workflow.parameters.git-repo-url}} ."]
    workingDir: "/gen-source{{workflow.parameters.git-repo-path}}"
    volumeMounts:
    - name: workdir
      mountPath: /gen-source
Any subsequent step that needs the source code includes the same `volumeMounts` section and can access the source from its pod at `/gen-source/src`.
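For example, a hypothetical follow-up step that inspects the cloned source could look like this:

- name: check-version
  container:
    image: alpine/git
    command: [sh, -c]
    args: ["ls && cat VERSION"]
    workingDir: "/gen-source{{workflow.parameters.git-repo-path}}"
    volumeMounts:
    - name: workdir
      mountPath: /gen-source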
Vault Secret as Output Parameter
We can use the same Vault secret injection to expose the secret as an output parameter, which can then be consumed as an input parameter by another step. What if, in a later step, we wanted to push a change to our Git repository? If we reuse the first example where we created the token file, we could change the `container.args` as follows:
container:
  args: ["cat /home/token"]
Now you can reference the output of this step as `tasks.[TASK_NAME].outputs.result` in a DAG (or `steps.[STEP_NAME].outputs.result` in a steps template).
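As a sketch, a DAG could wire that result into a later push step like this (the task and template names are illustrative):

- name: main
  dag:
    tasks:
    - name: read-token
      template: read-token
    - name: git-push
      template: git-push
      dependencies: [read-token]
      arguments:
        parameters:
        - name: token
          value: "{{tasks.read-token.outputs.result}}"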
Helm
Since the Argo workflow is just a CustomResource, we can easily create a Helm chart to define, install, and upgrade our workflows. Argo uses Helm-like templating for its parameters. Workflow parameters can be defined in the workflow spec, such as the following:
spec:
  arguments:
    parameters:
    - name: best-football-team
      value: Steelers
    - name: favorite-drink
      value: whiskey
When defining your step templates, you can set environment variables from these workflow parameters like this:
- name: my-step
  container:
    env:
    - name: FOOTBALL_TEAM
      value: "{{workflow.parameters.best-football-team}}"
    - name: DRINK
      value: "{{workflow.parameters.favorite-drink}}"
If we want to turn this into a Helm chart, we would do the following:
spec:
  arguments:
    parameters:
    - name: best-football-team
      value: "{{ tpl (required "value 'bestFootballTeam' is required" .Values.bestFootballTeam) . }}"
    - name: favorite-drink
      value: "{{ tpl (required "value 'favoriteDrink' is required" .Values.favoriteDrink) . }}"
  templates:
  - name: my-step
    container:
      env:
      - name: FOOTBALL_TEAM
        value: '{{ printf "{{workflow.parameters.best-football-team}}" }}'
      - name: DRINK
        value: '{{ printf "{{workflow.parameters.favorite-drink}}" }}'
Notice the use of the Helm string function `printf`. Since Argo uses double curly braces as template directives, just like Helm, we need to "escape" Argo's parameters. If we did not escape them, Helm would attempt to inject a value of its own, rendering an empty string instead of the intended Argo parameter. There are other ways to escape the double curly braces, but I find the `printf` function to be the cleanest solution.
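Installing the chart and supplying values is then a standard helm install (the chart path and release name here are illustrative):

# Install the workflow chart, providing the required values
helm install my-workflow ./my-workflow-chart \
  --set bestFootballTeam=Steelers \
  --set favoriteDrink=whiskey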
Summary
Unfortunately, Argo does not have built-in support for Vault. Fortunately, it's fairly easy to utilize Vault's Agent Sidecar Injector to inject secrets into your Argo workflows, and Helm makes it easy to parameterize your workflows for reuse. Just don't forget to escape any Argo parameters!
Complete Argo workflows and installation steps can be found in the argo-workflow-vault-integration repo.