Using Object Storage
OpenShift leverages the Kubernetes persistent volume framework to let you provision persistent storage using networked storage. This allows administrators to provide storage for data that needs to persist regardless of the state of the pod. For example, for a database spun up in OpenShift, you can use an NFS backend to store the data.
You can also use persistent storage to save information from an application that accepts things like file uploads.
But what if I want my application to run in ANY OpenShift cluster, independent of the backend storage?
I can use Object Storage as my backend, so I don't need a persistent volume mapped to my application. This also serves another purpose: I can deploy my application in ANY OpenShift environment and still have the data available, regardless of where it's hosted.
Using Secrets
The Secret object type in Kubernetes lets you hold sensitive information such as passwords, OpenShift client config files, dockercfg files, private source repository credentials, etc. by abstracting sensitive content from the pods that use it. Secrets can be used in a variety of ways; one common pattern is to mount a secret into a container as a volume and have the application read it as a config file.
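As a sketch of what such an object looks like declaratively, a minimal Secret manifest might be the following (the base64 values here are made-up placeholders, not real credentials):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3secret
type: Opaque
data:
  # values must be base64-encoded; these are placeholders
  aws-access-key: QUtJQUVYQU1QTEU=
  aws-secret-key: c2VjcmV0LXBsYWNlaG9sZGVy
```

The rest of this post builds the same object imperatively with the `oc` CLI instead.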
I have created a simple "uploader" application that uses S3 storage. https://github.com/christianh814/php-object-store
If you take a look at the index.php file, you'll see in the source code that I am using the file_get_contents function in order to populate the variables needed to connect to S3 (you can see these on lines 211 and 212).
if (!defined('awsAccessKey')) define('awsAccessKey', file_get_contents('/etc/secret/aws-access-key'));
if (!defined('awsSecretKey')) define('awsSecretKey', file_get_contents('/etc/secret/aws-secret-key'));
I will create a new secret, mount it into the pod as a volume, and let OpenShift populate those files with the values I upload.
NOTE: I am going to assume that you have an ec2-creds file somewhere with the right environment variables set up.
Deploying An Application With Secrets
First, I will create the application just as I would create any other application:
$ oc new-app openshift/php~https://github.com/christianh814/php-object-store.git --name=uploader
Now I will source my ec2-creds file that has the keys to access my AWS account.
$ source ~/ec2-creds
Next, I will create some temp files that hold this information.
$ echo -n $AWS_ACCESS_KEY_ID > /tmp/aws-access-key
$ echo -n $AWS_SECRET_ACCESS_KEY > /tmp/aws-secret-key
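The `-n` flag to `echo` matters here: without it, the shell appends a trailing newline that becomes part of the secret value and will silently break authentication against S3. A quick sketch with a made-up key value:

```shell
# Compare byte counts with and without the trailing newline.
# "AKIAEXAMPLE" is a made-up placeholder, not a real access key.
printf 'AKIAEXAMPLE' | wc -c    # 11 bytes -- what S3 expects
echo 'AKIAEXAMPLE' | wc -c      # 12 bytes -- includes the \n
```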
I will upload these files to OpenShift as a secret:
$ oc secrets new s3secret aws-access-key=/tmp/aws-access-key aws-secret-key=/tmp/aws-secret-key
You can view your secret to verify it (NOTE: the values shown are base64-encoded):
$ oc get secret s3secret -o json
{
    "kind": "Secret",
    "apiVersion": "v1",
    "metadata": {
        "name": "s3secret",
        "namespace": "demo",
        "selfLink": "/api/v1/namespaces/demo/secrets/s3secret",
        "uid": "de6397d7-05a8-11e6-a690-5254001539d9",
        "resourceVersion": "27064",
        "creationTimestamp": "2016-04-18T21:02:32Z"
    },
    "data": {
        "aws-access-key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
        "aws-secret-key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
    },
    "type": "Opaque"
}
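Keep in mind that base64 is an encoding, not encryption: anyone who can read the secret object can recover the plaintext. For example, round-tripping a placeholder value (not a real key) through base64:

```shell
# Round-trip a placeholder value through the same base64 encoding
# OpenShift uses for the "data" fields shown above.
echo -n 'AKIAEXAMPLE' | base64            # prints QUtJQUVYQU1QTEU=
echo 'QUtJQUVYQU1QTEU=' | base64 -d       # prints AKIAEXAMPLE
```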
Next, we need to add the secret to the default serviceAccount so that it can be used:
$ oc secrets add serviceaccounts/default secrets/s3secret
This application expects the secrets to be mounted on /etc/secret, so add your secret volume at that mountpoint:
$ oc volume dc/uploader --add --type=secret --secret-name=s3secret -m /etc/secret
This volume addition to the DeploymentConfig will trigger a deployment of the pod.
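Under the hood, here is a sketch of what `oc volume --add` writes into the DeploymentConfig (the volume name is generated by `oc`, so the one shown here is illustrative):

```yaml
spec:
  template:
    spec:
      containers:
      - name: uploader
        volumeMounts:
        - name: s3secret-volume      # illustrative; oc generates the name
          mountPath: /etc/secret
      volumes:
      - name: s3secret-volume
        secret:
          secretName: s3secret
```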
If you check the pod, you'll see the secret volume mounted:
$ oc get pods
NAME               READY     STATUS      RESTARTS   AGE
uploader-1-build   0/1       Completed   0          34m
uploader-2-9ndre   1/1       Running     0          4m

$ oc rsh uploader-2-9ndre
bash-4.2$ df -h /etc/secret
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           920M  8.0K  920M   1% /etc/secret
bash-4.2$ ls -l /etc/secret/
total 8
-r--r--r--. 1 root root 20 Apr 18 17:19 aws-access-key
-r--r--r--. 1 root root 40 Apr 18 17:19 aws-secret-key
Now everything I upload will appear in my S3 bucket. Additionally, anywhere I deploy this application, the files will appear in the list. This means I can deploy this application to OpenShift clusters in different regions in the US and users will get the same experience no matter which pod/region they land on.
Summary
In this blog you saw how you can use secrets to store sensitive data inside OpenShift. You also saw how to use that to build an application that uploads files into an Object Storage system.
About the author
Christian is a well-rounded technologist with experience in infrastructure engineering, system administration, enterprise architecture, tech support, advocacy, and product management. He is passionate about open source and containerizing the world one application at a time. He is currently a maintainer of the OpenGitOps project, a maintainer of the Argo project, and works as a Technical Marketing Engineer and Tech Lead at Cisco. He focuses on GitOps practices, DevOps, Kubernetes, network security, and containers.