OpenShift API for Data Protection (OADP) enables backup, restore, and disaster recovery of applications on an OpenShift cluster. Data that can be protected with OADP includes Kubernetes resource objects, persistent volumes, and internal images. OADP is designed to protect application workloads on a single OpenShift cluster.

Red Hat OpenShift® Data Foundation is software-defined storage for containers. Engineered as the data and storage services platform for Red Hat OpenShift, Red Hat OpenShift Data Foundation helps teams develop and deploy applications quickly and efficiently across clouds.

The terms Project and namespace may be used interchangeably in this guide.

Prerequisites

Installing OpenShift Data Foundation Operator

We will be using OpenShift Data Foundation to simplify application deployment across cloud providers; this is covered in a later section.

  1. Open the OpenShift Web Console by navigating to the URL returned by the command below. Make sure you are in the Administrator view, not the Developer view.

    oc get route console -n openshift-console -ojsonpath="{.spec.host}"

    Authenticate with your credentials if necessary.

  2. Navigate to OperatorHub, then search for and install OpenShift Data Foundation.

    [Screenshot: odfInstall]

Creating StorageSystem

[Screenshot: ODFfinishedInstall]

  1. Click the Create StorageSystem button after the install has completed (the button turns blue).
  2. Go to Product Documentation for Red Hat OpenShift Data Foundation 4.9
    1. Filter Category by Deploying
    2. Open the deployment documentation for your cloud provider.
    3. Follow the Creating an OpenShift Data Foundation cluster instructions.

Verify OpenShift Data Foundation Operator installation

You can validate the successful deployment of the OpenShift Data Foundation cluster by following Verifying OpenShift Data Foundation deployment in the deployment documentation above, or with the following commands, both of which should print Ready:

oc get storagecluster -n openshift-storage ocs-storagecluster -o jsonpath='{.status.phase}{"\n"}'

And for the Multi-Cloud Gateway (MCG):

oc get noobaa -n openshift-storage noobaa -o jsonpath='{.status.phase}{"\n"}'

Creating Object Bucket Claim

An Object Bucket Claim creates a persistent object storage bucket where Velero stores backed-up Kubernetes manifests.

  1. Navigate to Storage > Object Bucket Claim and click Create Object Bucket Claim.

    [Screenshot: ObjectBucketClaimCreate]

    Note the Project you are currently in. You can create a new Project or leave it as default.

  2. Set the following values:

    • ObjectBucketClaim Name: oadp-bucket
    • StorageClass: openshift-storage.noobaa.io
    • BucketClass: noobaa-default-bucket-class

    [Screenshot: ObjectBucketClaimFields]

  3. Click Create

    [Screenshot: ObjectBucketClaimReady]

    When the Status is Bound, the bucket is ready.
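
If you prefer the CLI, the same claim can be created by applying an ObjectBucketClaim manifest. This is a minimal sketch, assuming the default Project and the values from the console steps above:

# Create the Object Bucket Claim (adjust the namespace if you used a different Project)
cat <<EOF | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: oadp-bucket
  namespace: default
spec:
  generateBucketName: oadp-bucket
  storageClassName: openshift-storage.noobaa.io
  additionalConfig:
    bucketclass: noobaa-default-bucket-class
EOF

# The claim is ready when this prints Bound
oc get obc oadp-bucket -n default -o jsonpath='{.status.phase}{"\n"}'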

Gathering information from Object Bucket

  1. Gather the bucket name and host

    • Using OpenShift CLI:

      • Get bucket name

        oc get configmap oadp-bucket -n default -o jsonpath='{.data.BUCKET_NAME}{"\n"}'
      • Get bucket host

        oc get configmap oadp-bucket -n default -o jsonpath='{.data.BUCKET_HOST}{"\n"}'
    • Using OpenShift Web Console:

      1. Click on Object Bucket obc-default-oadp-bucket and select YAML view

        [Screenshot: obc-default-oadp-bucket YAML]

        Take note of the following information, which may differ from the guide:

        • .spec.endpoint.bucketName: seen in the screenshot as oadp-bucket-c21e8d02-4d0b-4d19-a295-cecbf247f51f
        • .spec.endpoint.bucketHost: seen in the screenshot as s3.openshift-storage.svc
  2. Gather oadp-bucket secret

    • Using OpenShift CLI:
      1. Get AWS_ACCESS_KEY_ID

        oc get secret oadp-bucket -n default -o jsonpath='{.data.AWS_ACCESS_KEY_ID}{"\n"}' | base64 -d
      2. Get AWS_SECRET_ACCESS_KEY

        oc get secret oadp-bucket -n default -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}{"\n"}' | base64 -d
    • Using OpenShift Web Console
      1. Navigate to Storage > Object Bucket Claim > oadp-bucket. Ensure you are in the same Project used to create oadp-bucket.
      2. Click on oadp-bucket in the bottom left to view the bucket secrets.
      3. Click Reveal values to see the bucket secret values. Copy the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values and save them; we'll need them later when installing the OADP Operator.

    Note: regardless of the cloud provider, the secret keys seen here are prefixed with AWS_.

  3. Now you should have the following information (see the CLI sketch after this list for collecting all four values in one go):

    • bucket name
    • bucket host
    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
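
To capture all four values from the CLI in one go, here is a minimal sketch; it assumes the claim lives in the default Project, and the variable names are only for illustration:

# Bucket coordinates from the ConfigMap created alongside the claim
BUCKET_NAME=$(oc get configmap oadp-bucket -n default -o jsonpath='{.data.BUCKET_NAME}')
BUCKET_HOST=$(oc get configmap oadp-bucket -n default -o jsonpath='{.data.BUCKET_HOST}')

# Credentials from the Secret of the same name
AWS_ACCESS_KEY_ID=$(oc get secret oadp-bucket -n default -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
AWS_SECRET_ACCESS_KEY=$(oc get secret oadp-bucket -n default -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)

echo "$BUCKET_NAME $BUCKET_HOST"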

Deploying an application

Since we are using OpenShift Data Foundation, we can use common application definitions across cloud providers regardless of available storage class.

Clone our demo apps repository and change into the cloned directory.

git clone https://github.com/kaovilai/mig-demo-apps --single-branch -b oadp-blog-rocketchat
cd mig-demo-apps

Apply the Rocket.Chat manifests.

oc apply -f apps/rocket-chat/manifests/
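
Before moving on, you can watch the pods come up; the manifests deploy into the rocket-chat Project used throughout the rest of this guide:

# Watch until the Rocket.Chat and MongoDB pods reach Running (Ctrl+C to stop)
oc get pods -n rocket-chat -w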

Open the Rocket.Chat setup wizard in your browser using the URL returned by this command:

oc get route rocket-chat -n rocket-chat -ojsonpath="{.spec.host}"

Enter your setup information and remember it, as we may need it later.

Skip to step 4, select "Keep standalone", and click "Continue".

Press "Go to your workspace"

[Screenshot: readyToUse]

"Enter"

Go to the #general channel and type a message.

[Screenshot: firstMessage]

Installing OpenShift API for Data Protection Operator

You can install the OADP Operator from OpenShift's OperatorHub. You can search for the operator using keywords such as oadp or velero.

[Screenshot: OADP-OLM-1]

Now click on Install.

Finally, click on Install again. This will create Project openshift-adp if it does not exist, and install the OADP operator in it.

Create credentials secret for OADP Operator to use

We will now create a secret named cloud-credentials in Project openshift-adp, using the values obtained from the Object Bucket Claim.

From the OpenShift Web Console sidebar, navigate to Workloads > Secrets and click Create > Key/value secret.

[Screenshot: secretKeyValCreate]

Fill out the following fields:

  • Secret name: cloud-credentials
  • Key: cloud
  • Value:
    • Replace the placeholders below with your own values from the earlier steps and enter the result in the value field.
      [default]
      aws_access_key_id=<INSERT_VALUE>
      aws_secret_access_key=<INSERT_VALUE>
      Note: do not use quotes when substituting your values for the INSERT_VALUE placeholders.

[Screenshot: secretKeyValFields]
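
If you prefer the CLI, an equivalent sketch (credentials-velero is just a scratch file name used for illustration) looks like this:

# Write the credentials file in the INI format shown above
cat <<EOF > credentials-velero
[default]
aws_access_key_id=<INSERT_VALUE>
aws_secret_access_key=<INSERT_VALUE>
EOF

# Create the secret with a single key named "cloud" in the openshift-adp Project
oc create secret generic cloud-credentials -n openshift-adp --from-file=cloud=credentials-velero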

Create the DataProtectionApplication Custom Resource

From the sidebar, navigate to Operators > Installed Operators.

Create an instance of the DataProtectionApplication (DPA) CR by clicking on Create Instance as highlighted below:

dpa-cr

Select Configure via: YAML view

Finally, copy the YAML provided below and update the commented fields with the information obtained earlier.

  • update .spec.backupLocations[0].velero.objectStorage.bucket with the bucket name from the earlier steps.
  • update .spec.backupLocations[0].velero.config.s3Url with the bucket host from the earlier steps.

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: example-dpa
  namespace: openshift-adp
spec:
  configuration:
    velero:
      featureFlags:
      - EnableCSI
      defaultPlugins:
      - openshift
      - aws
      - csi
  backupLocations:
  - velero:
      default: true
      provider: aws
      credential:
        name: cloud-credentials
        key: cloud
      objectStorage:
        bucket: "oadp-bucket-c21e8d02-4d0b-4d19-a295-cecbf247f51f" #update this
        prefix: velero
      config:
        profile: default
        region: "localstorage"
        s3ForcePathStyle: "true"
        s3Url: "http://s3.openshift-storage.svc/" #update this if necessary

[Screenshot: create-dpa-cr-yaml]

The object storage we are using is S3-compatible storage provided by OpenShift Data Foundation. We use the custom s3Url capability of the Velero AWS plugin so that Velero can reach the OpenShift Data Foundation endpoint inside the cluster.

Click Create

Verify install

To verify that all of the expected resources have been created, run the following command:

oc get all -n openshift-adp

The results should look similar to:

NAME                                                     READY   STATUS    RESTARTS   AGE
pod/oadp-operator-controller-manager-67d9494d47-6l8z8    2/2     Running   0          2m8s
pod/oadp-velero-sample-1-aws-registry-5d6968cbdd-d5w9k   1/1     Running   0          95s
pod/velero-588db7f655-n842v                              1/1     Running   0          95s

NAME                                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.70.140    <none>        8443/TCP   2m8s
service/oadp-velero-sample-1-aws-registry-svc              ClusterIP   172.30.130.230   <none>        5000/TCP   95s

NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/restic    3         3         3       3            3           <none>          96s

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oadp-operator-controller-manager    1/1     1            1           2m9s
deployment.apps/oadp-velero-sample-1-aws-registry   1/1     1            1           96s
deployment.apps/velero                              1/1     1            1           96s

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/oadp-operator-controller-manager-67d9494d47    1         1         1       2m9s
replicaset.apps/oadp-velero-sample-1-aws-registry-5d6968cbdd   1         1         1       96s
replicaset.apps/velero-588db7f655                              1         1         1       96s
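
It is also worth confirming that Velero can reach the bucket. The DataProtectionApplication creates a BackupStorageLocation, and its phase should read Available:

# The phase should be Available once Velero has validated the bucket
oc get backupstoragelocations -n openshift-adp -o jsonpath='{.items[0].status.phase}{"\n"}'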

Modifying VolumeSnapshotClass

Setting a DeletionPolicy of Retain on the VolumeSnapshotClass preserves the volume snapshot in the storage system for the lifetime of the Velero backup. It also prevents the volume snapshot from being deleted from the storage system if a disaster destroys the namespace containing the VolumeSnapshot object.

To back up CSI-backed PVCs, the Velero CSI plugin chooses the VolumeSnapshotClass in the cluster that has the same driver name and also has the velero.io/csi-volumesnapshot-class: "true" label set on it.

  • Using OpenShift CLI

    oc patch volumesnapshotclass ocs-storagecluster-rbdplugin-snapclass --type=merge -p '{"deletionPolicy": "Retain"}'
    oc label volumesnapshotclass ocs-storagecluster-rbdplugin-snapclass velero.io/csi-volumesnapshot-class="true"
  • Using OpenShift Web Console

    Navigate to Storage > VolumeSnapshotClasses and click ocs-storagecluster-rbdplugin-snapclass

    Click YAML view and modify the deletionPolicy and labels values as shown below:

      apiVersion: snapshot.storage.k8s.io/v1
    - deletionPolicy: Delete
    + deletionPolicy: Retain
      driver: openshift-storage.rbd.csi.ceph.com
      kind: VolumeSnapshotClass
      metadata:
        name: ocs-storagecluster-rbdplugin-snapclass
    +   labels:
    +     velero.io/csi-volumesnapshot-class: "true"
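
Whichever method you used, a quick check confirms both changes took effect:

# Should print Retain
oc get volumesnapshotclass ocs-storagecluster-rbdplugin-snapclass -o jsonpath='{.deletionPolicy}{"\n"}'
# The velero.io/csi-volumesnapshot-class=true label should appear in the output
oc get volumesnapshotclass ocs-storagecluster-rbdplugin-snapclass --show-labels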

Backup application

From the side menu, navigate to Operators > Installed Operators. Under Project openshift-adp, click on OADP Operator. Under Provided APIs > Backup, click on Create instance.

[Screenshot: backupCreateInstance]

In IncludedNamespaces, add rocket-chat

[Screenshot: backupRocketChat]

Click Create.

The status of the backup should eventually show Phase: Completed.
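
You can also follow the backup from the CLI; this sketch assumes you kept the default instance name, backup:

# Prints the backup phase, e.g. InProgress or Completed
oc get backup backup -n openshift-adp -o jsonpath='{.status.phase}{"\n"}'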

Uhh what? Disasters?

Someone forgot their breakfast and their brain is deprived of minerals. They proceeded to delete the rocket-chat namespace.

Navigate to Home > Projects > rocket-chat.

[Screenshot: deleteRocketChat]

Confirm deletion by typing rocket-chat and click Delete.

Wait until Project rocket-chat is deleted.

The Rocket.Chat application URL should no longer work.

Restore application

An eternity of time has passed.

You finally had breakfast and your brain is working again. Realizing the chat application is down, you decided to restore it.

From the side menu, navigate to Operators > Installed Operators. Under Project openshift-adp, click on OADP Operator. Under Provided APIs > Restore, click on Create instance.

[Screenshot: createRestoreInstance]

Under Backup Name, type backup

In IncludedNamespaces, add rocket-chat, and check restorePVs.

[Screenshot: restoreRocketChat]

Click Create.

The status of the restore should eventually show Phase: Completed.
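
As with the backup, you can follow progress from the CLI; restore here assumes you kept the default instance name:

# Prints the restore phase, e.g. InProgress or Completed
oc get restore restore -n openshift-adp -o jsonpath='{.status.phase}{"\n"}'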

After a few minutes, you should see the chat application up and running. You can check via Workloads > Pods > Project: rocket-chat and see the following:

[Screenshot: rocketChatReady]

Try to access the chat application via URL:

oc get route rocket-chat -n rocket-chat -ojsonpath="{.spec.host}"

Check that the previous message still exists.

[Screenshot: firstMessage]

Conclusion

Phew... what a ride. We have covered the basic usage of the OpenShift API for Data Protection (OADP) Operator, Velero, and OpenShift Data Foundation.

Protect your data with OADP so you can rest easy knowing restoration is possible whenever your team forgets their breakfast.

Remove workloads from this guide

oc delete ns openshift-adp rocket-chat openshift-storage

If the openshift-storage Project is stuck, follow the troubleshooting guide.
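
If you created the Object Bucket Claim in the default Project, you may also want to remove it (a sketch; adjust the namespace if you used a different one):

oc delete obc oadp-bucket -n default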

If you have set the velero alias per this guide, you can remove it by running the following command:

unalias velero
