> Note: This example application deployment assumes you already have an OpenShift test cluster or have followed the instructions in the Deploying OpenShift Container Storage 4 to OpenShift 4 blog post to set up an OpenShift Container Platform 4.2.14+ cluster using OpenShift Container Storage 4.

In this blog post, the ocs-storagecluster-ceph-rbd storage class will be used by an OCP application + database deployment to create RWO (ReadWriteOnce) persistent storage. The persistent storage will be a Ceph RBD (RADOS Block Device) volume in the Ceph pool ocs-storagecluster-cephblockpool.
To do so, we have created a template file, based on the OpenShift rails-pgsql-persistent template, that includes an extra parameter, STORAGE_CLASS, which enables the end user to specify the storage class the Persistent Volume Claim (PVC) should use. Feel free to download https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/configurable-rails-app.yaml to check the format of this template. Search for STORAGE_CLASS in the downloaded content.
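If you just want to see how the parameter is wired in without opening the whole file, a quick grep works (a sketch; the exact surrounding context depends on the current version of the template):

# Download the template and show the lines that reference STORAGE_CLASS
curl -s https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/configurable-rails-app.yaml | grep -n -B 2 -A 2 STORAGE_CLASS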
oc new-project my-database-app

curl https://raw.githubusercontent.com/red-hat-storage/ocs-training/master/ocp4ocs4/configurable-rails-app.yaml | oc new-app -p STORAGE_CLASS=ocs-storagecluster-ceph-rbd -p VOLUME_CAPACITY=5Gi -f -

After the deployment has started, you can monitor its progress with these commands.
oc status
oc get pvc -n my-database-app
This step could take 5 or more minutes. Wait until there are 2 pods with a Running STATUS and 4 pods with a Completed STATUS, as shown below.
watch oc get pods -n my-database-app
Example output:

NAME                                READY   STATUS      RESTARTS   AGE
postgresql-1-deploy                 0/1     Completed   0          5m48s
postgresql-1-lf7qt                  1/1     Running     0          5m40s
rails-pgsql-persistent-1-build      0/1     Completed   0          5m49s
rails-pgsql-persistent-1-deploy     0/1     Completed   0          3m36s
rails-pgsql-persistent-1-hook-pre   0/1     Completed   0          3m28s
rails-pgsql-persistent-1-pjh6q      1/1     Running     0          3m14s

You can exit by pressing Ctrl+C.
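As an alternative to watching the pod list, you can block until each rollout finishes. This is a sketch that assumes the template's default DeploymentConfig names (postgresql and rails-pgsql-persistent):

# Wait for both DeploymentConfigs to finish rolling out
oc rollout status dc/postgresql -n my-database-app
oc rollout status dc/rails-pgsql-persistent -n my-database-app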
Once the deployment is complete, you can test the application and the persistent storage on Ceph. Your HOST/PORT will be different from the example below.
oc get route -n my-database-app

Example output:

NAME                     HOST/PORT                                                                          PATH   SERVICES                 PORT   TERMINATION   WILDCARD
rails-pgsql-persistent   rails-pgsql-persistent-my-database-app.apps.cluster-a26e.sandbox449.opentlc.com          rails-pgsql-persistent

Copy your rails-pgsql-persistent route (it will differ from the one above) into a browser window to create articles. You will need to append /articles to the end.
Example: http://<your route>/articles
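If you prefer to assemble the URL on the command line, the route host can be extracted with jsonpath (a sketch; the route name comes from the template):

# Print the full articles URL for this deployment
echo "http://$(oc get route rails-pgsql-persistent -n my-database-app -o jsonpath='{.spec.host}')/articles"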
Enter the username and password below to create articles and comments. The articles and comments are saved in a PostgreSQL database, which stores its tablespaces on the Ceph RBD volume that was provisioned using the ocs-storagecluster-ceph-rbd storage class during the application deployment.
username: openshift
password: secret
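To confirm the articles really land in PostgreSQL, you can query the database directly. This is a sketch that assumes the template's default name=postgresql pod label and the environment variables set by the standard PostgreSQL image:

# Count the rows in the articles table inside the postgresql pod
oc rsh -n my-database-app $(oc get pods -n my-database-app -l name=postgresql -o name) \
  bash -c 'psql -U "$POSTGRESQL_USER" -d "$POSTGRESQL_DATABASE" -c "SELECT COUNT(*) FROM articles;"'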
Let's now take another look at the Ceph pool ocs-storagecluster-cephblockpool created by the ocs-storagecluster-ceph-rbd storage class. Log into the toolbox pod again.
TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc rsh -n openshift-storage $TOOLS_POD
Run the same Ceph commands as before the application deployment and compare the results to those in the prior section. Notice that the number of objects in ocs-storagecluster-cephblockpool has increased. The third command lists RBDs, and we should now have two.

ceph df
rados df
rbd -p ocs-storagecluster-cephblockpool ls | grep vol
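To focus on just this pool instead of scanning the full rados df listing, the pool can be passed directly (a sketch):

# Show usage statistics for the block pool only
rados df -p ocs-storagecluster-cephblockpool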

You can exit the toolbox either by pressing Ctrl+D or by executing exit.

Matching PVs to RBDs

A handy way to match persistent volumes to Ceph RBDs is to execute:

oc get pv -o 'custom-columns=NAME:.spec.claimRef.name,PVNAME:.metadata.name,STORAGECLASS:.spec.storageClassName,VOLUMEHANDLE:.spec.csi.volumeHandle'

Example output:

NAME                      PVNAME                                     STORAGECLASS                  VOLUMEHANDLE
ocs-deviceset-0-0-gzxjb   pvc-1cf104d2-2033-11ea-ac56-0a9ccb4b29e2   gp2
ocs-deviceset-1-0-s87xm   pvc-1cf33c42-2033-11ea-ac56-0a9ccb4b29e2   gp2
ocs-deviceset-2-0-zcjk4   pvc-1cf4f825-2033-11ea-ac56-0a9ccb4b29e2   gp2
db-noobaa-core-0          pvc-3008e684-2033-11ea-a83b-065b3ec3da7c   ocs-storagecluster-ceph-rbd   0001-0011-openshift-storage-0000000000000001-3c0bb177-2033-11ea-9396-0a580a800406
postgresql                pvc-4ca89d3d-2060-11ea-9a42-02dfa51cba90   ocs-storagecluster-ceph-rbd   0001-0011-openshift-storage-0000000000000001-4cbba393-2060-11ea-9396-0a580a800406
rook-ceph-mon-a           pvc-cac661b6-2032-11ea-ac56-0a9ccb4b29e2   gp2
rook-ceph-mon-b           pvc-cde2d8b3-2032-11ea-ac56-0a9ccb4b29e2   gp2
rook-ceph-mon-c           pvc-d0efbd9d-2032-11ea-ac56-0a9ccb4b29e2   gp2
lab-ossm-hub-data         pvc-dc1d4bdc-2028-11ea-ad6c-0a9ccb4b29e2   gp2

The second half of the VOLUMEHANDLE column mostly matches the name of your RBD inside Ceph. All you have to do is prefix it with csi-vol-, like this:
Get the full RBD name of our PostgreSQL PV in one command:

oc get pv pvc-4ca89d3d-2060-11ea-9a42-02dfa51cba90 -o jsonpath='{.spec.csi.volumeHandle}' | cut -d '-' -f 6- | awk '{print "csi-vol-"$1}'

> Note: You will need to adjust the above command to use your own PVNAME
Example output:
csi-vol-4cbba393-2060-11ea-9396-0a580a800406
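To map every RBD-backed PV in one pass instead of one at a time, a small loop over the PVs works as well (a sketch using oc's jsonpath filtering):

# For each PV bound through the ocs-storagecluster-ceph-rbd storage class,
# print the PV name and the derived RBD image name
for pv in $(oc get pv -o jsonpath='{range .items[?(@.spec.storageClassName=="ocs-storagecluster-ceph-rbd")]}{.metadata.name}{"\n"}{end}'); do
  handle=$(oc get pv "$pv" -o jsonpath='{.spec.csi.volumeHandle}')
  echo "$pv -> csi-vol-$(echo "$handle" | cut -d '-' -f 6-)"
done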
Now we can check the details of our RBD from inside the tools pod:

TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
oc rsh -n openshift-storage $TOOLS_POD rbd -p ocs-storagecluster-cephblockpool info csi-vol-4cbba393-2060-11ea-9396-0a580a800406

Example output:

rbd image 'csi-vol-4cbba393-2060-11ea-9396-0a580a800406':
        size 5 GiB in 1280 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 95e4f3973e8
        block_name_prefix: rbd_data.95e4f3973e8
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Tue Dec 17 00:00:57 2019
        access_timestamp: Tue Dec 17 00:00:57 2019
        modify_timestamp: Tue Dec 17 00:00:57 2019

> Note: You will need to adjust the above command to use your RBD name
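To see how much of the provisioned 5 GiB the database has actually written, rbd du reports per-image usage (a sketch; substitute your own image name):

# Report provisioned vs. actually used space for the RBD image
oc rsh -n openshift-storage $TOOLS_POD rbd -p ocs-storagecluster-cephblockpool du csi-vol-4cbba393-2060-11ea-9396-0a580a800406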

Resources and Feedback

To find out more about OpenShift Container Storage or to take a test drive, visit https://www.openshift.com/products/container-storage/.

If you would like to learn more about what the OpenShift Container Storage team is up to or provide feedback on any of the new 4.2 features, take this brief 3-minute survey.


About the author

Red Hatter since 2018, technology historian and founder of The Museum of Art and Digital Entertainment. Two decades of journalism mixed with technology expertise, storytelling and oodles of computing experience from inception to e-waste recycling. I have taught or had my work used in classes at USF, SFSU, AAU, UC Law Hastings and Harvard Law.

I have worked with the EFF, Stanford, MIT, and Archive.org to brief the US Copyright Office and change US copyright law. We won multiple exemptions to the DMCA, accepted and implemented by the Librarian of Congress. My writings have appeared in Wired, Bloomberg, Make Magazine, SD Times, The Austin American Statesman, The Atlanta Journal Constitution and many other outlets.

I have been written about by the Wall Street Journal, The Washington Post, Wired and The Atlantic. I have been called "The Gertrude Stein of Video Games," an honor I accept, as I live less than a mile from her childhood home in Oakland, CA. I was project lead on the first successful institutional preservation and rebooting of the first massively multiplayer game, Habitat, for the C64, from 1986: https://neohabitat.org. I've consulted and collaborated with the NY MOMA, the Oakland Museum of California, Cisco, Semtech, Twilio, Game Developers Conference, NGNX, the Anti-Defamation League, the Library of Congress and the Oakland Public Library System on projects, contracts, and exhibitions.

 