
Finding block and file OCP application contents in ODF: The infrastructure

Part one of a three-part series on mapping application data in an OpenShift Data Foundation cluster.

Red Hat OpenShift Data Foundation (ODF), previously Red Hat OpenShift Container Storage, is software-defined storage for containers. Engineered as the data and storage services platform for Red Hat OpenShift, ODF helps teams develop and deploy applications quickly and efficiently across clouds. It is a powerful solution that provides block, file, and object storage.

For those used to traditional storage solutions, a question that may arise is how to map where an application's objects are stored within the ODF cluster, since the distributed, scalable storage model that ODF brings differs considerably from other solutions. In this article, you explore how to do this mapping for troubleshooting scenarios.

I have a lot of ground to cover with this topic, so the content is split into three articles. The first article (this one) sets the stage by defining the environment and the necessary components. Part two covers block storage mapping, and part three handles file storage mapping.

The infrastructure

To keep this article practical, I will not go into details about the architecture of the OpenShift Container Platform (OCP) cluster or other information irrelevant to the activity. Just know that I am using an OCP cluster on AWS with an internal ODF cluster deployed on three separate workers in different availability zones (AZs), each with a 500GB device set (1.5TB total). These are the versions that both OCP and ODF are using:

[alexon@bastion ~]$ oc version
Client Version: 4.7.4
Server Version: 4.7.11
Kubernetes Version: v1.20.0+75370d3

[alexon@bastion ~]$ oc -n openshift-storage get csv

NAME                             DISPLAY                            VERSION   REPLACES                                       PHASE
elasticsearch-operator.5.0.3-6   OpenShift Elasticsearch Operator   5.0.3-6   elasticsearch-operator.4.6.0-202103010126.p0   Succeeded
ocs-operator.v4.7.0              OpenShift Container Storage        4.7.0     ocs-operator.v4.6.4                            Succeeded

[alexon@bastion ~]$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=

NAME                           STATUS   ROLES    AGE     VERSION
ip-10-0-143-192.ec2.internal   Ready    worker   6d23h   v1.20.0+75370d3
ip-10-0-154-20.ec2.internal    Ready    worker   6d23h   v1.20.0+75370d3
ip-10-0-171-63.ec2.internal    Ready    worker   6d23h   v1.20.0+75370d3

Now, to be able to do the necessary analysis on your ODF block and file backend storage, which in this case is Ceph, you need a specific tool: the Rook toolbox, a container with common tools used for Rook debugging and testing. It is available in the official Rook GitHub repository, where you can read more about it.

Using the toolbox

The first task is to run the toolbox pod in interactive mode using the toolbox spec, which is available at the link provided above or for direct download from the Rook repository.
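
If you prefer to grab the spec from the command line, something like the following works (the path below reflects the Rook repository layout for the Rook 1.5 era that this ODF version is based on; verify it against the repository before relying on it):

[alexon@bastion ~]$ curl -o toolbox.yml https://raw.githubusercontent.com/rook/rook/release-1.5/cluster/examples/kubernetes/ceph/toolbox.yaml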

With the spec saved as a YAML file, launch the rook-ceph-tools pod:

[alexon@bastion ~]$ oc apply -f toolbox.yml -n openshift-storage
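
If you want to confirm the toolbox came up before proceeding, you can watch the rollout. This assumes the spec creates a Deployment named rook-ceph-tools, as the upstream example does:

[alexon@bastion ~]$ oc -n openshift-storage rollout status deploy/rook-ceph-tools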

As you can see below, my ODF stack is running smoothly in my openshift-storage project. I have the appropriate storage classes configured, with ocs-storagecluster-ceph-rbd being the default SC. Also, I already have the rook-ceph-tools pod deployed and running:

[alexon@bastion ~]$ oc project openshift-storage
Now using project "openshift-storage" on server "https://api.example.com:6443":
[alexon@bastion ~]$ oc get pods

NAME                                                              READY   STATUS    RESTARTS   AGE
csi-cephfsplugin-c86r8                                            3/3     Running   0          3h13m
csi-cephfsplugin-lk8cq                                            3/3     Running   0          3h14m
csi-cephfsplugin-pqhrh                                            3/3     Running   0          3h13m
csi-cephfsplugin-provisioner-6878df594-8547z                      6/6     Running   0          3h14m
csi-cephfsplugin-provisioner-6878df594-bl7hn                      6/6     Running   0          3h14m
csi-rbdplugin-ff4hz                                               3/3     Running   0          3h13m
csi-rbdplugin-m2cf7                                               3/3     Running   0          3h13m
csi-rbdplugin-provisioner-85f54d8949-h47td                        6/6     Running   0          3h14m
csi-rbdplugin-provisioner-85f54d8949-mstpp                        6/6     Running   0          3h14m
csi-rbdplugin-t8tpg                                               3/3     Running   0          3h14m
noobaa-core-0                                                     1/1     Running   0          3h11m
noobaa-db-pg-0                                                    1/1     Running   0          3h14m
noobaa-endpoint-75644c884-rv7zn                                   1/1     Running   0          3h10m
noobaa-operator-5b4ccb78d6-lz66m                                  1/1     Running   0          3h14m
ocs-metrics-exporter-db8d78475-ch2mk                              1/1     Running   0          3h14m
ocs-operator-776b95f74b-mzmwq                                     1/1     Running   0          3h14m
rook-ceph-crashcollector-ip-10-0-143-192-59fc4bff5b-22sv4         1/1     Running   0          3h12m
rook-ceph-crashcollector-ip-10-0-154-20-65d656dbb4-4q6xk          1/1     Running   0          3h13m
rook-ceph-crashcollector-ip-10-0-171-63-74f8554574-f99br          1/1     Running   0          3h13m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-b89f56667z5fj   2/2     Running   0          3h12m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-558f8bbct2lrv   2/2     Running   0          3h12m
rook-ceph-mgr-a-6d54bfd95d-7xdwp                                  2/2     Running   0          3h11m
rook-ceph-mon-a-67fc544d5f-f7ghz                                  2/2     Running   0          3h13m
rook-ceph-mon-b-6c6b9565d7-cbgqz                                  2/2     Running   0          3h11m
rook-ceph-mon-c-6c44cb9765-9h828                                  2/2     Running   0          3h13m
rook-ceph-operator-58c6f9696b-b2qnm                               1/1     Running   0          3h14m
rook-ceph-osd-0-66cfd8cd5f-dgvpp                                  2/2     Running   0          3h10m
rook-ceph-osd-1-5c75dbffdd-qg8gs                                  2/2     Running   0          3h9m
rook-ceph-osd-2-7c555c7578-jq8sz                                  2/2     Running   0          3h8m
rook-ceph-tools-78cdfd976c-b6wk7                                  1/1     Running   0          179m

[alexon@bastion ~]$ oc get sc

NAME                                    PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                                     kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   6d22h
gp2-csi                                 ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   6d22h
ocs-storagecluster-ceph-rbd (default)   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   6d2h
ocs-storagecluster-cephfs               openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   6d2h
openshift-storage.noobaa.io             openshift-storage.noobaa.io/obc         Delete          Immediate              false                  6d2h

Get into your toolbox pod to gather more information about your storage cluster:

[alexon@bastion ~]$ toolbox=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name)
[alexon@bastion ~]$ oc rsh -n openshift-storage $toolbox
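
As an aside, if you only need a single command rather than an interactive shell, you can run it through oc exec instead; the $toolbox variable already holds the pod/<name> form that exec accepts:

[alexon@bastion ~]$ oc -n openshift-storage exec $toolbox -- ceph health
HEALTH_OK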

Run the familiar Ceph administration commands to check some information about your ODF cluster:

sh-4.4$ ceph -s

  cluster:
    id: ef49037e-1257-462c-b8da-053f6a9ce9b2
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 3h)
    mgr: a(active, since 3h)
    mds: ocs-storagecluster-cephfilesystem:1 {0=ocs-storagecluster-cephfilesystem-a=up:active} 1 up:standby-replay
    osd: 3 osds: 3 up (since 3h), 3 in (since 6d)
 
  task status:
    scrub status:
        mds.ocs-storagecluster-cephfilesystem-a: idle
        mds.ocs-storagecluster-cephfilesystem-b: idle
 
  data:
    pools: 3 pools, 96 pgs
    objects: 22.28k objects, 85 GiB
    usage: 254 GiB used, 1.3 TiB / 1.5 TiB avail
    pgs: 96 active+clean
 
  io:
    client: 853 B/s rd, 2.1 MiB/s wr, 1 op/s rd, 171 op/s wr
 
sh-4.4$ ceph df

RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    ssd       1.5 TiB     1.3 TiB     251 GiB      254 GiB         16.54
    TOTAL     1.5 TiB     1.3 TiB     251 GiB      254 GiB         16.54

POOLS:
    POOL                                          ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    ocs-storagecluster-cephblockpool               1     84 GiB      22.25k      251 GiB     19.27       351 GiB
    ocs-storagecluster-cephfilesystem-metadata     2     1.4 MiB         25      4.2 MiB         0       351 GiB
    ocs-storagecluster-cephfilesystem-data0        3     0 B              0      0 B             0       351 GiB

sh-4.4$ ceph osd status

+----+------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id |             host             |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ip-10-0-171-63.ec2.internal  | 84.6G |  427G |   87   |   947k  |    0   |     0   | exists,up |
| 1  | ip-10-0-143-192.ec2.internal | 84.6G |  427G |   61   |   510k  |    0   |     0   | exists,up |
| 2  | ip-10-0-154-20.ec2.internal  | 84.6G |  427G |   96   |  1072k  |    2   |   106   | exists,up |
+----+------------------------------+-------+-------+--------+---------+--------+---------+-----------+

sh-4.4$ ceph osd tree

ID  CLASS  WEIGHT   TYPE NAME                                          STATUS  REWEIGHT  PRI-AFF
 -1         1.50000  root default
 -5         1.50000      region us-east-1
 -4         0.50000          zone us-east-1a
 -3         0.50000              host ocs-deviceset-gp2-csi-1-data-085b8h
  1   ssd   0.50000                  osd.1                              up    1.00000   1.00000
-10         0.50000          zone us-east-1b
 -9         0.50000              host ocs-deviceset-gp2-csi-2-data-0n9lkb
  2   ssd   0.50000                  osd.2                              up    1.00000   1.00000
-14         0.50000          zone us-east-1c
-13         0.50000              host ocs-deviceset-gp2-csi-0-data-0gvt22
  0   ssd   0.50000                  osd.0                              up    1.00000   1.00000

sh-4.4$ rados df

POOL_NAME                                    USED      OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD      WR_OPS    WR       USED COMPR  UNDER COMPR
ocs-storagecluster-cephblockpool             251 GiB     22266       0   66798                   0        0         0  475905  34 GiB  37496563  634 GiB  0 B         0 B
ocs-storagecluster-cephfilesystem-data0      0 B             0       0       0                   0        0         0      30  14 KiB        42  24 KiB   0 B         0 B
ocs-storagecluster-cephfilesystem-metadata   4.2 MiB        25       0      75                   0        0         0  142734  78 MiB       706  1.7 MiB  0 B         0 B

total_objects    22291
total_used       254 GiB
total_avail      1.3 TiB
total_space      1.5 TiB

sh-4.4$ ceph fs status

ocs-storagecluster-cephfilesystem - 0 clients
=================================
+------+----------------+-------------------------------------+---------------+-------+-------+
| Rank |     State      |                 MDS                 |    Activity   |  dns  |  inos |
+------+----------------+-------------------------------------+---------------+-------+-------+
|  0   |     active     | ocs-storagecluster-cephfilesystem-a | Reqs:    0 /s |   59  |   32  |
| 0-s  | standby-replay | ocs-storagecluster-cephfilesystem-b | Evts:    0 /s |   68  |   24  |
+------+----------------+-------------------------------------+---------------+-------+-------+
+--------------------------------------------+----------+-------+-------+
|                    Pool                    |   type   |  used | avail |
+--------------------------------------------+----------+-------+-------+
| ocs-storagecluster-cephfilesystem-metadata | metadata | 4260k |  350G |
|  ocs-storagecluster-cephfilesystem-data0   |   data   |    0  |  350G |
+--------------------------------------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
+-------------+
MDS version: ceph version 14.2.11-147.el8cp (1f54d52f20d93c1b91f1ec6af4c67a4b81402800) nautilus (stable)

sh-4.4$ ceph fs volume ls

[
    {
        "name": "ocs-storagecluster-cephfilesystem"
    }
]

As you can see above, the ocs-storagecluster-cephblockpool and the ocs-storagecluster-cephfilesystem-data0 pools are created by default. The first is used by RBD for block storage, and the second is used by CephFS for file storage. Keep this in mind, as these are the pools that you'll use in your investigations.
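
As a quick preview of the kind of digging the next two articles will do, the toolbox can already list the raw objects sitting in those pools. For example, here is a peek at the first few objects in the block pool (the object names will differ on your cluster):

sh-4.4$ rados -p ocs-storagecluster-cephblockpool ls | head -5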

[ Getting started with containers? Check out this free course. Deploying containerized applications: A technical overview. ]

Wrap up

Here in part one, I've defined the issue of mapping application objects within an ODF cluster. I covered the Rook toolbox container and the common debugging and testing tools contained within it. I also established the infrastructure you'll use in the next two articles of the series. From here, go to article two for coverage of mapping block storage within the ODF cluster. After that, read part three for file storage mapping and a wrap-up of the project.
