Greetings from Red Hat's storage architect team! With this post, we're kicking off a series in which we'll demonstrate a step-by-step deployment of a stateful application on OpenShift Container Platform (OCP) using OpenShift Container Storage (OCS). This series, based on the 3.11 version of both OCP and OCS, will not cover how to install OCP or OCS.
We'll start with creating one MySQL pod (using OCP StatefulSets and OCS), and then add the application that uses the MySQL database on persistent storage. As we progress in this series, we’ll show more advanced topics, such as OCP multi-tenant scenarios, MySQL performance on OCS, failover scenarios, and more.
OpenShift on AWS test environment
All the posts in this series use an OCP-on-AWS setup that includes 8 EC2 instances deployed as 1 master node, 1 infra node, and 6 worker nodes that also run OCS gluster and heketi pods. The 6 worker nodes are both the storage provider (OCS) and the persistent storage consumers (MySQL). The OCS worker nodes are of instance type m5.2xlarge with 8 vCPUs and 32 GB of memory; each node has 3x100GB gp2 volumes attached for OCP and a single 1TB gp2 volume for the OCS storage cluster. The AWS region us-west-2 has Availability Zones (AZs) us-west-2a, us-west-2b, and us-west-2c, and the 6 worker nodes are spread across the 3 AZs, two nodes in each AZ. This means the OCS storage cluster is "stretched" across these 3 AZs.
MySQL deployment with StatefulSets
This post revolves around deploying a MySQL pod using OCS and StatefulSets (STS), so let's get started.
Stateful applications need persistent volumes (PVs) to support failover scenarios: when a pod moves to a different worker node, the data it uses must survive the move.
STS became stable in Kubernetes 1.9 and have a few advantages over “simple” deployments (see the example after this list):
- Pod creation can be ordered when scaling up (and reverse-ordered when scaling down). This is especially important in master/slave scenarios and/or distributed databases.
- Pods follow a simple, predictable naming convention and retain their names when migrating from one node to another after a failover.
- The persistent volume claims (PVCs) are not deleted when the STS is deleted, keeping the data intact for future use.
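As a quick illustration of the ordering and naming behavior, here is a representative run of scaling an STS named mysql-ocs (the one we create later in this post) to three replicas; the pod ages and exact output will differ in your environment. Note that with the default podManagementPolicy of OrderedReady, mysql-ocs-1 would start only after mysql-ocs-0 is ready, whereas our STS below uses Parallel:

oc scale statefulset mysql-ocs --replicas=3
statefulset.apps/mysql-ocs scaled

oc get pods -l app=mysql-ocs
NAME          READY     STATUS    RESTARTS   AGE
mysql-ocs-0   1/1       Running   0          5m
mysql-ocs-1   1/1       Running   0          25s
mysql-ocs-2   1/1       Running   0          25s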
The first step in creating a PVC is making sure we have a storage class we can use to dynamically create the volume in OCP:
oc get sc
NAME                PROVISIONER               AGE
glusterfs-storage   kubernetes.io/glusterfs   9d
gp2 (default)       kubernetes.io/aws-ebs     21d
gp2-xfs             kubernetes.io/aws-ebs     18d
As you can see, we have 3 storage classes in our OCP cluster. For the MySQL deployment, we will be using the glusterfs-storage class, which is created during the OCS installation when OCP is deployed using the Ansible playbooks and the OCS-specific inventory file options. Because this class is configured in our STS definition file, every claim made for storage will be served by glusterfs-storage. If you want to see the content of any of the storageclass (SC) resources, run “oc get sc <storageclass_name> -o yaml”.
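For reference, a glusterfs-storage class created by the OCS playbooks looks roughly like the following (abridged sketch; the resturl, secret name, and namespace shown here are examples and depend on your installation):

oc get sc glusterfs-storage -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
parameters:
  resturl: http://heketi-storage.app-storage.svc:8080
  restuser: admin
  secretName: heketi-storage-admin-secret
  secretNamespace: app-storage
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete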
Because we are going to use STS, one of the requirements is to create a headless service for our MySQL application. We’re going to use the following yaml file:
cat headless-service-mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-ocs
  labels:
    app: mysql-ocs
spec:
  ports:
  - port: 3306
    name: mysql-ocs
  clusterIP: None
  selector:
    app: mysql-ocs
And then create the service.
oc create -f headless-service-mysql.yaml
service/mysql-ocs created

oc get svc
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
mysql-ocs   ClusterIP   None         <none>        3306/TCP   6s
Now that we have a storageclass and a headless service, let's look at our STS yaml. This is a simple example, and as we progress in this series, we'll update and add to this file.
Note: It is neither secure nor recommended to store plain-text passwords in yaml files; use secrets instead. To keep this example simple, we’ll use plain text.
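If you do want to use secrets, a minimal sketch looks like this (the secret name mysql-ocs-secret is our own choice for illustration): create the secret, then replace the hard-coded values in the STS env section with secretKeyRef references.

oc create secret generic mysql-ocs-secret \
  --from-literal=MYSQL_ROOT_PASSWORD=password \
  --from-literal=MYSQL_PASSWORD=secret

Then, in the STS yaml, each hard-coded value becomes a reference:

env:
- name: MYSQL_ROOT_PASSWORD
  valueFrom:
    secretKeyRef:
      name: mysql-ocs-secret   # our hypothetical secret from above
      key: MYSQL_ROOT_PASSWORD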
cat mysql-sts.yaml
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-ocs
spec:
  selector:
    matchLabels:
      app: mysql-ocs
  serviceName: "mysql-ocs"
  podManagementPolicy: Parallel
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-ocs
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql-ocs
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: wordpress
        - name: MYSQL_USER
          value: admin
        - name: MYSQL_PASSWORD
          value: secret
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-ocs-data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-ocs-data
    spec:
      storageClassName: glusterfs-storage
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 8Gi
Most of the container definition is similar to that of a “DeploymentConfig” type. We're using the headless service “mysql-ocs” that we previously created, and we specified MySQL 5.7 as the image. The interesting part is at the bottom of the preceding file: the “volumeClaimTemplates” definition is how we create a persistent volume (PV), claim it (PVC), and attach it to the newly created MySQL pod. As you can also see, we're using the storage class from our OCP/OCS installation (glusterfs-storage), and we request an 8Gi volume in “ReadWriteOnce” mode.
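One detail worth calling out (standard StatefulSet behavior, not specific to OCS): each replica gets its own PVC, named <volumeClaimTemplate name>-<STS name>-<ordinal>. With our single replica, that yields mysql-ocs-data-mysql-ocs-0, which is exactly the name you'll see in the PVC listing shortly.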
To create our STS, we run the following command:
oc create -f mysql-sts.yaml
statefulset.apps/mysql-ocs created
Deployment validation
Let's check that the pod is running. Please note that, depending on the hardware used, the MySQL container image download speed, the size of the requested volume, and the availability of existing PVCs, this action can take from a few seconds to around a minute.
oc get pods
NAME          READY     STATUS    RESTARTS   AGE
mysql-ocs-0   1/1       Running   0          31s
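Alternatively, you can watch the pod move through Pending, ContainerCreating, and Running instead of polling (press Ctrl+C to stop watching):

oc get pods -l app=mysql-ocs -w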
Let's look at the PVC we created with this STS.
oc get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-ocs-data-mysql-ocs-0   Bound     pvc-cb25b2c0-3a12-11e9-96fc-02e7350e98d2   8Gi        RWO            glusterfs-storage   1m
And the PV that is associated with the PVC:
oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                             STORAGECLASS        REASON    AGE
pvc-cb25b2c0-3a12-11e9-96fc-02e7350e98d2   8Gi        RWO            Delete           Bound     sagy/mysql-ocs-data-mysql-ocs-0   glusterfs-storage             3m
If you want to see the connection/relationship between Kubernetes, gluster, heketi, and our persistent storage volume, we can run a few commands to show it. We know the PV name from our “oc get pvc” we ran previously, so we’ll use “oc describe” and search for Path.
oc describe pv pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2|grep Path
Path: vol_82f64c461e4796213160f30519f318f8
In our case, the volume name is vol_82f64c461e4796213160f30519f318f8, and this is the same volume name in gluster. If we log in to the container inside the MySQL pod, we can see the same volume and the directory it is mounted on.
oc rsh mysql-ocs-0
$ df -h|grep vol_82f64c461e4796213160f30519f318f8
172.16.26.120:vol_82f64c461e4796213160f30519f318f8 8.0G 325M 7.7G 4%
/var/lib/mysql
We can see that the volume is mounted on /var/lib/mysql (as we specified in our STS yaml file) and that its size is 8.0G.
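As a further sanity check, you can confirm that MySQL itself is up and that the wordpress database from our yaml file was created. Using the plain-text credentials above, a run along these lines is expected (output is representative):

oc rsh mysql-ocs-0
$ mysql -u root -ppassword -e "show databases;"
Database
information_schema
mysql
performance_schema
sys
wordpress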
If we want to check heketi for more info, we must first make sure that the heketi-client package is installed on the server we're running from. The following file must be sourced to export the environment variables before using heketi-cli commands.
cat heketi-export-app-storage
export HEKETI_POD=$(oc get pods -l glusterfs=heketi-storage-pod -n app-storage -o jsonpath='{.items[0].metadata.name}')
export HEKETI_CLI_SERVER=http://$(oc get route/heketi-storage -n app-storage -o jsonpath='{.spec.host}')
export HEKETI_CLI_USER=admin
export HEKETI_CLI_KEY=$(oc get pod/$HEKETI_POD -n app-storage -o jsonpath='{.spec.containers[0].env[?(@.name=="HEKETI_ADMIN_KEY")].value}')
export HEKETI_ADMIN_KEY_SECRET=$(echo -n ${HEKETI_CLI_KEY} | base64)
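After sourcing the file, a quick way to verify that heketi-cli can reach the heketi service is to list the clusters; output along these lines is expected (the cluster Id will match the one in the volume info further below):

source heketi-export-app-storage
heketi-cli cluster list
Clusters:
Id:f05418936dc63638041af2831914c37d [file][block]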
The heketi volume name is the gluster volume name without the “vol_” prefix, which can be extracted using the following command:
oc describe pv pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2|grep Path|awk '{print $2}'|awk -F 'vol_' '{print $2}'
82f64c461e4796213160f30519f318f8
And now, with heketi-cli installed and the environment variables sourced, the heketi-cli command can be used to get more information about this gluster volume.
heketi-cli volume info 82f64c461e4796213160f30519f318f8
Name: vol_82f64c461e4796213160f30519f318f8
Size: 8
Volume Id: 82f64c461e4796213160f30519f318f8
Cluster Id: f05418936dc63638041af2831914c37d
Mount: 172.16.26.120:vol_82f64c461e4796213160f30519f318f8
Mount Options: backup-volfile-servers=172.16.53.212,172.16.39.190,172.16.56.45,172.16.27.161,172.16.44.7
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distributed+Replica: 3
Snapshot Factor: 1.00
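If you want to go one layer deeper and confirm the volume from the gluster side, you can rsh into one of the gluster pods and run gluster volume info. This sketch assumes the OCS namespace is app-storage (as in the export file above) and that the gluster pods carry the glusterfs=storage-pod label set by the OCS playbooks:

oc rsh -n app-storage $(oc get pods -n app-storage -l glusterfs=storage-pod -o jsonpath='{.items[0].metadata.name}')
sh-4.2# gluster volume info vol_82f64c461e4796213160f30519f318f8

This prints the volume type (Replicate), the replica count, and the three bricks backing the volume, matching the Distributed+Replica: 3 durability shown by heketi.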
Deleting StatefulSet and persistent storage
So far, we've seen how to create a MySQL pod using STS and OCS storage, but what happens when we want to delete a pod or the storage? First, let's look at our PVC.
oc get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-ocs-data-mysql-ocs-0   Bound     pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2   8Gi        RWO            glusterfs-storage   20h
Now let's delete our STS for MySQL.
oc delete -f mysql-sts.yaml
statefulset.apps "mysql-ocs" deleted
And let's check the PVC again after MySQL STS is deleted.
oc get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-ocs-data-mysql-ocs-0   Bound     pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2   8Gi        RWO            glusterfs-storage   20h
As you can see, the PVC remains with the data intact and will be reused if we redeploy the same STS.
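A representative way to see this for yourself: recreate the STS, note that the new mysql-ocs-0 pod binds to the existing PVC (its age is unchanged) instead of provisioning a new volume, and then delete the STS again before continuing:

oc create -f mysql-sts.yaml
statefulset.apps/mysql-ocs created

oc get pvc
NAME                         STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
mysql-ocs-data-mysql-ocs-0   Bound     pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2   8Gi        RWO            glusterfs-storage   20h

oc delete -f mysql-sts.yaml
statefulset.apps "mysql-ocs" deleted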
If you want to delete the PVC, run the following command:
oc delete pvc mysql-ocs-data-mysql-ocs-0
persistentvolumeclaim "mysql-ocs-data-mysql-ocs-0" deleted
You can monitor the PV and watch how it gets deleted as well (the PV is first released):
oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                             STORAGECLASS        REASON    AGE
pvc-d1fc687c-3a14-11e9-96fc-02e7350e98d2   8Gi        RWO            Delete           Released   sagy/mysql-ocs-data-mysql-ocs-0   glusterfs-storage             20h
And if we query again, the PVC is gone; because the storage class reclaim policy is Delete, the PV (and its backing gluster volume) is deleted as well.
oc get pvc
No resources found.
Conclusion
In this post, we've shown the first step toward running an application that needs persistent data on OCP. We used the glusterfs-storage storageclass provided by OCS to create a PVC and attach the volume to a MySQL pod, and we automated the process using an STS. We also explained the relationship between OCS, heketi, the PV, the PVC, and the MySQL pod.
In our next post we'll show how to connect a WordPress pod to our database pod.
About the author
Sagy Volkov is a former performance engineer at ScaleIO, where he initiated the performance engineering group and the ScaleIO enterprise advocates group, and architected the ScaleIO storage appliance, reporting to the CTO/founder of ScaleIO. He is now with Red Hat as a storage performance instigator, concentrating on application performance (mainly databases and CI/CD pipelines) and application resiliency on Rook/Ceph.
He has spoken previously at Cloud Native Storage Day (CNS), DevConf 2020, EMC World, and the Red Hat booth at KubeCon.