In the example above we can see that there is the central store and two edge stores (dcn0 and dcn1). Next, we'll import an image into all three stores by passing --stores central,dcn0,dcn1 (the CLI also accepts --all-stores true to import into every configured store).
glance image-create-via-import --disk-format qcow2 --container-format bare --name cirros --uri http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img --import-method web-download --stores central,dcn0,dcn1
Glance will automatically convert the image to RAW, the format Ceph requires for efficient copy-on-write cloning. After the image is imported into Glance we can get its ID.
IMG_ID=$(openstack image show cirros -c id -f value)
We can then check which sites have a copy of our image by running a command like the following:
openstack image show $IMG_ID | grep properties
The properties field in the command's output contains image metadata, including a stores field listing all three stores: central, dcn0, and dcn1.
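As a quick sanity check, the stores list can be pulled out of that properties line with standard text tools. This is a hedged sketch: the sample properties line below is illustrative (a real one carries additional metadata), not captured from a live cloud.

```shell
# Illustrative properties line; a real one includes more key='value' pairs.
props="| properties | os_hash_algo='sha512', stores='central,dcn0,dcn1' |"

# Extract the comma-separated list of stores holding a copy of the image.
stores=$(printf '%s\n' "$props" | grep -o "stores='[^']*'" | cut -d"'" -f2)
echo "$stores"
```

If all three imports succeeded, this prints central,dcn0,dcn1.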
Create a volume from an image and boot it at a DCN site
Using the same image ID from the previous example, create a volume from the image at the dcn0 site. To specify we want the dcn0 site, we use the dcn0 availability zone.
openstack volume create --size 8 --availability-zone dcn0 pet-volume-dcn0 --image $IMG_ID
Once the volume is created, retrieve its ID.
VOL_ID=$(openstack volume show -f value -c id pet-volume-dcn0)
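Before booting from the volume it is worth waiting for it to reach the "available" state. The following is a hedged sketch, not part of the official workflow: volume_status wraps the CLI call, and the retry count and interval are illustrative.

```shell
# Return the current Cinder status of a volume (e.g. creating, available).
volume_status() {
    openstack volume show -f value -c status "$1"
}

# Poll until the volume is available; retry count and interval are arbitrary.
wait_for_volume() {
    for _ in $(seq 1 30); do
        [ "$(volume_status "$1")" = "available" ] && return 0
        sleep 10
    done
    echo "volume $1 did not become available in time" >&2
    return 1
}

# Usage: wait_for_volume "$VOL_ID" && echo "ready to boot"
```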
Create a virtual machine using this volume as its root device by passing the volume ID. This example assumes a flavor, key, security group, and network have already been created. Again, we specify the dcn0 availability zone so that the instance boots at dcn0.
openstack server create --flavor tiny --key-name dcn0-key --network dcn0-network --security-group basic --availability-zone dcn0 --volume $VOL_ID pet-server-dcn0
From one of the DistributedComputeHCI nodes at the dcn0 site, we can run an RBD command inside the Ceph monitor container to query the volumes pool directly. This is for demonstration purposes only; a regular user would not have this access.
sudo podman exec ceph-mon-$HOSTNAME rbd --cluster dcn0 -p volumes ls -l
NAME                                         SIZE   PARENT                                            FMT  PROT  LOCK
volume-28c6fc32-047b-4306-ad2d-de2be02716b7  8 GiB  images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap    2        excl
The above example shows that the volume was CoW (copy-on-write) booted from the images pool. The VM will therefore boot quickly, because only changed data needs to be copied; the unchanging data is merely referenced.
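The CoW relationship is visible in the PARENT column of the listing above, which records the Glance image snapshot the volume was cloned from. The sketch below is illustrative only: it parses the sample line shown above rather than querying a live cluster.

```shell
# Sample row mirroring the rbd ls -l output shown above.
line="volume-28c6fc32-047b-4306-ad2d-de2be02716b7 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap 2 excl"

# Field 4 is the parent: the Glance image snapshot backing the volume.
parent=$(printf '%s\n' "$line" | awk '{print $4}')
# Strip everything after the first "/" to get the pool the parent lives in.
pool=${parent%%/*}

echo "$parent"   # the image snapshot the volume was cloned from
echo "$pool"     # the pool it lives in: images
```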
Copy a snapshot of the instance to another site
Create a new image on the dcn0 site that contains a snapshot of the instance created in the previous section.
openstack server image create --name cirros-snapshot pet-server-dcn0
Get the image ID of the new snapshot.
IMAGE_ID=$(openstack image show cirros-snapshot -f value -c id)
Copy the image from the dcn0 site to the central site.
glance image-import $IMAGE_ID --stores central --import-method copy-image
The new image at the central site may now be copied to other sites, used to create new volumes, booted as new instances and snapshotted.
Red Hat OpenStack Platform DCN also supports encrypted volumes with Barbican, so volumes can be encrypted at the edge with per-tenant keys while the secrets remain securely stored at the central site (Red Hat recommends using a Hardware Security Module).
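As a rough illustration of how that might look from the CLI, an encrypted volume type can be defined and then used when creating a volume at an edge site. This is a sketch, not the documented DCN procedure: the type name LUKS, the cipher, and the key size are illustrative choices, and the exact encryption provider value can vary by release.

```shell
# Define an encrypted volume type backed by LUKS; cipher/key size are
# illustrative choices, not mandated values.
openstack volume type create \
    --encryption-provider luks \
    --encryption-cipher aes-xts-plain64 \
    --encryption-key-size 256 \
    --encryption-control-location front-end \
    LUKS

# Create an encrypted volume at the dcn0 site using that type; Barbican
# stores the per-tenant key at the central site.
openstack volume create --size 8 --type LUKS \
    --availability-zone dcn0 encrypted-volume-dcn0
```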
If an image on a particular site is not needed anymore, we can delete it while keeping the other copies. To delete the snapshot taken on dcn0 while keeping its copy in central, we can use the following command:
glance stores-delete $IMAGE_ID --stores dcn0
This concludes our blog series on Red Hat OpenStack Distributed Compute Nodes. You should now have a better understanding of the key edge design considerations and how RH-OSP implements them, as well as an overview of the deployment process and day-2 operations.
Because what matters is the end-user experience, we concluded with the typical steps users would take to manage their workloads at the edge. For an additional walkthrough of design considerations and a best-practice approach to building an edge architecture, check out the recording of our webinar, "Your checklist for a successful edge deployment."