With Red Hat Enterprise Linux (RHEL), it’s now possible to configure and deploy a capacity-optimized NFS server: shared storage that costs less and stores more. Using the new Virtual Data Optimizer (VDO) module introduced in Red Hat Enterprise Linux 7.5, you can provide optimized shared storage for backups, virtual desktops, virtual servers, and containers.
This post demonstrates how to create a network-attached storage (NAS) server that provides advanced capacity optimization services by combining VDO with existing Red Hat features such as thin provisioning and snapshots. Combined with the high-performance NFS server implementation provided by Red Hat, you get a powerful solution that can be deployed on-premises or in the cloud. For this example I’m using an industry-standard server with 10Gb Ethernet connectivity and SSD storage. In later articles I’ll use the NFS server to store various types of data and look at the results in terms of efficiency and performance.
To set up my capacity-optimized NFS server, I first need to configure a VDO volume on my server. I’ll then use the Logical Volume Manager (LVM) to configure a thinly provisioned pool of storage that offers fast, scalable snapshot capabilities on top of that VDO volume. Finally, I’ll carve that pool into separate file systems that I can export via NFS.
The end result is a layered stack: NFS exports sit on XFS file systems, which sit on thin logical volumes carved from an LVM thin pool, which in turn sits on the VDO volume backed by /dev/md0.
Before I start, I need to make sure that the proper packages are installed on my system:
# yum install kmod-vdo vdo nfs-utils rpcbind snapper
Configuring VDO
Now I can move on to configuring my VDO device. VDO is a device mapper module which adds data reduction capabilities to the Linux block storage stack. VDO uses inline compression and data deduplication techniques to transparently shrink data as it is being written to storage media.
For my configuration, I am going to start by making the following assumptions:
- Underlying block device name: I plan to address my storage through the device /dev/md0.
- Addressable space: I’ve got 25 TB of SSD storage, with 20 TB addressable after RAID protection overhead.
- Presented space: I’ve got 20 TB of addressable space, but I’m anticipating a 4:1 data reduction rate, so I intend to present 80 TB of storage.
- Name of the device for VDO to present: I’m going to name my device vdo0.
With this information, I can determine how I should configure VDO for this environment. For example, the default vdoSlabSize is 2 GB, which is large enough to support up to 16 TB of underlying storage (VDO allows at most 8,192 slabs per volume, so the slab size caps the maximum physical size). Since my storage device is 20 TB, I need to use a vdoSlabSize of 32 GB, which supports up to a maximum of 256 TB of storage.
I also know that I’m going to want to locate duplicate blocks across my 20 TB of physical data. By default, vdo uses a dense index and allocates 250 MiB of RAM, which provides adequate deduplication coverage for up to 256 GB of data. To handle the larger dataset, I must enable sparseIndex, a feature of VDO that consumes 10x the metadata overhead on disk but is able to track 10 times as much data with the same amount of memory.
I’ll also use the indexMem parameter to specify a larger amount of memory for tracking duplicate blocks, allocating 2 GB of RAM to VDO’s deduplication index which, with sparse indexing, will allow it to track up to 20 TB of unique data at a time.
So now I create my VDO device using the vdo create command, specifying the 32 GB slab size and the 80 TB logical size worked out above:
# vdo create --device=/dev/md0 --sparseIndex=enabled --indexMem=2 --vdoSlabSize=32G --vdoLogicalSize=80T --name=vdo0
This command creates a VDO device at /dev/mapper/vdo0.
I can verify that my VDO volume was created correctly using the vdostats command.
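For a quick check of the volume’s size and current space savings, vdostats can print its figures in human-readable units:
# vdostats --human-readable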
You can learn more about VDO configuration here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo
Configuring Thin Volumes for Snapshots
I’m now ready to configure a set of thin logical volumes on top of my VDO device. Each of these will have a file system created on it, which I can then export via NFS. To start, I’ll label my VDO volume as a physical volume (PV) and then create a volume group (VG) from it:
# pvcreate /dev/mapper/vdo0
# vgcreate vg00 /dev/mapper/vdo0
I then verify that my volume group was created correctly using the vgs command.
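Naming the group limits the output to the one I just created:
# vgs vg00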
Now I’ll create a thin pool in the vg00 volume group, which resides above my VDO volume. Using a thin pool allows me to create logical volumes with advanced management capabilities and high-performance snapshots. Because it is important that the thin pool pass discards down to the VDO layer so that space can be freed, I explicitly set the discards=passdown option at creation time. The following command creates a 40 TB thin pool and, on top of it, a thin volume presenting 10 TB of storage:
# lvcreate --discards=passdown -L 40T -T vg00/lvpool0 -V 10T -n lvol0
I then create three additional 10 TB volumes. The pool’s discard behavior was set when the pool was created, so these commands only need the pool name, volume size, and volume name:
# lvcreate -V 10T -T vg00/lvpool0 -n lvol1
# lvcreate -V 10T -T vg00/lvpool0 -n lvol2
# lvcreate -V 10T -T vg00/lvpool0 -n lvol3
I then verify that each volume was created properly using the lvs command.
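Listing the contents of vg00 shows the thin pool and all four thin volumes, along with the percentage of the pool’s data space each one consumes:
# lvs vg00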
Additional information on Thin Volumes can be found here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/LV#thinly_provisioned_volume_creation
Configuring File Systems
Next, I’ll create a file system on each of the logical volumes I’ve just configured. When I create each file system, I’ll use the -K option to skip sending discards at creation time. This is generally recommended for storage above VDO volumes: a new VDO volume already has all of its blocks free, so these discards are unnecessary. (The discard mount option added to /etc/fstab below is what lets VDO reclaim space as the file system frees it.)
# mkfs.xfs -K /dev/vg00/lvol0
# mkfs.xfs -K /dev/vg00/lvol1
# mkfs.xfs -K /dev/vg00/lvol2
# mkfs.xfs -K /dev/vg00/lvol3
At this point, I need to create directories to mount the file systems on:
# mkdir -p /shares/fs0; chmod 755 /shares/fs0
# mkdir -p /shares/fs1; chmod 755 /shares/fs1
# mkdir -p /shares/fs2; chmod 755 /shares/fs2
# mkdir -p /shares/fs3; chmod 755 /shares/fs3
To ensure that each new file system mounts automatically at startup, I add entries to the /etc/fstab file. The x-systemd.requires option ensures that the VDO service has started before each mount is attempted:
/dev/vg00/lvol0 /shares/fs0 xfs defaults,discard,x-systemd.requires=vdo.service 0 0
/dev/vg00/lvol1 /shares/fs1 xfs defaults,discard,x-systemd.requires=vdo.service 0 0
/dev/vg00/lvol2 /shares/fs2 xfs defaults,discard,x-systemd.requires=vdo.service 0 0
/dev/vg00/lvol3 /shares/fs3 xfs defaults,discard,x-systemd.requires=vdo.service 0 0
I then mount the new file systems on those directories:
# mount -a
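A quick df confirms that all four file systems are mounted where I expect them:
# df -h /shares/fs0 /shares/fs1 /shares/fs2 /shares/fs3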
Enabling Snapper Snapshots
Point-in-time snapshots are an important feature of modern NAS devices. We’ll configure snapper to simplify snapshot management on this system:
# snapper -c lvol0 create-config -f "lvm(xfs)" /shares/fs0
# snapper -c lvol1 create-config -f "lvm(xfs)" /shares/fs1
# snapper -c lvol2 create-config -f "lvm(xfs)" /shares/fs2
# snapper -c lvol3 create-config -f "lvm(xfs)" /shares/fs3
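With the configs in place, creating and listing a point-in-time snapshot is a one-liner apiece (the description text here is arbitrary):
# snapper -c lvol0 create --description "manual checkpoint"
# snapper -c lvol0 list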
To learn more about how to take point-in-time snapshots with snapper see: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/storage_administration_guide/#ch-snapper
Exporting the NFS Volumes
At this point, I’m ready to export my file systems via NFS for use by clients. I edit the /etc/exports file and add entries similar to the following. If you’re following along, substitute each client’s address for the 192.168.1.18 used in the example. Synchronous writes (the sync option) should always be used when running virtual machines against an NFS host. If a volume is used for backups or for shared user directories, the async option can provide additional performance.
/shares/fs0 192.168.1.18(rw,sync,no_root_squash,no_subtree_check)
/shares/fs1 192.168.1.18(rw,sync,no_root_squash,no_subtree_check)
/shares/fs2 192.168.1.18(rw,sync,no_root_squash,no_subtree_check)
/shares/fs3 192.168.1.18(rw,sync,no_root_squash,no_subtree_check)
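As a sketch of the async case described above, a backup share (using a hypothetical /shares/backup path) could trade the sync guarantee for speed with an entry like this:
/shares/backup 192.168.1.18(rw,async,no_root_squash,no_subtree_check)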
Finally, I open the NFS port in my server’s firewall and enable and start the services that allow me to share my file systems, with the following commands:
# firewall-cmd --zone=public --add-port=2049/tcp --permanent
# firewall-cmd --reload
# systemctl enable rpcbind; systemctl start rpcbind
# systemctl enable nfs; systemctl start nfs
I verify that NFS is up and running using the systemctl status nfs command.
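To confirm the shares are actually being exported, showmount queries the server’s export list:
# showmount -e localhost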
More information about NFS server configuration can be found here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/storage_administration_guide/#ch-nfs
So now I have a fully functioning, capacity-optimized NFS server. In my next post, I’ll take a look at measuring the space savings and performance of this configuration when using it with Red Hat Virtualization.
About the author
Louis Imershein is a Product Manager at Red Hat focused on Microsoft SQL Server and database workloads. He is responsible for working with Microsoft and Red Hat engineering to ensure that SQL Server performance, management, and security are optimized for Red Hat platforms. For more than 30 years, Louis has worked in technical support, engineering, software architecture, and product management on a wide range of OS, management, security, and storage software projects. Louis joined Red Hat as part of the acquisition of Permabit Technology Corporation, where he was VP of Product.