
Linux sysadmins want to know: Where did my disk space go?

A discussion of storage space measurement units and reserved space.
[Image: Where did my space go? Photo by Pixabay from Pexels]

The hardware disparity: You just bought a nice new 1 TB drive, but your Linux disk tools report it as roughly 931 GB. Hey, where did the rest of it go?

Nowhere, and depending on your point of view, it either is still there or never was! This difference comes from disk manufacturers and operating system programmers using different units of measurement. Disk manufacturers use true metric units, so 1 TB = 1,000,000,000,000 bytes. In the computer world, however, we don't operate in powers of 10; we operate in powers of two, where a "gigabyte" is 2^30 = 1,073,741,824 bytes. Divide the manufacturer's 10^12 bytes by 2^30 and you get about 931 of those binary gigabytes. That difference in units is the gap between the 1 TB on the box and the roughly 931 GB your tools report.

Wow, that is incredibly annoying! It is also not really standards-based, which is something the open-source community is all about, right? Essentially, the computer industry has long abused the fact that 1024 is darn close to 1000. However, as files, memory, drives, and packets get larger and larger, the discrepancy compounds and becomes more and more noticeable.

So what is to be done? International standards bodies, such as the International Electrotechnical Commission (IEC) and the International Organization for Standardization (ISO), have defined units that distinguish true metric measurements from the binary approximation we have been using in computing. Enter the GiB and TiB (the other smaller and larger units follow the same nomenclature): 1 TiB = 1024 GiB. You can still buy your 1 TB drive, but disk utilities should report it as roughly 931 GiB, making it clear that the utility measures with factors of 1024, not the metric factor of 1000.
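You can reproduce that arithmetic with plain shell integer math. This sketch assumes the manufacturer's 1 TB means exactly 10^12 bytes:

```shell
# A manufacturer's 1 TB is 10^12 bytes.
bytes=1000000000000

# Divide by 1024 three times (bytes -> KiB -> MiB -> GiB).
echo $((bytes / 1024 / 1024 / 1024))    # prints 931
```

Integer division truncates the fraction, but the whole-number result is the figure most disk utilities round to.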

Modern versions of tools have already started using this method. For example, see my fdisk utility output from a Red Hat Enterprise Linux 8.2 system below:

Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
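Those fdisk figures are self-consistent, which you can verify with the same shell arithmetic (20 GiB expanded to bytes, then bytes divided by the 512-byte sector size):

```shell
# 20 GiB expressed in bytes: 20 * 1024^3.
echo $((20 * 1024 * 1024 * 1024))    # prints 21474836480

# Number of 512-byte sectors in that many bytes.
echo $((21474836480 / 512))          # prints 41943040
```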

There are detailed articles on binary prefixes if you'd like even more background.

The filesystem discrepancy

The content below is based on the traditional extended filesystem family deployed on many Linux distributions, specifically ext4. Red Hat Enterprise Linux, however, now uses XFS as its default filesystem. Toward the end of this section, there is a discussion of how these topics affect XFS. For now, let's focus on ext4.

The situation: A user reports that the filesystem is full, but when you look at the disk free (df) output, you see the following:

[root@somehost ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb1       991M  924M     0 100% /mnt

From the above output, you notice that Size is reported as 991M while Used is only 924M, so the filesystem is clearly not full. Yet when a user runs a command that consumes more disk space, such as the dd command, they get the following message:

[user@somehost mnt]$ dd if=/dev/zero of=bigfile2
dd: writing to 'bigfile2': No space left on device

However, if they use touch to create a file, they see this:

[user@somehost mnt]$ touch file3

[user@somehost mnt]$ ls
bigfile  file3  lost+found

Additionally, if root creates a file by using dd, it works without a problem, as shown below:

[root@somehost mnt]# dd if=/dev/zero of=root-file count=100
100+0 records in
100+0 records out
51200 bytes (51 kB, 50 KiB) copied, 0.000748385 s, 68.4 MB/s

[root@somehost mnt]# ls
bigfile  file3  lost+found  root-file

Clearly, root can create files with dd, and even the regular user can still create empty files with touch! What is going on?

The reason touch works is that the filesystem is out of data blocks for storing file contents, but it still has plenty of free inodes, the metadata structures that describe each file. You can confirm that with another option to the df command:

[root@somehost mnt]# df -i
Filesystem     Inodes IUsed  IFree IUse% Mounted on
/dev/vdb1       65536    14  65522    1% /mnt

The touch command creates an empty file, which means it consumes an inode to store the file metadata but does not consume any associated data blocks.
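You can see this for yourself with stat: an empty file has a size of zero and, on common Linux filesystems, zero allocated data blocks. The file name empty-file below is just an example:

```shell
# Create an empty file, then show its size and allocated block count.
touch empty-file
stat -c 'size=%s blocks=%b' empty-file    # prints size=0 blocks=0
rm empty-file
```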

The root user's file creation works because the ext4 filesystem holds back some reserved space that non-privileged users cannot access. Only root, or root-owned processes, can write files that consume this disk space. For ext2, ext3, and ext4 filesystems, you can inspect this reservation, stored in the filesystem superblock, by using the tune2fs command. In the output below, I've removed most of the data reported by tune2fs so that the total and reserved space stand out:

[root@somehost mnt]# tune2fs -l /dev/vdb1
tune2fs 1.45.4 (23-Sep-2019)
Filesystem volume name:   <none>
Last mounted on:          /mnt

<<< OUTPUT ABRIDGED >>>

Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              65536
Block count:              261888
Reserved block count:     13094
Free blocks:              253029
Free inodes:              65525
First block:              0
Block size:               4096

<<< OUTPUT ABRIDGED >>>

In the above output, notice the Reserved block count parameter. According to the mkfs.ext4 man page, this defaults to five percent of the filesystem space. On this filesystem, it is roughly 51 MiB of space.

Reserved block count x Block size = reserved space in bytes
13094 x 4096 = 53633024 bytes

You can then convert bytes up to KiB or MiB by dividing by 1024 at each step, upsizing the units as you see fit. Here is an example:

53633024 Bytes / 1024 = 52376 KiB
52376 KiB / 1024 = 51.15 MiB
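The same conversion in shell arithmetic, using the values from the tune2fs output above:

```shell
# Reserved block count times block size gives reserved bytes.
echo $((13094 * 4096))       # prints 53633024

# Divide by 1024 to convert bytes to KiB.
echo $((53633024 / 1024))    # prints 52376
```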

The filesystem as a whole is roughly 1 GiB, so a reserved block count of five percent puts the reserved block space at roughly 50 MiB.
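You can also sanity-check the five percent default against the tune2fs figures: five percent of the 261888-block count is, with integer division, exactly the reported reserved block count.

```shell
# Five percent of the total block count (integer arithmetic).
echo $((261888 * 5 / 100))    # prints 13094
```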

This set of reserved blocks is what root's dd wrote into in the example above, while the regular user remained unable to store files on the apparently "full" filesystem. Administrators can use tune2fs (its -m option sets the reserved percentage) to increase or decrease the reservation on an existing filesystem. The reservation is tracked as a count in the superblock rather than as a fixed region of the disk, but be aware that raising it on a filesystem that is already nearly full immediately shrinks the space available to ordinary users and can push them into out-of-space errors. If you know in advance that you want more than the default, set it when you format the filesystem by using the -m option to the mkfs.ext4 command.

Lastly, depending on the filesystem, the inspection tool, or your distribution, you may see tools report filesystem use greater than 100%. A report of 102% used is telling you that 100% of the user-accessible disk space on the filesystem is consumed, and that some of the reserved blocks have been consumed as well.

So what about XFS? Earlier in this section, I mentioned that Red Hat Enterprise Linux 7 and 8 use XFS as their default filesystem. XFS also utilizes reserved blocks, but it reserves less than the extended filesystem formats do, and it does not allow users any access to that space; it is held back for the filesystem's own internal operations. Because the reserved space serves a different purpose and is hidden from the rest of the system, it is harder to report on with the XFS utilities. It can still be done by taking the block count and block size from xfs_info, converting them to KiB, and comparing the result against the output of df.

Wrap up

So where is your "lost" disk space? It is hidden in the different units of measure used to report disk capacity. How this space is reported and used varies by filesystem and by tools, as well.



Scott McBrien

Scott McBrien has worked as a systems administrator, consultant, instructor, content author, and general geek off and on for Red Hat since July 2001.
