Defrag.
AragonX
aragonx at dcsnow.com
Thu Oct 6 19:31:43 UTC 2005
Dave Mitchell wrote:
> Fragmentation, in the Windows sense, is where one file is stored as lots
> of small blocks spread all over the disk. Accessing the whole file becomes
> very slow.
>
> Fragmentation, in the UNIX fsck sense, is the percentage of big blocks (eg
> 8k) that have been split into small (eg 1K) subblocks to allow for the
> small chunk of data at the end of a file to be stored efficiently. For example,
> a file that is 18K in size will use two 8K blocks plus a 2K chunk of an
> 8K block that has been split. Fragmentation in this sense is harmless,
> and just indicates that the OS isn't wasting disk space. Or to put it
> another way, if you filled your disk with 1k files, fragmentation would
> be reported as 100%.
>
> (Well, that's the case with traditional UNIX filesystems like UFS;
> I should imagine xfs and reiserfs do things differently).
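To make sure I follow the arithmetic, here is a quick back-of-the-envelope
Python sketch (8K blocks and 1K fragments are just the example numbers
from above, nothing I've measured on a real filesystem):

BLOCK_KB = 8
FRAG_KB = 1

def layout(size_kb):
    # full 8K blocks, plus 1K fragments for the tail of the file
    full_blocks, tail = divmod(size_kb, BLOCK_KB)
    frags = -(-tail // FRAG_KB)   # ceiling division
    return full_blocks, frags

print(layout(18))  # (2, 2): two full 8K blocks plus a 2K tail in a split block
print(layout(1))   # (0, 1): a 1K file lives entirely in a fragment, so a disk
                   # full of 1K files reports 100% in the fsck sense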
So say you have the same 18K file, stored as you said in two 8K blocks
plus one 2K chunk. Now you add more files to the system. Then the next
day you append another 18K to the first file, and you continue like that
for a month. You would end up with something that looks like this, right:
11111111 11111111 11222--- 22222222 22222222 3333333- ->
-------- -------- -------- -------- -------- --------
1 - 18k
2 - 19k
3 - 7k
I'm just guessing this is how the data would be written to disk; I don't
really know. So on day 2, when I add 18K to file 1, the data would be
arranged on the drive platter like so?
11111111 11111111 11222--- 22222222 22222222 3333333- ->
11111111 11111111 11------
So I'm still getting fragmentation, just not nearly as bad as it is on a
FAT or NTFS machine, correct?
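To sanity-check my drawings, here is a throwaway Python sketch of the
naive first-fit allocation I'm picturing (entirely my own toy model, not
how ext2/ext3 actually places data):

SLOTS = 80            # ten 8K blocks; each character below is one 1K slot
disk = ['-'] * SLOTS

def write(label, kb):
    # fill the first free 1K slots, left to right
    left = kb
    for i in range(SLOTS):
        if left and disk[i] == '-':
            disk[i] = label
            left -= 1

def show():
    print(' '.join(''.join(disk[i:i+8]) for i in range(0, SLOTS, 8)))

write('1', 18); write('2', 19); write('3', 7)
show()           # day 1: file 1 is one contiguous 18K run
write('1', 18)   # day 2: append another 18K to file 1
show()           # the new 18K lands after file 3, so file 1 now has two
                 # separate extents -- some fragmentation, as I guessed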
I'm just trying to understand. I've heard the argument that "Linux does
not have any fragmentation to worry about". I just don't see how that is
possible on a desktop machine where lots of little files are modified
frequently.