
RE: Filesystem fragmentation and scatter-gather DMA



Ric Wheeler wrote:

> There are certainly advantages to doing the read ahead (and coalescing)
> at the different layers. For example, a file system can do predictive
> read ahead across the non-contiguous chunks of a single file while the
> IO layer can coalesce multiple write or read commands on the same host
> and a multi-ported drive can do the same for multiple hosts.

If the file system does predictive read-ahead, and the data is not used, the
penalty will be *much* larger if the predictive read-ahead required an extra
seek than if it didn't. This is one of the biggest ways that fragmentation
hurts performance. The other is if the disk does read-ahead and the next
chunk of data in the file was needed, but wasn't read by the disk because of
fragmentation.

> > We might disagree on how bad the performance hit is, but I'm really
> > trying to prevent non-technical people from panicking when they see
> > a fragmented filesystem (or file).

> I agree - most casual users will never see anything close to a
> performance issue until they have completely filled the file system. In
> that case, defragmentation will not be the real help.

I agree with this as well. The only significant differences I've seen with
disk defragmenters were in two cases:

1) The filesystem was close to full, and the defragmenter bought a bit of
extra time before something had to be done.

2) The defragmenter was smart enough to move frequently-accessed files to the
fastest parts of the disk, and the disk had a large (20%) difference between
its fastest and slowest tracks.

Otherwise, it's a minuscule difference.

I'd love to see smarter disks with much larger caches so that the OS could
say to the disk "here's the data I need now, and here's what I might need
later".

DS
