
Re: Swap space



I wrote:
> This is true regardless of the media. It’s just that spinning disks are
> lousy at getting to the point where they can start transferring data.
> Your figures don’t measure that.

Aaron Konstam wrote:
> Yes it does. It is resented by the buffered read.

“Represented”, I presume, and it’s only measured once for many
megabytes. The buffered reads are sequential access¹ – where the disk
reads megabyte after megabyte continuously.

Random access is totally different, and is not measured by hdparm. With
random access, the OS reads (say) 4K from one part of the disk, and then
4K from another part of the disk. Moving from one part of a hard disk to
another takes time. If you read 250 separate 4K pages, you’ve still only
read 1 MB, but you’ve had that access time 250 times, and by now you’re
talking on the order of a second, even on fast disks.
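A back-of-envelope sketch of that arithmetic, using illustrative
figures I'm assuming here (~100 MB/s sustained throughput and ~4 ms
average seek plus rotational latency per request), not measurements
from any particular disk:

```python
# Compare reading 1 MB sequentially vs as 250 scattered 4K pages
# on a spinning disk. The constants below are assumptions for
# illustration, not measured values.

SEQUENTIAL_MBPS = 100.0   # assumed sustained transfer rate
ACCESS_TIME_S = 0.004     # assumed average seek + rotation per request
PAGE_SIZE_MB = 4 / 1024   # 4K page expressed in MB
PAGES = 250               # 250 * 4K = 1 MB total

# One contiguous 1 MB read: pay the access time once, then stream.
sequential_s = ACCESS_TIME_S + 1 / SEQUENTIAL_MBPS

# 250 scattered 4K reads: pay the access time every single time.
random_s = PAGES * (ACCESS_TIME_S + PAGE_SIZE_MB / SEQUENTIAL_MBPS)

print(f"sequential 1 MB:        {sequential_s * 1000:.0f} ms")
print(f"random 1 MB (4K pages): {random_s * 1000:.0f} ms")
```

With those assumed figures the sequential read finishes in tens of
milliseconds, while the scattered reads take on the order of a second,
which is the point above: the transfer is trivial, the 250 seeks are not.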

> Several things need
> too be taken into account:
> 1. swap partitions are optimized for fast access.
Can’t go faster than the disk.

> 2. In most cases the swap data is read into a buffer from the disk
> before it is needed.. One gets segments of information at a time from
> the swap area. This optimization is done by the operating system

Got any references? That seems *highly* improbable, unless the
pressure on memory has gone way down since the system started swapping.

If we’re in a swap situation, then the system needs more memory than it
actually has. Every page being read in means another page being swapped
out. The pages in swap are there for only two reasons:
1. the kernel made a decision that these pages were less likely to be
used in the near future than pages that it kept in memory;
2. since the kernel made that decision and swapped them out, those pages
have in fact not been needed.

In other words, the kernel got it right. Those pages it swapped out
*should* be on the swap. Reading in random pages from swap will make the
system replace pages that are more likely to be used with ones that are
less likely to be used. Doesn’t make sense, does it?

And they are likely to be random pages, since the system doesn’t evict
pages based on which processes are using them, but on how recently and
how frequently they are used.

The only thing that makes sense is reading those pages which need to be
in memory now (because a process has tried to access them) or soon, and
it’s difficult to tell which those will be.

In any case, as you say, most modern flash is not designed for heavy
use. This may change – the combination of reliable storage with fast
random access is valuable enough that there will be large rewards for a
company that manages to provide it at a suitable price.

James.

¹ man hdparm:
    This measurement is an indication of how fast the drive  can
    sustain sequential data reads under Linux.
-- 
E-mail:     james@ | "Well of course it doesn't work, error #24 indicates
aprilcottage.co.uk | you've attached a wombat to your vacuum cleaner, which
                   | cannot possibly work. Please detach the wombat(s) and
                   | try again."                          -- Warren Block

