[Linux-cachefs] 3.0.3 64-bit Crash running fscache/cachefilesd

David Howells dhowells at redhat.com
Thu Oct 20 09:03:04 UTC 2011


Mark Moseley <moseleymark at gmail.com> wrote:

> Out of curiosity, did the dump of /proc/fs/fscache/stats show anything
> interesting?

Ah...  I missed the attachment.

Looking at the number of pages currently marked (the difference between the
mrk and unc counts in the following lines):

	Pages  : mrk=3438716 unc=3223887
	...
	Pages  : mrk=7660986 unc=7608076
	Pages  : mrk=7668510 unc=7618591

That isn't very high: 214829 at the beginning, dropping to 49919 at the end.
I suspect this means that a lot of NFS inodes now exist that aren't currently
cached (the cache is under no obligation to actually cache anything if it
feels it lacks the resources; declining to cache is preferable to letting the
system grind to a halt).

Was the last item in the list just before a crash?  I presume not from your
comments.

> One slightly interesting thing, unrelated to fscache: This box is a
> part of a pool of servers, serving the same web workloads. Another box
> in this same pool is running 3.0.4, up for about 23 days (vs 6 hrs),
> and the nfs_inode_cache is approximately 1/4 of the 3.1.0-rc8's,
> size-wise, 1/3 #ofobjects-wise; likewise dentry in a 3.0.4 box with a
> much longer uptime is about 1/9 the size (200k objs vs 1.8mil objects,
> 45megs vs 400megs) as the 3.1.0-rc8 box. Dunno if that's the result of
> VM improvements or a symptom of something leaking :)

It also depends on what the load consists of.  For instance, someone running a
lot of find commands would cause the server to skew in favour of inodes over
data, whereas someone reading/writing big files would skew it the other way.

Do I take it the 3.0.4 box is not running fscache, but the 3.1.0-rc8 box is?

David
