Date:	Thu, 20 Oct 2011 10:03:04 +0100
From:	David Howells <dhowells@...hat.com>
To:	Mark Moseley <moseleymark@...il.com>
Cc:	dhowells@...hat.com,
	Linux filesystem caching discussion list 
	<linux-cachefs@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [Linux-cachefs] 3.0.3 64-bit Crash running fscache/cachefilesd

Mark Moseley <moseleymark@...il.com> wrote:

> Out of curiosity, did the dump of /proc/fs/fscache/stats show anything
> interesting?

Ah...  I missed the attachment.

Looking at the number of pages currently marked (the difference between mrk and
unc on each of the following lines):

	Pages  : mrk=3438716 unc=3223887
	...
	Pages  : mrk=7660986 unc=7608076
	Pages  : mrk=7668510 unc=7618591

That isn't very high: 214829 at the beginning, dropping to 49919 at the end.
I suspect this means that a lot of NFS inodes now exist that aren't currently
cached (the cache is under no obligation to actually cache anything if it feels
it lacks the resources; that's to prevent the system from grinding to a halt).
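
For reference, the difference can be pulled straight out of the stats file with
something like the sketch below (the exact "Pages" line format may differ
between kernel versions, so treat it as an assumption rather than gospel):

#!/usr/bin/env python
# Rough sketch: print the number of pages currently marked as cached
# (mrk minus unc) from /proc/fs/fscache/stats.  Assumes the
# "Pages  : mrk=... unc=..." line format seen in this kernel.
import re

with open("/proc/fs/fscache/stats") as f:
    for line in f:
        m = re.match(r"Pages\s*:\s*mrk=(\d+)\s+unc=(\d+)", line)
        if m:
            mrk, unc = int(m.group(1)), int(m.group(2))
            print("mrk=%d unc=%d marked=%d" % (mrk, unc, mrk - unc))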

Was the last item in the list from just before a crash?  I presume not, going
by your comments.

> One slightly interesting thing, unrelated to fscache: This box is a
> part of a pool of servers, serving the same web workloads. Another box
> in this same pool is running 3.0.4, up for about 23 days (vs 6 hrs),
> and the nfs_inode_cache is approximately 1/4 of the 3.1.0-rc8's,
> size-wise, 1/3 #ofobjects-wise; likewise dentry in a 3.0.4 box with a
> much longer uptime is about 1/9 the size (200k objs vs 1.8mil objects,
> 45megs vs 400megs) as the 3.1.0-rc8 box. Dunno if that's the result of
> VM improvements or a symptom of something leaking :)

It also depends on what the load consists of.  For instance, someone running a
lot of find commands would cause the server to skew in favour of inodes over
data, whereas someone reading/writing big files would skew it the other way.
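
If you want to compare the two boxes more directly, something along these lines
against /proc/slabinfo would give the object counts and a rough memory figure
for those caches (assuming the standard slabinfo 2.x field order; it's only a
sketch):

#!/usr/bin/env python
# Sketch: approximate per-cache memory use from /proc/slabinfo
# (slabinfo 2.x format: name active_objs num_objs objsize ...).
caches = ("nfs_inode_cache", "dentry")

with open("/proc/slabinfo") as f:
    for line in f:
        fields = line.split()
        if fields and fields[0] in caches:
            active, total, objsize = map(int, fields[1:4])
            print("%-16s %d/%d objs, ~%d MB" %
                  (fields[0], active, total, total * objsize // (1024 * 1024)))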

Do I take it the 3.0.4 box is not running fscache, but the 3.1.0-rc8 box is?

David

