Message-ID: <CAOH1cH=smX1tWLB6PwaRfcrOu5YErF3TGUDoRnLZGmxvGH9rAA@mail.gmail.com>
Date: Wed, 19 Oct 2011 16:15:20 -0700
From: Mark Moseley <moseleymark@...il.com>
To: David Howells <dhowells@...hat.com>
Cc: Linux filesystem caching discussion list
<linux-cachefs@...hat.com>, linux-kernel@...r.kernel.org
Subject: Re: [Linux-cachefs] 3.0.3 64-bit Crash running fscache/cachefilesd
On Wed, Oct 19, 2011 at 5:25 AM, David Howells <dhowells@...hat.com> wrote:
> Mark Moseley <moseleymark@...il.com> wrote:
>
>> Presumably it gets to bcull and stops storing but nothing's getting pruned.
>
> It is possible that all the objects are pinned by inodes just sitting there in
> the client's inode cache doing nothing. Thus fscache thinks they're in use.
I wasn't able to run 3.1.0-rc8 on that box over the weekend, but I
fired it up this morning and left the existing cache in place. For a
few hours it was culling (Used% on that partition got as low as 60% at
one point, i.e. it had reached 'brun'), but now it's stuck at 'bcull'
again.
Is there anything I can do to verify whether the objects are indeed
pinned? This is a pretty busy box. The nfs inode cache is quite large
(from slabtop):
  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
900471 900377  99%    1.02K 300157        3   1200628K nfs_inode_cache
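(One thing I could try to test that theory: force the client to drop
its reclaimable dentries and inodes and watch whether culling resumes,
i.e. something like

    sync
    echo 2 > /proc/sys/vm/drop_caches

If nfs_inode_cache collapses and cachefilesd starts culling again,
that would suggest the objects really were pinned by cached inodes
rather than leaked.)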
One slightly interesting thing, unrelated to fscache: this box is part
of a pool of servers handling the same web workloads. Another box in
the pool is running 3.0.4 and has been up for about 23 days (vs. 6
hours for this one), yet its nfs_inode_cache is roughly a quarter the
size of the 3.1.0-rc8 box's, and about a third the object count.
Likewise, dentry on a 3.0.4 box with much longer uptime is about 1/9
the size of the 3.1.0-rc8 box's (200k objects vs. 1.8 million, 45 MB
vs. 400 MB). I don't know whether that's the result of VM improvements
or a symptom of something leaking :) I don't see any other huge
disparities, so I'm hoping it's the former.
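(For an apples-to-apples comparison across the boxes, I can grab the
raw counts with something like

    grep -E '^(nfs_inode_cache|dentry) ' /proc/slabinfo

on each one, rather than eyeballing slabtop.)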
Out of curiosity, did the dump of /proc/fs/fscache/stats show anything
interesting?
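(If a fresh dump would help, I'm just grabbing

    cat /proc/fs/fscache/stats

and can send the histogram too, assuming CONFIG_FSCACHE_HISTOGRAM is
enabled on this kernel.)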
> Any more oopses from fscache or cachefiles?
The other day when I ran it, I didn't see any oopses after 16 or so
hours. I'll see if I can run it overnight and report back.