Message-ID: <20110427235658.GJ12436@dastard>
Date: Thu, 28 Apr 2011 09:56:58 +1000
From: Dave Chinner <david@...morbit.com>
To: Minchan Kim <minchan.kim@...il.com>
Cc: Christian Kujau <lists@...dbynature.de>,
LKML <linux-kernel@...r.kernel.org>, xfs@....sgi.com
Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks
On Thu, Apr 28, 2011 at 08:16:29AM +0900, Minchan Kim wrote:
> On Wed, Apr 27, 2011 at 7:28 PM, Dave Chinner <david@...morbit.com> wrote:
> > On Wed, Apr 27, 2011 at 12:46:51AM -0700, Christian Kujau wrote:
> >> On Wed, 27 Apr 2011 at 12:26, Dave Chinner wrote:
> >> > What this shows is that VFS inode cache memory usage increases until
> >> > about the 550 sample mark before the VM starts to reclaim it with
> >> > extreme prejudice. At that point, I'd expect the XFS inode cache to
> >> > then shrink, and it doesn't. I've got no idea why either the
> >>
> >> Do you remember any XFS changes past 2.6.38 that could be related to
> >> something like this?
> >
> > There are plenty of changes that could be the cause - we've changed
> > the inode reclaim to run in the background out of a workqueue as
> > well as via the shrinker, so it could even be workqueue starvation
> > causing the problem...
>
> RCU free starvation is another possibility?
> https://lkml.org/lkml/2011/4/25/124
You know, I've been watching that thread with interest, but it didn't
seem to be related. However, now that I go look at the config file
provided, I see:
CONFIG_TINY_RCU=y
# CONFIG_SMP is not set
CONFIG_PREEMPT_NONE=y
which means it is probably the same RCU free starvation problem
reported in that thread.
Cheers,
Dave.
--
Dave Chinner
david@...morbit.com