Message-ID: <20110530165546.GC5118@suse.de>
Date: Mon, 30 May 2011 17:55:46 +0100
From: Mel Gorman <mgorman@...e.de>
To: Mel Gorman <mel@....ul.ie>
Cc: Andrea Arcangeli <aarcange@...hat.com>, akpm@...ux-foundation.org,
Ury Stankevich <urykhy@...il.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org, stable@...nel.org
Subject: Re: [PATCH] mm: compaction: Abort compaction if too many pages are
isolated and caller is asynchronous
On Mon, May 30, 2011 at 04:37:49PM +0100, Mel Gorman wrote:
> > Or how do you explain this -1 value out of nr_isolated_file? Clearly
> > when that value goes to -1, compaction.c:too_many_isolated will hang,
> > I think we should fix the -1 value before worrying about the rest...
> >
> > grep nr_isolated_file zoneinfo-khugepaged
> > nr_isolated_file 1
> > nr_isolated_file 4294967295
>
> Can you point me at the thread that this file appears on and what the
> conditions were? If vmstat is going to -1, it is indeed a problem
> because it implies an imbalance in increments and decrements to the
> isolated counters.
Even with drift issues, -1 there should be "impossible". Assuming this
is a zoneinfo file, that figure is based on global_page_state() which
looks like
static inline unsigned long global_page_state(enum zone_stat_item item)
{
	long x = atomic_long_read(&vm_stat[item]);
#ifdef CONFIG_SMP
	if (x < 0)
		x = 0;
#endif
	return x;
}
So even if the isolated counts were going negative for short periods of
time, the returned value should be clamped to 0. As this is an inline
function returning unsigned long, and callers are using unsigned long,
is there any possibility the "if (x < 0)" check is being optimised out?
If you are aware of users reporting this problem (like the users in the
thread "iotop: khugepaged at 99.99% (2.6.38.3)"), do you know if they
had a particular compiler in common?
--
Mel Gorman
SUSE Labs
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/