Message-ID: <alpine.LSU.2.00.1203191239570.3498@eggly.anvils>
Date: Mon, 19 Mar 2012 13:05:55 -0700 (PDT)
From: Hugh Dickins <hughd@...gle.com>
To: Rik van Riel <riel@...hat.com>
cc: Konstantin Khlebnikov <khlebnikov@...nvz.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Minchan Kim <minchan@...nel.org>, Mel Gorman <mgorman@...e.de>,
Johannes Weiner <jweiner@...hat.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH] mm: forbid lumpy-reclaim in shrink_active_list()
On Mon, 19 Mar 2012, Rik van Riel wrote:
> On 03/19/2012 01:58 PM, Konstantin Khlebnikov wrote:
> > Rik van Riel wrote:
> > > On 03/19/2012 05:18 AM, Konstantin Khlebnikov wrote:
> > > > This patch resets the reclaim mode in shrink_active_list() to
> > > > RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC.
> > > > (The sync/async flag is used only in shrink_page_list() and does not
> > > > affect shrink_active_list().)
> > > >
> > > > Currently shrink_active_list() sometimes works in lumpy-reclaim mode,
> > > > if RECLAIM_MODE_LUMPYRECLAIM is left over from an earlier
> > > > shrink_inactive_list().
> > > > Meanwhile, in age_active_anon() sc->reclaim_mode is simply zero.
> > > > So the current behaviour is complex and confusing, and all of this
> > > > looks like a bug.
> > > >
> > > > In general, shrink_active_list() populates the inactive list for the
> > > > next shrink_inactive_list().
> > > > A lumpy shrink_inactive_list() isolates pages around the chosen one
> > > > from both the active and inactive lists, so there is no reason for
> > > > lumpy isolation in shrink_active_list().
> > > >
> > > > Proposed-by: Hugh Dickins <hughd@...gle.com>
> > > > Link: https://lkml.org/lkml/2012/3/15/583
> > > > Signed-off-by: Konstantin Khlebnikov <khlebnikov@...nvz.org>
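
(The proposed change amounts to something like this in mm/vmscan.c -
my sketch from the description above, not the exact diff:)

	/* in shrink_active_list(), before isolating pages: */
	lru_add_drain();

	/*
	 * Do not inherit RECLAIM_MODE_LUMPYRECLAIM from an earlier
	 * shrink_inactive_list(): deactivation should isolate only the
	 * pages chosen from the active list, and the sync/async bit
	 * only matters to shrink_page_list() on the inactive side.
	 */
	sc->reclaim_mode = RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC;

	spin_lock_irq(&zone->lru_lock);
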
> > >
> > > Confirmed, this is already done by commit
> > > 26f5f2f1aea7687565f55c20d69f0f91aa644fb8 in the
> > > linux-next tree.
> > >
> >
> > No, your patch fixes this problem only if CONFIG_COMPACTION=y
>
> True.
>
> It was done that way, because Mel explained to me that deactivating
> a whole chunk of active pages at once is a desired feature that makes
> it more likely that a whole contiguous chunk of pages will eventually
> reach the end of the inactive list.
I'm rather sceptical about this: is there a test which demonstrates
a useful effect of that kind?
Lumpy movement from active won't help a lumpy allocation this time,
because lumpy reclaim from inactive doesn't care which lru the
surrounding pages come from anyway - and I argue that lumpy movement
from active actually reduces the number of choices which lumpy
reclaim will have, if those pages arrive near the bottom of inactive
together.
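
(To spell that out: the lumpy path in isolate_lru_pages() takes the
physical neighbours of its cursor page whatever lru they sit on -
roughly, and from memory, not the exact code:)

	/*
	 * After isolating the chosen page at page_pfn, lumpy mode also
	 * tries to take every other page in the same order-aligned
	 * physical block - note no test for active versus inactive.
	 */
	pfn = page_pfn & ~((1 << order) - 1);
	end_pfn = pfn + (1 << order);
	for (; pfn < end_pfn; pfn++) {
		struct page *cursor_page;

		if (pfn == page_pfn)
			continue;
		if (!pfn_valid_within(pfn))
			break;
		cursor_page = pfn_to_page(pfn);
		if (page_zone_id(cursor_page) != zone_id)
			break;
		if (__isolate_lru_page(cursor_page, mode, file) == 0) {
			list_move(&cursor_page->lru, dst);
			nr_taken += hpage_nr_pages(cursor_page);
		}
	}
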
So if lumpy movement from active (miscategorizing physically adjacent
pages as inactive too) is actually useful (the miscategorization turning
out to have been a good bet, since they're not activated again before
they reach the bottom of the inactive), and a nice buddyable group of
pages is later reclaimed from the inactive list because of it (without
any need for lumpy reclaim that time), then wouldn't we want to be
doing it more?
It should not be done only when inactive_is_low coincides with reclaim
for a high-order allocation: we would want to note that there's a load
which is making high-order requests, and do lumpy movement from active
whenever replenishing inactive while such a load is in force.
If it does more good than harm; but I'm sceptical about that.
Hugh