Message-ID: <4F6774E8.2050202@redhat.com>
Date: Mon, 19 Mar 2012 14:03:20 -0400
From: Rik van Riel <riel@...hat.com>
To: Konstantin Khlebnikov <khlebnikov@...nvz.org>
CC: "linux-mm@...ck.org" <linux-mm@...ck.org>,
Andrew Morton <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Hugh Dickins <hughd@...gle.com>,
Minchan Kim <minchan@...nel.org>, Mel Gorman <mgorman@...e.de>,
Johannes Weiner <jweiner@...hat.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH] mm: forbid lumpy-reclaim in shrink_active_list()
On 03/19/2012 01:58 PM, Konstantin Khlebnikov wrote:
> Rik van Riel wrote:
>> On 03/19/2012 05:18 AM, Konstantin Khlebnikov wrote:
>>> This patch resets the reclaim mode in shrink_active_list() to
>>> RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC.
>>> (The sync/async flag is used only in shrink_page_list() and does not
>>> affect shrink_active_list().)
>>>
>>> Currently shrink_active_list() sometimes works in lumpy-reclaim
>>> mode, if RECLAIM_MODE_LUMPYRECLAIM is left over from an earlier
>>> shrink_inactive_list(). Meanwhile, in age_active_anon()
>>> sc->reclaim_mode is completely zero. So the current behavior is too
>>> complex and confusing, and all of this looks like a bug.
>>>
>>> In general, shrink_active_list() populates the inactive list for
>>> the next shrink_inactive_list(). A lumpy shrink_inactive_list()
>>> isolates pages around the chosen one from both the active and
>>> inactive lists, so there is no reason for lumpy isolation in
>>> shrink_active_list().
>>>
>>> Proposed-by: Hugh Dickins <hughd@...gle.com>
>>> Link: https://lkml.org/lkml/2012/3/15/583
>>> Signed-off-by: Konstantin Khlebnikov <khlebnikov@...nvz.org>
>>
>> Confirmed, this is already done by commit
>> 26f5f2f1aea7687565f55c20d69f0f91aa644fb8 in the
>> linux-next tree.
>>
>
> No, your patch fixes this problem only if CONFIG_COMPACTION=y.
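
For context on why a fix on that path can end up compaction-only: in
this era of mm/vmscan.c, set_reclaim_mode() picks
RECLAIM_MODE_COMPACTION only when compaction is built in, and falls
back to lumpy reclaim otherwise. A rough sketch, reconstructed from
memory of the 3.3-era code rather than quoted from the tree under
discussion:

static void set_reclaim_mode(int priority, struct scan_control *sc,
			     bool sync)
{
	reclaim_mode_t syncmode = sync ? RECLAIM_MODE_SYNC : RECLAIM_MODE_ASYNC;

	/* Start by assuming lumpy reclaim or reclaim/compaction. */
	if (COMPACTION_BUILD)
		sc->reclaim_mode = RECLAIM_MODE_COMPACTION;
	else
		sc->reclaim_mode = RECLAIM_MODE_LUMPYRECLAIM;

	/*
	 * Keep the lumpy/compaction mode only for costly allocations
	 * or when under real memory pressure; otherwise reclaim single
	 * pages asynchronously.
	 */
	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
		sc->reclaim_mode |= syncmode;
	else if (sc->order && priority < DEF_PRIORITY - 2)
		sc->reclaim_mode |= syncmode;
	else
		sc->reclaim_mode = RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC;
}

With CONFIG_COMPACTION=n the COMPACTION_BUILD branch is never taken,
so RECLAIM_MODE_LUMPYRECLAIM can still be set and carried over.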
True.
It was done that way because Mel explained to me that deactivating
a whole chunk of active pages at once is a desired feature that makes
it more likely that a whole contiguous chunk of pages will eventually
reach the end of the inactive list.
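
As a rough sketch of what forbidding lumpy isolation there amounts to
(helper names and the function signature are assumed from the 3.3-era
vmscan.c; the exact hunk is in the linked posting):

static void reset_reclaim_mode(struct scan_control *sc)
{
	/* Single-page, asynchronous reclaim: no lumpy carry-over. */
	sc->reclaim_mode = RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC;
}

static void shrink_active_list(unsigned long nr_to_scan,
			       struct mem_cgroup_zone *mz,
			       struct scan_control *sc,
			       int priority, int file)
{
	/* ... */
	lru_add_drain();

	/*
	 * Clear any RECLAIM_MODE_LUMPYRECLAIM left over from an
	 * earlier shrink_inactive_list(), so isolation here is always
	 * per-page; the sync/async bit is ignored on this path anyway.
	 */
	reset_reclaim_mode(sc);
	/* ... isolate pages, move them to the inactive list ... */
}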
--
All rights reversed