Message-ID: <4ECAC963.8020906@redhat.com>
Date: Mon, 21 Nov 2011 16:57:55 -0500
From: Rik van Riel <riel@...hat.com>
To: Andrea Arcangeli <aarcange@...hat.com>
CC: linux-mm@...ck.org, Mel Gorman <mgorman@...e.de>,
Minchan Kim <minchan.kim@...il.com>, Jan Kara <jack@...e.cz>,
Andy Isaacson <adi@...apodia.org>,
Johannes Weiner <jweiner@...hat.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 8/8] Revert "vmscan: limit direct reclaim for higher order
allocations"
On 11/19/2011 02:54 PM, Andrea Arcangeli wrote:
> This reverts commit e0887c19b2daa140f20ca8104bdc5740f39dbb86.
>
> If reclaim runs for a high order allocation, it means compaction
> failed. Something went wrong with compaction, so we cannot stop
> reclaim either. We cannot assume compaction failed and was deferred
> only because the watermarks checked in compaction_suitable were too
> low; it may have failed for other reasons.
>
> Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
NACK
Reverting this can lead to a situation where every attempted
THP allocation frees another 4MB of memory.
This has led to systems where 1/4 to 1/3 of all memory was free,
with the working set pushed out to swap, while swapout activity
continued.
The thrashing this causes can be a factor of 10 or worse
performance penalty. Failing a THP allocation is merely a
10-20% performance penalty, which is far less of an issue.
We can move the threshold at which we skip pageout a little
higher (to give compaction more space to work with), and even
call shrink_slab when we skip other reclaim (because slab cannot
be moved by compaction). But whatever we do, we need to ensure
that we never reclaim an unreasonable amount of memory and end
up pushing the working set into swap.
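
For illustration, here is a minimal, self-contained sketch of the
heuristic described above. This is not the actual mm/vmscan.c code:
the zone_state struct and the helpers compaction_gap(),
shrink_zone_pages() and shrink_slab_caches() are hypothetical
stand-ins for the real kernel structures and functions.

/*
 * Illustrative sketch only -- not actual mm/vmscan.c code.
 * Simplified model of the heuristic described above: skip further
 * pageout for a zone once free memory comfortably exceeds what
 * compaction needs, but keep shrinking slab, because slab pages
 * cannot be migrated by compaction.
 */
#include <stdbool.h>

struct zone_state {			/* hypothetical stand-in for struct zone */
	unsigned long free_pages;
	unsigned long low_wmark;
};

/* hypothetical stand-ins for kernel helpers */
extern unsigned long compaction_gap(int order);	/* extra pages compaction wants */
extern void shrink_zone_pages(struct zone_state *z);
extern void shrink_slab_caches(void);

/*
 * Return true when enough memory is already free for compaction to
 * work with, using a threshold somewhat above the bare minimum so
 * compaction has headroom (the "a little higher" suggestion above).
 */
static bool enough_free_for_compaction(struct zone_state *z, int order)
{
	unsigned long threshold = z->low_wmark + 2 * compaction_gap(order);

	return z->free_pages >= threshold;
}

static void reclaim_for_high_order(struct zone_state *z, int order)
{
	if (enough_free_for_compaction(z, order)) {
		/*
		 * Skip LRU pageout: more reclaim would only push the
		 * working set into swap without helping compaction.
		 * Slab cannot be compacted, so still trim it.
		 */
		shrink_slab_caches();
		return;
	}

	shrink_zone_pages(z);
	shrink_slab_caches();
}

The "2 * compaction_gap(order)" above the low watermark is an
arbitrary placeholder for "a little higher"; a real patch would tune
that against the watermark check actually done in compaction_suitable().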