Message-ID: <20170118164647.GB32495@cmpxchg.org>
Date: Wed, 18 Jan 2017 11:46:47 -0500
From: Johannes Weiner <hannes@...xchg.org>
To: Michal Hocko <mhocko@...nel.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>, Minchan Kim <minchan@...nel.org>,
Hillf Danton <hillf.zj@...baba-inc.com>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>,
Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH 2/3] mm, vmscan: consider eligible zones in get_scan_count
On Tue, Jan 17, 2017 at 11:37:01AM +0100, Michal Hocko wrote:
> From: Michal Hocko <mhocko@...e.com>
>
> get_scan_count considers the whole node LRU size when
> - doing SCAN_FILE due to many inactive page cache pages
> - calculating the number of pages to scan
>
> In both cases this can lead to unexpected behavior, especially on
> 32-bit systems, where lowmem memory pressure is common.
>
> A large highmem zone can easily distort the SCAN_FILE heuristic,
> because there might be only a few file pages from the eligible zones
> on the node LRU and we would still enforce file LRU scanning, which
> can lead to thrashing while we could still scan anonymous pages.
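
For reference, the heuristic in question is this check in
get_scan_count() (quoting from memory, so the exact argument lists
may differ from what is in mmotm right now):

	/*
	 * If there is enough inactive page cache, i.e. if the size of
	 * the inactive list is greater than that of the active list
	 * *and* there is something to scan at this priority, don't
	 * reclaim from the anonymous working set.
	 */
	if (!inactive_list_is_low(lruvec, true, sc) &&
	    lruvec_lru_size(lruvec, LRU_INACTIVE_FILE,
			    MAX_NR_ZONES) >> sc->priority) {
		scan_balance = SCAN_FILE;
		goto out;
	}

With MAX_NR_ZONES, the inactive file count includes highmem pages, so
the check can pick SCAN_FILE even when the eligible zones hold almost
no file pages at all.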
>
> The latter use of lruvec_lru_size can be problematic as well,
> especially when there are not many pages from the eligible zones. We
> would have to skip over many pages to find anything to reclaim, but
> shrink_node_memcg would only reduce the remaining number of pages to
> scan by SWAP_CLUSTER_MAX at most. Therefore we can end up going over
> a large LRU many times without actually having a chance to reclaim
> much, if anything at all. The closer the lowmem zone is to being out
> of memory, the worse the problem becomes.
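
The loop shape in shrink_node_memcg() makes this easy to see; heavily
simplified, with the proportional-scan adjustments left out:

	/* nr[] is filled in from the node-wide LRU sizes */
	get_scan_count(lruvec, memcg, sc, nr, lru_pages);

	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
	       nr[LRU_INACTIVE_FILE]) {
		for_each_evictable_lru(lru) {
			if (nr[lru]) {
				nr_to_scan = min(nr[lru], SWAP_CLUSTER_MAX);
				nr[lru] -= nr_to_scan;

				nr_reclaimed += shrink_list(lru, nr_to_scan,
							    lruvec, sc);
			}
		}
	}

If nr[] is derived from a mostly-highmem LRU, each shrink_list() call
has to wade through ineligible pages, yet nr[] only shrinks by
SWAP_CLUSTER_MAX per step, so we keep rewalking the same huge list.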
>
> Fix this by filtering out all the ineligible zones when calculating
> the LRU size for both paths and consider only zones up to
> sc->reclaim_idx.
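
That is, both call sites now pass sc->reclaim_idx instead of
MAX_NR_ZONES, and lruvec_lru_size() (as reworked in patch 1 of this
series) drops the zones above that index. Conceptually it boils down
to something like the sketch below - not the literal implementation,
which also has to handle the memcg per-zone counters:

	unsigned long lruvec_lru_size(struct lruvec *lruvec,
				      enum lru_list lru, int zone_idx)
	{
		struct pglist_data *pgdat = lruvec_pgdat(lruvec);
		unsigned long size = 0;
		int zid;

		/* Count only zones at or below the eligible index */
		for (zid = 0; zid <= zone_idx; zid++) {
			struct zone *zone = &pgdat->node_zones[zid];

			if (!managed_zone(zone))
				continue;

			size += zone_page_state(zone,
						NR_ZONE_LRU_BASE + lru);
		}

		return size;
	}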
>
> Acked-by: Minchan Kim <minchan@...nel.org>
> Acked-by: Hillf Danton <hillf.zj@...baba-inc.com>
> Signed-off-by: Michal Hocko <mhocko@...e.com>
Acked-by: Johannes Weiner <hannes@...xchg.org>