Message-ID: <20180308045810.8041-143-alexander.levin@microsoft.com>
Date: Thu, 8 Mar 2018 04:59:54 +0000
From: Sasha Levin <Alexander.Levin@...rosoft.com>
To: "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"stable@...r.kernel.org" <stable@...r.kernel.org>
CC: Johannes Weiner <hannes@...xchg.org>, Jia He <hejianet@...il.com>,
Mel Gorman <mgorman@...e.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Sasha Levin <Alexander.Levin@...rosoft.com>
Subject: [PATCH AUTOSEL for 4.9 143/190] mm: fix check for reclaimable pages
in PF_MEMALLOC reclaim throttling
From: Johannes Weiner <hannes@...xchg.org>
[ Upstream commit d450abd81b081d45adb12f303a07dd44b15eb1bc ]
PF_MEMALLOC direct reclaimers get throttled on a node when the sum of
all free pages in each zone falls below half the min watermark. During
the summation, we want to exclude zones that don't have reclaimable
pages. Checking the same pgdat over and over again doesn't make sense.
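
For reference, a simplified sketch of the relevant part of
allow_direct_reclaim() with this change applied (not the full function;
the surrounding bookkeeping and the final watermark comparison are
paraphrased from the 4.9 code and should be treated as an approximation):

	/* Sum min watermarks and free pages over the usable lowmem zones. */
	for (i = 0; i <= ZONE_NORMAL; i++) {
		zone = &pgdat->node_zones[i];

		/* Skip zones with no pages managed by the buddy allocator. */
		if (!managed_zone(zone))
			continue;

		/* Skip zones with nothing reclaimable (per zone, not per pgdat). */
		if (!zone_reclaimable_pages(zone))
			continue;

		pfmemalloc_reserve += min_wmark_pages(zone);
		free_pages += zone_page_state(zone, NR_FREE_PAGES);
	}

	/* Throttle when free pages drop below half the summed min watermarks. */
	wmark_ok = free_pages > pfmemalloc_reserve / 2;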
Fixes: 599d0c954f91 ("mm, vmscan: move LRU lists to node")
Link: http://lkml.kernel.org/r/20170228214007.5621-3-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@...xchg.org>
Acked-by: Hillf Danton <hillf.zj@...baba-inc.com>
Acked-by: Michal Hocko <mhocko@...e.com>
Cc: Jia He <hejianet@...il.com>
Cc: Mel Gorman <mgorman@...e.de>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@...rosoft.com>
---
mm/vmscan.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4365de74bdb0..cdd5c3b5c357 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2841,8 +2841,10 @@ static bool allow_direct_reclaim(pg_data_t *pgdat)
 
 	for (i = 0; i <= ZONE_NORMAL; i++) {
 		zone = &pgdat->node_zones[i];
-		if (!managed_zone(zone) ||
-		    pgdat_reclaimable_pages(pgdat) == 0)
+		if (!managed_zone(zone))
+			continue;
+
+		if (!zone_reclaimable_pages(zone))
 			continue;
 
 		pfmemalloc_reserve += min_wmark_pages(zone);
--
2.14.1