Message-Id: <1460456783-30996-11-git-send-email-mgorman@techsingularity.net>
Date: Tue, 12 Apr 2016 11:26:05 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Andrew Morton <akpm@...ux-foundation.org>,
Linux-MM <linux-mm@...ck.org>
Cc: Rik van Riel <riel@...riel.com>, Vlastimil Babka <vbabka@...e.cz>,
Johannes Weiner <hannes@...xchg.org>,
LKML <linux-kernel@...r.kernel.org>,
Mel Gorman <mgorman@...hsingularity.net>
Subject: [PATCH 10/28] mm, vmscan: By default have direct reclaim only shrink once per node

Direct reclaim iterates over all zones in the zonelist and shrinks them,
but this conflicts with node-based reclaim. In the default case, only
shrink once per node.
Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
Acked-by: Johannes Weiner <hannes@...xchg.org>
---
mm/vmscan.c | 23 +++++++++++------------
1 file changed, 11 insertions(+), 12 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index ef1cfa835138..f0bb2412fc01 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2545,14 +2545,6 @@ static inline bool compaction_ready(struct zone *zone, int order)
* try to reclaim pages from zones which will satisfy the caller's allocation
* request.
*
- * We reclaim from a zone even if that zone is over high_wmark_pages(zone).
- * Because:
- * a) The caller may be trying to free *extra* pages to satisfy a higher-order
- * allocation or
- * b) The target zone may be at high_wmark_pages(zone) but the lower zones
- * must go *over* high_wmark_pages(zone) to satisfy the `incremental min'
- * zone defense algorithm.
- *
* If a zone is deemed to be full of pinned pages then just give it a light
* scan then give up on it.
*
@@ -2567,6 +2559,7 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
unsigned long nr_soft_scanned;
gfp_t orig_mask;
bool reclaimable = false;
+ pg_data_t *last_pgdat = NULL;
/*
* If the number of buffer_heads in the machine exceeds the maximum
@@ -2579,11 +2572,17 @@ static bool shrink_zones(struct zonelist *zonelist, struct scan_control *sc,
for_each_zone_zonelist_nodemask(zone, z, zonelist,
classzone_idx, sc->nodemask) {
- if (!populated_zone(zone)) {
- sc->reclaim_idx--;
- classzone_idx--;
+ BUG_ON(!populated_zone(zone));
+
+ /*
+ * Shrink each node in the zonelist once. If the zonelist is
+ * ordered by zone (not the default) then a node may be
+ * shrunk multiple times but in that case the user prefers
+ * lower zones being preserved
+ */
+ if (zone->zone_pgdat == last_pgdat)
continue;
- }
+ last_pgdat = zone->zone_pgdat;
/*
* Take care memory controller reclaiming has small influence
--
2.6.4