Message-Id: <1372250364-20640-3-git-send-email-mgorman@suse.de>
Date: Wed, 26 Jun 2013 13:39:24 +0100
From: Mel Gorman <mgorman@...e.de>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jiri Slaby <jslaby@...e.cz>,
Valdis Kletnieks <Valdis.Kletnieks@...edu>,
Rik van Riel <riel@...hat.com>,
Zlatko Calusic <zcalusic@...sync.net>,
Johannes Weiner <hannes@...xchg.org>,
dormando <dormando@...ia.net>, Michal Hocko <mhocko@...e.cz>,
Jan Kara <jack@...e.cz>, Dave Chinner <david@...morbit.com>,
Kamezawa Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Linux-FSDevel <linux-fsdevel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>, Mel Gorman <mgorman@...e.de>
Subject: [PATCH 2/2] mm: vmscan: Do not scale writeback pages when deciding whether to set ZONE_WRITEBACK
After the patch "mm: vmscan: Flatten kswapd priority loop" was merged, the
scanning priority of kswapd changed. The priority now rises until kswapd is
scanning enough pages to meet the high watermark. shrink_inactive_list sets
ZONE_WRITEBACK if a number of pages are encountered under writeback, but this
threshold is scaled by the priority. As kswapd now frequently scans at a
raised priority, it is relatively easy to set ZONE_WRITEBACK. This patch
removes the scaling and treats writeback pages the same way it treats
unqueued dirty pages and congested pages. The user-visible effect should be
that kswapd writes back fewer pages from reclaim context.
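
As an illustration, the userspace sketch below (not kernel code) compares the
old scaled threshold with the new condition for SWAP_CLUSTER_MAX isolated
pages of which 25% are under writeback. DEF_PRIORITY and SWAP_CLUSTER_MAX
mirror the kernel's values; the helper names are made up for the example.
With the old check, ZONE_WRITEBACK gets set once priority drops to
DEF_PRIORITY-2; with the new check it is only set when every isolated page is
under writeback.

/*
 * Standalone sketch, not kernel code: compare the old scaled threshold
 * with the new "all isolated pages under writeback" test.
 */
#include <stdbool.h>
#include <stdio.h>

#define DEF_PRIORITY		12
#define SWAP_CLUSTER_MAX	32UL

/* Old behaviour: the threshold shrinks as priority drops below DEF_PRIORITY. */
static bool old_sets_zone_writeback(unsigned long nr_writeback,
				    unsigned long nr_taken, int priority)
{
	return nr_writeback &&
	       nr_writeback >= (nr_taken >> (DEF_PRIORITY - priority));
}

/* New behaviour: flag only when every isolated page is under writeback. */
static bool new_sets_zone_writeback(unsigned long nr_writeback,
				    unsigned long nr_taken)
{
	return nr_writeback && nr_writeback == nr_taken;
}

int main(void)
{
	unsigned long nr_taken = SWAP_CLUSTER_MAX;
	unsigned long nr_writeback = SWAP_CLUSTER_MAX / 4;	/* 25% */

	for (int priority = DEF_PRIORITY; priority >= DEF_PRIORITY - 6; priority--)
		printf("priority %2d: old=%d new=%d\n", priority,
		       old_sets_zone_writeback(nr_writeback, nr_taken, priority),
		       new_sets_zone_writeback(nr_writeback, nr_taken));

	return 0;
}
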
Signed-off-by: Mel Gorman <mgorman@...e.de>
---
mm/vmscan.c | 16 +---------------
1 file changed, 1 insertion(+), 15 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 65f2fbea..f677780 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1477,25 +1477,11 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
* as there is no guarantee the dirtying process is throttled in the
* same way balance_dirty_pages() manages.
*
- * This scales the number of dirty pages that must be under writeback
- * before a zone gets flagged ZONE_WRITEBACK. It is a simple backoff
- * function that has the most effect in the range DEF_PRIORITY to
- * DEF_PRIORITY-2 which is the priority reclaim is considered to be
- * in trouble and reclaim is considered to be in trouble.
- *
- * DEF_PRIORITY 100% isolated pages must be PageWriteback to throttle
- * DEF_PRIORITY-1 50% must be PageWriteback
- * DEF_PRIORITY-2 25% must be PageWriteback, kswapd in trouble
- * ...
- * DEF_PRIORITY-6 For SWAP_CLUSTER_MAX isolated pages, throttle if any
- * isolated page is PageWriteback
- *
* Once a zone is flagged ZONE_WRITEBACK, kswapd will count the number
* of pages under pages flagged for immediate reclaim and stall if any
* are encountered in the nr_immediate check below.
*/
- if (nr_writeback && nr_writeback >=
- (nr_taken >> (DEF_PRIORITY - sc->priority)))
+ if (nr_writeback && nr_writeback == nr_taken)
zone_set_flag(zone, ZONE_WRITEBACK);
/*
--
1.8.1.4