Message-ID: <02f101d1c47c$b4bae0f0$1e30a2d0$@alibaba-inc.com>
Date: Sun, 12 Jun 2016 15:33:25 +0800
From: "Hillf Danton" <hillf.zj@...baba-inc.com>
To: "'Mel Gorman'" <mgorman@...hsingularity.net>
Cc: "linux-kernel" <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
> @@ -3207,15 +3228,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
> sc.may_writepage = 1;
>
> /*
> - * Now scan the zone in the dma->highmem direction, stopping
> - * at the last zone which needs scanning.
> - *
> - * We do this because the page allocator works in the opposite
> - * direction. This prevents the page allocator from allocating
> - * pages behind kswapd's direction of progress, which would
> - * cause too much scanning of the lower zones.
> + * Continue scanning in the highmem->dma direction stopping at
> + * the last zone which needs scanning. This may reclaim lowmem
> + * pages that are not necessary for zone balancing but it
> + * preserves LRU ordering. It is assumed that the bulk of
> + * allocation requests can use arbitrary zones with the
> + * possible exception of big highmem:lowmem configurations.
> */
> - for (i = 0; i <= end_zone; i++) {
> + for (i = end_zone; i >= end_zone; i--) {
s/i >= end_zone;/i >= 0;/ ?
With the posted condition the loop body runs only for i == end_zone, so
the zones below it are never scanned; see the sketch below.
> struct zone *zone = pgdat->node_zones + i;
>
> if (!populated_zone(zone))
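
Not kernel code, just a minimal standalone sketch of the loop-bound issue:
with "i >= end_zone" the descending highmem->dma walk visits end_zone only,
while "i >= 0" walks every zone down to zone 0. The zone names here are
hypothetical stand-ins for pgdat->node_zones.

#include <stdio.h>

int main(void)
{
	const char *zones[] = { "DMA", "NORMAL", "HIGHMEM" };
	int end_zone = 2;	/* last zone that needs scanning */
	int i;

	printf("as posted (i >= end_zone):\n");
	for (i = end_zone; i >= end_zone; i--)
		printf("  scan zone %s\n", zones[i]);	/* HIGHMEM only */

	printf("suggested (i >= 0):\n");
	for (i = end_zone; i >= 0; i--)
		printf("  scan zone %s\n", zones[i]);	/* HIGHMEM..DMA */

	return 0;
}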