Message-ID: <20160614144716.GC1868@techsingularity.net>
Date: Tue, 14 Jun 2016 15:47:16 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Hillf Danton <hillf.zj@...baba-inc.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: [PATCH 04/27] mm, vmscan: Begin reclaiming pages on a per-node basis
On Sun, Jun 12, 2016 at 03:33:25PM +0800, Hillf Danton wrote:
> > @@ -3207,15 +3228,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
> > sc.may_writepage = 1;
> >
> > /*
> > - * Now scan the zone in the dma->highmem direction, stopping
> > - * at the last zone which needs scanning.
> > - *
> > - * We do this because the page allocator works in the opposite
> > - * direction. This prevents the page allocator from allocating
> > - * pages behind kswapd's direction of progress, which would
> > - * cause too much scanning of the lower zones.
> > + * Continue scanning in the highmem->dma direction stopping at
> > + * the last zone which needs scanning. This may reclaim lowmem
> > + * pages that are not necessary for zone balancing but it
> > + * preserves LRU ordering. It is assumed that the bulk of
> > + * allocation requests can use arbitrary zones with the
> > + * possible exception of big highmem:lowmem configurations.
> > */
> > - for (i = 0; i <= end_zone; i++) {
> > + for (i = end_zone; i >= end_zone; i--) {
>
> s/i >= end_zone;/i >= 0;/ ?
>
Yes, although it's eliminated by "mm, vmscan: Make kswapd reclaim in
terms of nodes".
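
For reference, a minimal userspace sketch (plain C with a made-up
end_zone value; not the kernel loop itself) of why the quoted
condition runs only a single iteration:

#include <stdio.h>

int main(void)
{
	int end_zone = 3;	/* hypothetical highest zone index */
	int i;

	/*
	 * Buggy: after the first iteration i becomes end_zone - 1,
	 * "i >= end_zone" fails, and the loop exits having visited
	 * only zone 3.
	 */
	for (i = end_zone; i >= end_zone; i--)
		printf("buggy: zone %d\n", i);

	/* Fixed: walks 3, 2, 1, 0, i.e. the highmem->dma order. */
	for (i = end_zone; i >= 0; i--)
		printf("fixed: zone %d\n", i);

	return 0;
}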
--
Mel Gorman
SUSE Labs