Message-ID: <20160704103325.GD11498@techsingularity.net>
Date: Mon, 4 Jul 2016 11:33:25 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Hillf Danton <hillf.zj@...baba-inc.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>, linux-mm@...ck.org
Subject: Re: [PATCH 04/31] mm, vmscan: begin reclaiming pages on a per-node basis

On Mon, Jul 04, 2016 at 06:08:27PM +0800, Hillf Danton wrote:
> > @@ -2561,17 +2580,23 @@ static void shrink_zones(struct zonelist *zonelist, struct scan_control *sc)
> > * highmem pages could be pinning lowmem pages storing buffer_heads
> > */
> > orig_mask = sc->gfp_mask;
> > - if (buffer_heads_over_limit)
> > + if (buffer_heads_over_limit) {
> > sc->gfp_mask |= __GFP_HIGHMEM;
> > + sc->reclaim_idx = classzone_idx = gfp_zone(sc->gfp_mask);
> > + }
> >
> Do we need to push/pop ->reclaim_idx the same way ->gfp_mask is handled?
>
I saw no harm in letting one full reclaim attempt cover all zones when
buffer_heads_over_limit is triggered. If that attempt fails, the page
allocator will loop again and reset reclaim_idx anyway.
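
For illustration only, here is a minimal standalone sketch (not kernel
code) of the pattern being discussed: the gfp mask is saved and restored
around the buffer_heads_over_limit special case, while reclaim_idx is
deliberately left raised because the allocator re-derives it on the next
reclaim attempt. The struct, constants and helper names below
(scan_control_sketch, SKETCH_GFP_HIGHMEM, sketch_gfp_zone, etc.) are
simplified stand-ins, not the real vmscan.c definitions.

#include <stdio.h>

#define MOVABLE_ZONE	2	/* stand-in for the caller's classzone_idx */
#define HIGHMEM_ZONE	3	/* stand-in for gfp_zone(mask | __GFP_HIGHMEM) */

#define SKETCH_GFP_HIGHMEM 0x02u	/* stand-in for __GFP_HIGHMEM */

struct scan_control_sketch {
	unsigned int gfp_mask;
	int reclaim_idx;
};

/* crude stand-in for gfp_zone() */
static int sketch_gfp_zone(unsigned int mask)
{
	return (mask & SKETCH_GFP_HIGHMEM) ? HIGHMEM_ZONE : MOVABLE_ZONE;
}

static void shrink_zones_sketch(struct scan_control_sketch *sc,
				int buffer_heads_over_limit)
{
	unsigned int orig_mask = sc->gfp_mask;	/* "push" of the mask */

	if (buffer_heads_over_limit) {
		sc->gfp_mask |= SKETCH_GFP_HIGHMEM;
		sc->reclaim_idx = sketch_gfp_zone(sc->gfp_mask);
	}

	/* ... walk the zonelist and reclaim up to sc->reclaim_idx ... */

	sc->gfp_mask = orig_mask;		/* "pop" of the mask only */
	/*
	 * sc->reclaim_idx is intentionally not restored: if this
	 * attempt fails, the allocator retries and sets it afresh.
	 */
}

int main(void)
{
	struct scan_control_sketch sc = {
		.gfp_mask = 0x01u,
		.reclaim_idx = MOVABLE_ZONE,
	};

	shrink_zones_sketch(&sc, 1);
	printf("gfp_mask restored to %#x, reclaim_idx now %d\n",
	       sc.gfp_mask, sc.reclaim_idx);
	return 0;
}
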
--
Mel Gorman
SUSE Labs