Message-ID: <20160518144052.GH2527@techsingularity.net>
Date: Wed, 18 May 2016 15:40:52 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Michal Hocko <mhocko@...nel.org>
Cc: Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Rik van Riel <riel@...hat.com>,
David Rientjes <rientjes@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [RFC 13/13] mm, compaction: fix and improve watermark handling
On Wed, May 18, 2016 at 04:27:53PM +0200, Michal Hocko wrote:
> > > > - __compaction_suitable() then checks the low watermark plus a (2 << order) gap
> > > > to decide if there's enough free memory to perform compaction. This check
> > >
> > > And this was a real head scratcher when I started looking into
> > > compaction recently. Why do we need to be above the low watermark to
> > > even start compaction? Compaction uses additional memory only for a
> > > short period of time and then releases the already migrated pages.
> > >
> >
> > Simply minimising the risk that compaction would deplete the entire
> > zone. Sure, it hands pages back shortly afterwards. At the time of the
> > initial prototype, page migration was severely broken and the system was
> > constantly crashing. The cautious checks were left in place after page
> > migration was fixed as there wasn't a compelling reason to remove them
> > at the time.
>
> OK, then moving to min_wmark + bias from low_wmark should work, right?
Yes. I recall there was another reason, but it's marginal. I didn't
want compaction isolating free pages to artificially push a process into
direct reclaim, but given that we are likely under memory pressure at
that time anyway, it's unlikely that compaction is the sole reason
processes are entering direct reclaim.
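
For reference, below is a minimal standalone sketch (ordinary userspace
C with made-up numbers, not the kernel code itself) of the difference
between gating compaction on the low watermark versus the proposed min
watermark plus the (2 << order) bias. The real check lives in
__compaction_suitable() in mm/compaction.c; the names and figures here
are illustrative only.

/*
 * Standalone model of the watermark check under discussion.
 * Numbers are made up; the real check is in __compaction_suitable().
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-zone figures, in pages. */
struct zone_model {
	unsigned long free_pages;
	unsigned long min_wmark;	/* WMARK_MIN */
	unsigned long low_wmark;	/* WMARK_LOW */
};

/* Current behaviour: require free pages above low watermark + (2 << order). */
static bool suitable_low_wmark(const struct zone_model *z, unsigned int order)
{
	return z->free_pages > z->low_wmark + (2UL << order);
}

/* Proposed behaviour: require only the min watermark plus the same bias. */
static bool suitable_min_wmark(const struct zone_model *z, unsigned int order)
{
	return z->free_pages > z->min_wmark + (2UL << order);
}

int main(void)
{
	/* A zone sitting between its min and low watermarks. */
	struct zone_model z = {
		.free_pages = 1200,
		.min_wmark  = 1000,
		.low_wmark  = 1250,
	};
	unsigned int order = 4;	/* an order-4 allocation request */

	/* Old check refuses to compact; the proposed check allows it. */
	printf("low_wmark gate: %s\n",
	       suitable_low_wmark(&z, order) ? "compact" : "skip");
	printf("min_wmark gate: %s\n",
	       suitable_min_wmark(&z, order) ? "compact" : "skip");
	return 0;
}

With these example figures the low-watermark gate skips compaction even
though the zone is comfortably above min, which is the case the proposed
change is meant to address.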
--
Mel Gorman
SUSE Labs