[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <alpine.DEB.2.10.1607111556580.107663@chino.kir.corp.google.com>
Date: Mon, 11 Jul 2016 16:01:52 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
cc: Vlastimil Babka <vbabka@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [patch] mm, compaction: make sure freeing scanner isn't persistently
expensive
On Thu, 30 Jun 2016, Joonsoo Kim wrote:
> We need to find a root cause of this problem, first.
>
> I guess that this problem would happen when isolate_freepages_block()
> early stop due to watermark check (if your patch is applied to your
> kernel). If scanner meets, cached pfn will be reset and your patch
> doesn't have any effect. So, I guess that scanner doesn't meet.
>
If the scanners meet, we should rely on deferred compaction to suppress
further attempts in the near future. This is outside the scope of this
fix.
> We enter the compaction with enough free memory so stop in
> isolate_freepages_block() should be unlikely event but your number
> shows that it happens frequently?
>
It's not the only reason why freepages will be returned to the buddy
allocator: if locks become contended because we are spending too much time
compacting memory, we can persistently get free pages returned to the end
of the zone and then repeatedly iterate >100GB of memory on every call to
isolate_freepages(), which makes its own contended checks fire more often.
This patch is only an attempt to prevent lenghty iterations when we have
recently scanned the memory and found freepages to not be isolatable.
> In addition, I worry that your previous patch that makes
> isolate_freepages_block() stop when watermark doesn't meet would cause
> compaction non-progress. Amount of free memory can be flutuated so
> watermark fail would be temporaral. We need to break compaction in
> this case? It would decrease compaction success rate if there is a
> memory hogger in parallel. Any idea?
>
In my opinion, which I think is quite well known by now, the compaction
freeing scanner shouldn't be checking _any_ watermark. The end result is
that we're migrating memory, not allocating additional memory; determining
if compaction should be done is best left lower on the stack.
Powered by blists - more mailing lists