Message-ID: <544E12B5.5070008@suse.cz>
Date: Mon, 27 Oct 2014 10:39:01 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
CC: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Minchan Kim <minchan@...nel.org>,
Mel Gorman <mgorman@...e.de>,
Michal Nazarewicz <mina86@...a86.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Christoph Lameter <cl@...ux.com>,
Rik van Riel <riel@...hat.com>,
David Rientjes <rientjes@...gle.com>
Subject: Re: [PATCH 4/5] mm, compaction: always update cached scanner positions

On 10/27/2014 08:35 AM, Joonsoo Kim wrote:
> On Tue, Oct 07, 2014 at 05:33:38PM +0200, Vlastimil Babka wrote:
>> Compaction caches the migration and free scanner positions between compaction
>> invocations, so that the whole zone gets eventually scanned and there is no
>> bias towards the initial scanner positions at the beginning/end of the zone.
>>
>> The cached positions are continuously updated as scanners progress and the
>> updating stops as soon as a page is successfully isolated. The reasoning
>> behind this is that a pageblock where isolation succeeded is likely to succeed
>> again in near future and it should be worth revisiting it.
>>
>> However, the downside is that potentially many pages are rescanned without
>> successful isolation. At worst, there might be a page where isolation from LRU
>> succeeds but migration fails (potentially always). So upon encountering this
>> page, cached position would always stop being updated for no good reason.
>> It might have been useful to let such page be rescanned with sync compaction
>> after async one failed, but this is now handled by caching scanner position
>> for async and sync mode separately since commit 35979ef33931 ("mm, compaction:
>> add per-zone migration pfn cache for async compaction").
>
> Hmm... I'm not sure that this patch is a good thing.
>
> In asynchronous compaction, compaction can easily fail and the
> isolated freepages are returned to the buddy allocator. In this case,
> the next asynchronous compaction would skip those returned freepages
> and both scanners could meet prematurely.

If migration fails, free pages now remain isolated until the next
migration attempt, which should happen within the same compaction run
when it isolates new migratepages - it won't fail completely just
because of one failed migration. It might run out of time due to
need_resched and then yeah, some free pages might be skipped. That's a
tradeoff, but at least my tests don't seem to show reduced success rates.
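
To make that concrete, here is a minimal userspace sketch of the
lifecycle (made-up names and numbers, loosely modeled on struct
compact_control; not the real mm/compaction.c code): isolated
freepages survive a failed migration attempt within one run and only
go back to the buddy allocator when the run ends.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for struct compact_control's page bookkeeping. */
struct compact_control {
	int nr_freepages;	/* isolated free (target) pages we hold */
	int nr_migratepages;	/* isolated source pages */
};

/* Pretend to isolate a batch of source pages; 0 ends the run. */
static int isolate_migratepages(struct compact_control *cc, int round)
{
	cc->nr_migratepages = round < 3 ? 8 : 0;
	return cc->nr_migratepages;
}

/* Pretend migration that fails half the time. */
static bool migrate_pages(struct compact_control *cc)
{
	bool ok = rand() & 1;

	if (ok)
		cc->nr_freepages -= cc->nr_migratepages; /* targets used up */
	cc->nr_migratepages = 0;	/* source pages put back either way */
	return ok;
}

int main(void)
{
	struct compact_control cc = { 0, 0 };

	for (int round = 0; isolate_migratepages(&cc, round); round++) {
		if (cc.nr_freepages < cc.nr_migratepages)
			cc.nr_freepages += 16;	/* free scanner refills */
		bool ok = migrate_pages(&cc);

		printf("round %d: %s, still holding %d freepages\n", round,
		       ok ? "migrated" : "migration failed", cc.nr_freepages);
	}
	/* only now do leftover freepages go back to the buddy allocator */
	printf("run done: releasing %d freepages\n", cc.nr_freepages);
	return 0;
}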

> And, I guess that the pageblock skip feature effectively disables
> pageblock rescanning if there is no freepage during the rescan.

If there's no freepage during the rescan, then the cached free_pfn also
won't point to that pageblock anymore. Regardless of pageblock skip
being set, there will be no second rescan. But there will still be the
first rescan to determine that there are no freepages.
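
Roughly like this toy model of the free scanner (made-up zone layout
and helper; the real isolate_freepages() is more involved) - the
resume position is cached on every step, so an empty block is not
revisited by the next attempt:

#include <stdio.h>

#define PAGEBLOCK_PAGES	512UL
#define ZONE_START	0UL
#define ZONE_END	(8 * PAGEBLOCK_PAGES)

static unsigned long cached_free_pfn = ZONE_END; /* per-zone in the kernel */

/* Stand-in: how many free pages a pageblock yields (0 = none there). */
static int isolate_freepages_block(unsigned long block_start)
{
	return (block_start / PAGEBLOCK_PAGES) % 3 ? 0 : 32;
}

/* One free-scanner invocation: walk down from the cached position. */
static void isolate_freepages(int wanted)
{
	unsigned long block = cached_free_pfn - PAGEBLOCK_PAGES;
	int got = 0;

	while (got < wanted) {
		got += isolate_freepages_block(block);
		cached_free_pfn = block; /* always cache, even on success */
		if (block == ZONE_START)
			break;
		block -= PAGEBLOCK_PAGES;
	}
	printf("isolated %d, next attempt resumes below pfn %lu\n",
	       got, cached_free_pfn);
}

int main(void)
{
	isolate_freepages(32);	/* caches position past empty blocks */
	isolate_freepages(32);	/* resumes there instead of rescanning */
	return 0;
}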

> This patch would
> eliminate the effect of the pageblock skip feature.

I don't think so (as explained above). Also, if free pages were isolated
(and then returned and skipped over), the pageblock should remain
without the skip bit, so after the scanners meet and the positions reset
(which doesn't go hand in hand with the skip bit reset), the next round
will skip over the blocks without freepages and quickly find the blocks
where free pages were skipped in the previous round.
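
A toy model of that interplay (hypothetical helper and layout): the
skip bit is set only when a scan of a block isolates nothing, so a
block whose pages were isolated (and later returned) keeps its bit
clear and is found again quickly once the positions reset.

#include <stdbool.h>
#include <stdio.h>

#define NR_BLOCKS 8

static bool skip_bit[NR_BLOCKS];

/* Stand-in for scanning one pageblock; returns pages isolated. */
static int scan_block(int b)
{
	return (b % 2) ? 24 : 0;	/* odd blocks have free pages */
}

static void compaction_round(void)
{
	for (int b = 0; b < NR_BLOCKS; b++) {
		if (skip_bit[b])
			continue;	/* cheap: empty blocks not rescanned */

		int got = scan_block(b);

		if (!got)
			skip_bit[b] = true; /* nothing here, skip next time */
		else
			printf("block %d: isolated %d pages\n", b, got);
	}
}

int main(void)
{
	compaction_round();	/* first round marks the empty blocks */
	puts("-- scanners met, positions reset, skip bits kept --");
	compaction_round();	/* second round revisits only useful blocks */
	return 0;
}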

> IIUC, the compaction logic assumes that there are many temporary
> failure conditions. Retrying from other pageblocks would reduce the
> effect of such temporary failures, so the implementation looks the
> way it does.

The implementation of pfn caching was written at a time when we did not
keep isolated free pages between migration attempts within a single
compaction run. And the idea of async compaction is to try with minimal
effort (and thus latency), and if there's a failure, try somewhere else.
Making sure we don't skip anything doesn't seem productive.
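
For reference, the rough shape of the per-mode cache from commit
35979ef33931 mentioned above (the two-element array mirrors struct
zone; the update policy shown - sync progress advancing both cached
positions - is my sketch of the idea, not a copy of the kernel code):

#include <stdbool.h>
#include <stdio.h>

/* [0] = async, [1] = sync, mirroring struct zone's
 * compact_cached_migrate_pfn[2]; the harness around it is made up. */
static unsigned long cached_migrate_pfn[2];

static void update_cached_migrate(bool sync, unsigned long pfn)
{
	/* any progress advances the async cache... */
	if (pfn > cached_migrate_pfn[0])
		cached_migrate_pfn[0] = pfn;
	/* ...but only sync progress advances the sync one */
	if (sync && pfn > cached_migrate_pfn[1])
		cached_migrate_pfn[1] = pfn;
}

int main(void)
{
	update_cached_migrate(false, 4096);	/* async pass got this far */
	update_cached_migrate(true, 1024);	/* sync pass lags behind */
	printf("async resumes at %lu, sync resumes at %lu\n",
	       cached_migrate_pfn[0], cached_migrate_pfn[1]);
	return 0;
}

So a page that async compaction gave up on is still reached by a later
sync pass, which resumes from its own, less advanced position.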

> If what we want is to scan each page once in each epoch, we could
> implement the compaction logic differently.

Well, I'm open to suggestions :) I can't say the current set of
heuristics is straightforward to reason about.
> Please let me know if I'm missing something.
>
> Thanks.
>