Message-ID: <1a4548bd-a9b3-e6c0-7b4f-e75b5e4f4cbd@suse.cz>
Date: Thu, 13 Oct 2016 13:46:05 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Joonsoo Kim <iamjoonsoo.kim@....com>
Cc: Johannes Weiner <hannes@...xchg.org>,
Mel Gorman <mgorman@...hsingularity.net>,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
kernel-team@...com
Subject: Re: [RFC 4/4] mm, page_alloc: disallow migratetype fallback in
fastpath
On 10/13/2016 09:58 AM, Joonsoo Kim wrote:
> On Thu, Sep 29, 2016 at 11:05:48PM +0200, Vlastimil Babka wrote:
>> The previous patch has adjusted async compaction so that it helps against
>> longterm fragmentation when compacting for a non-MOVABLE high-order allocation.
>> The goal of this patch is to force such allocations to go through compaction
>> once before being allowed to fall back to a pageblock of a different migratetype
>> (e.g. MOVABLE). In contexts where compaction is not allowed (and for order-0
>> allocations), this delayed fallback possibility can still help by trying a
>> different zone where fallback might not be needed and potentially waking up
>> kswapd earlier.
>
> Hmm... can we justify this compaction overhead in the case that there
> are high-order freepages in other migratetype pageblocks? There is no
> guarantee that long-term fragmentation happens and affects system
> performance.
Yeah, I hoped testing would show whether this makes any difference and
what the overhead is, and then we can decide whether it's worth it.
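
To make the mechanism concrete, here is a rough standalone model of the
fastpath idea (hypothetical names, compiles as a plain C program, not
the actual patch): the fast path only takes pages from free lists of the
requested migratetype, and stealing from another migratetype's pageblock
is deferred until the caller has had a chance to try compaction or
another zone.

/*
 * Standalone model of the fastpath idea (hypothetical names, not the
 * actual patch): the fast path only takes pages from the requested
 * migratetype's free lists; stealing from another migratetype's
 * pageblock is deferred until the caller has tried compaction or
 * another zone.
 */
#include <stdbool.h>
#include <stdio.h>

enum migratetype { MT_UNMOVABLE, MT_MOVABLE, MT_RECLAIMABLE, NR_MT };

struct zone_model {
	/* free pages of the requested order, split per migratetype */
	unsigned long free[NR_MT];
};

/* Take a page from the requested migratetype only. */
static bool rmqueue_native(struct zone_model *z, enum migratetype mt)
{
	if (z->free[mt]) {
		z->free[mt]--;
		return true;
	}
	return false;
}

/* Fallback: steal from any other migratetype's free list. */
static bool rmqueue_fallback(struct zone_model *z, enum migratetype mt)
{
	for (int other = 0; other < NR_MT; other++) {
		if (other != mt && z->free[other]) {
			z->free[other]--;
			return true;
		}
	}
	return false;
}

/*
 * Fast path: allow_fallback is false, so a miss on the native free
 * list fails the zone instead of fragmenting a foreign pageblock.
 * The caller retries with allow_fallback = true only after compaction
 * (or another zone, or kswapd) has had a chance to help.
 */
static bool alloc_model(struct zone_model *z, enum migratetype mt,
			bool allow_fallback)
{
	if (rmqueue_native(z, mt))
		return true;
	return allow_fallback && rmqueue_fallback(z, mt);
}

int main(void)
{
	struct zone_model z = { .free = { [MT_MOVABLE] = 1 } };

	/* fastpath: UNMOVABLE request must not steal from MOVABLE */
	printf("fastpath: %d\n", alloc_model(&z, MT_UNMOVABLE, false));
	/* slowpath: after compaction was tried, fallback is permitted */
	printf("slowpath: %d\n", alloc_model(&z, MT_UNMOVABLE, true));
	return 0;
}

The real allocator is of course far more involved (per-cpu lists,
orders, watermarks), but the fallback gating in the fast path is the
part under discussion here.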
> And it would easily fail to compact an unmovable pageblock, since
> there would not be migratable pages there if everything works as
> intended. So I guess that checking it over and over doesn't help to
> reduce fragmentation and just increases allocation latency.
The pageblock isolation_suitable heuristics of compaction should
mitigate repeatedly rescanning blocks without success. We could also
add a per-zone flag that gets set on a fallback allocation event and
cleared by a finished compaction, or something along those lines.
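
A sketch of that flag, with made-up names (not actual kernel code),
could look something like this:

/*
 * Minimal sketch of the per-zone flag idea (hypothetical names, not
 * actual kernel code): a migratetype fallback marks the zone, a
 * finished compaction run clears the mark, and compaction for a
 * non-MOVABLE high-order request is skipped while the mark is clear.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct zone_frag_state {
	/* set when an allocation had to steal from a foreign pageblock */
	atomic_bool fallback_seen;
};

/* Called from the allocation path when a migratetype fallback happens. */
static void note_fallback_event(struct zone_frag_state *zs)
{
	atomic_store_explicit(&zs->fallback_seen, true, memory_order_relaxed);
}

/* Called when compaction finishes a full pass over the zone. */
static void compaction_finished(struct zone_frag_state *zs)
{
	atomic_store_explicit(&zs->fallback_seen, false, memory_order_relaxed);
}

/*
 * Checked before compacting for a non-MOVABLE high-order request: if
 * no fallback happened since the last finished compaction, rescanning
 * the zone is unlikely to help and can be skipped.
 */
static bool compaction_worth_trying(struct zone_frag_state *zs)
{
	return atomic_load_explicit(&zs->fallback_seen, memory_order_relaxed);
}

int main(void)
{
	struct zone_frag_state zs = { .fallback_seen = false };

	printf("before any fallback: %d\n", compaction_worth_trying(&zs));
	note_fallback_event(&zs);
	printf("after a fallback:    %d\n", compaction_worth_trying(&zs));
	compaction_finished(&zs);
	printf("after compaction:    %d\n", compaction_worth_trying(&zs));
	return 0;
}

The point is just to make the check cheap: a zone that compaction has
already finished, and that has seen no fallback since, is unlikely to
benefit from another scan, so it can be skipped.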
> Thanks.
>