Message-ID: <20140725123646.GF10819@suse.de>
Date: Fri, 25 Jul 2014 13:36:46 +0100
From: Mel Gorman <mgorman@...e.de>
To: Vlastimil Babka <vbabka@...e.cz>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
linux-kernel@...r.kernel.org, Joonsoo Kim <iamjoonsoo.kim@....com>,
Michal Nazarewicz <mina86@...a86.com>,
Naoya Horiguchi <n-horiguchi@...jp.nec.com>,
Christoph Lameter <cl@...ux.com>,
Rik van Riel <riel@...hat.com>,
Minchan Kim <minchan@...nel.org>,
Zhang Yanfei <zhangyanfei@...fujitsu.com>
Subject: Re: [PATCH V4 11/15] mm, compaction: skip buddy pages by their order
in the migrate scanner
On Wed, Jul 16, 2014 at 03:48:19PM +0200, Vlastimil Babka wrote:
> The migration scanner skips PageBuddy pages, but does not consider their order,
> as checking page_order() is generally unsafe without holding the zone->lock,
> and acquiring the lock just for the check would not be a good tradeoff.
>
> Still, knowing the order could avoid some iterations over the rest of the buddy
> page. If we are careful, the race window between the PageBuddy() check and the
> page_order() read is small, and the worst that can happen is that we skip too
> much and miss some isolation candidates. This is not that bad, as compaction
> can already fail for many other reasons, such as parallel allocations, and
> those have a much larger race window.
>
> This patch therefore makes the migration scanner obtain the buddy page order
> and use it to skip the whole buddy page, if the order appears to be in the
> valid range.
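
For reference, I'd expect the migrate scanner side to end up looking roughly
like the sketch below (written from the description above, not pasted from the
patch, so treat names like freepage_order as illustrative):

	if (PageBuddy(page)) {
		/*
		 * Racy read: without zone->lock the value may be stale or
		 * transient, so only act on it if it is in the valid range.
		 */
		unsigned long freepage_order = page_order_unsafe(page);

		/*
		 * The surrounding for loop is assumed to advance low_pfn by
		 * one on each iteration, hence the -1 when skipping ahead.
		 */
		if (freepage_order > 0 && freepage_order < MAX_ORDER)
			low_pfn += (1UL << freepage_order) - 1;
		continue;
	}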
>
> It is important that page_order() is read only once, so that the value used in
> the checks and in the pfn calculation is the same. In theory, however, the
> compiler could replace the local variable with multiple inlined page_order()
> reads. The patch therefore introduces page_order_unsafe(), which uses
> ACCESS_ONCE() to prevent this.
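
And the helper itself is presumably just an ACCESS_ONCE() variant of
page_order(), something along these lines (my sketch, not quoted from the
patch):

	/*
	 * Like page_order(), but for callers that cannot take zone->lock.
	 * ACCESS_ONCE() forces a single read of page_private(), so the
	 * compiler cannot re-read it between the range check and the use
	 * of the value.
	 */
	static inline unsigned long page_order_unsafe(struct page *page)
	{
		return ACCESS_ONCE(page_private(page));
	}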
>
> Testing with stress-highalloc from mmtests shows a 15% reduction in the number
> of pages scanned by the migration scanner. The reduction is >60% with
> __GFP_NO_KSWAPD allocations, along with success rates better by a few percent.
> This change is also a prerequisite for a later patch that detects when a
> cc->order block of pages contains non-buddy pages that cannot be isolated, so
> the scanner should skip to the next block immediately.
>
> Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
> Reviewed-by: Zhang Yanfei <zhangyanfei@...fujitsu.com>
> Acked-by: Minchan Kim <minchan@...nel.org>
Acked-by: Mel Gorman <mgorman@...e.de>
--
Mel Gorman
SUSE Labs