Message-Id: <20171102121706.21504-1-vbabka@suse.cz>
Date: Thu, 2 Nov 2017 13:17:04 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Mel Gorman <mgorman@...hsingularity.net>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH 1/3] mm, compaction: extend pageblock_skip_persistent() to all compound pages

The pageblock_skip_persistent() function checks for HugeTLB pages of pageblock
order. When clearing pageblock skip bits for compaction, the bits are not
cleared for such pageblocks, because they cannot contain base pages suitable
for migration, nor free pages to use as migration targets.

This optimization can simply be extended to all compound pages of order equal
to or larger than pageblock order, because migrating such pages (if they
support it) cannot help sub-pageblock fragmentation. This includes THPs and
also gigantic HugeTLB pages, which the current implementation doesn't
persistently skip, due to a strict pageblock_order equality check and because
it doesn't recognize tail pages.

While THP pages are generally less "persistent" than HugeTLB, we can still
expect that if a THP exists at the point of __reset_isolation_suitable(), it
will also exist during the subsequent compaction run. The time difference here
could actually be smaller than between a compaction run that sets a
(non-persistent) skip bit on a THP, and the next compaction run that observes
it.
Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
mm/compaction.c | 25 ++++++++++++++-----------
1 file changed, 14 insertions(+), 11 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index 445490ab2603..be7ab160f251 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -218,17 +218,21 @@ static void reset_cached_positions(struct zone *zone)
}
/*
- * Hugetlbfs pages should consistenly be skipped until updated by the hugetlb
- * subsystem. It is always pointless to compact pages of pageblock_order and
- * the free scanner can reconsider when no longer huge.
+ * Compound pages of >= pageblock_order should consistently be skipped until
+ * released. It is always pointless to compact pages of such order (if they are
+ * migratable), and the pageblocks they occupy cannot contain any free pages.
*/
-static bool pageblock_skip_persistent(struct page *page, unsigned int order)
+static bool pageblock_skip_persistent(struct page *page)
{
- if (!PageHuge(page))
+ if (!PageCompound(page))
return false;
- if (order != pageblock_order)
- return false;
- return true;
+
+ page = compound_head(page);
+
+ if (compound_order(page) >= pageblock_order)
+ return true;
+
+ return false;
}
/*
@@ -255,7 +259,7 @@ static void __reset_isolation_suitable(struct zone *zone)
continue;
if (zone != page_zone(page))
continue;
- if (pageblock_skip_persistent(page, compound_order(page)))
+ if (pageblock_skip_persistent(page))
continue;
clear_pageblock_skip(page);
@@ -322,8 +326,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
return true;
}
-static inline bool pageblock_skip_persistent(struct page *page,
- unsigned int order)
+static inline bool pageblock_skip_persistent(struct page *page)
{
return false;
}
--
2.14.3