Message-Id: <20171102121706.21504-3-vbabka@suse.cz>
Date: Thu, 2 Nov 2017 13:17:06 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Mel Gorman <mgorman@...hsingularity.net>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Vlastimil Babka <vbabka@...e.cz>
Subject: [PATCH 3/3] mm, compaction: remove unneeded pageblock_skip_persistent() checks

Commit f3c931633a59 ("mm, compaction: persistently skip hugetlbfs pageblocks")
introduced pageblock_skip_persistent() checks into the migration and free
scanners, to make sure pageblocks that should be persistently skipped are
marked as such, regardless of the ignore_skip_hint flag.

Since the previous patch introduced a new no_set_skip_hint flag,
ignore_skip_hint no longer prevents marking pageblocks as skipped, so the
special cases can be removed. The relevant pageblocks will be marked as
skipped by the common logic, which marks each pageblock from which no pages
could be isolated. This makes the code simpler.

Signed-off-by: Vlastimil Babka <vbabka@...e.cz>
---
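
Note: the "common logic" referred to above is the scanners' usual
end-of-pageblock handling, which records a skip hint whenever nothing could
be isolated from a pageblock, and which the previous patch made conditional
on no_set_skip_hint rather than ignore_skip_hint. A minimal userspace sketch
of that decision follows; scan_ctrl, finish_pageblock and block_skip_bit are
simplified stand-ins for illustration, not the real struct compact_control /
update_pageblock_skip() interfaces.

/* Toy model of the skip-hint decision after this series; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct scan_ctrl {
	bool ignore_skip_hint;	/* scanner ignores existing skip bits */
	bool no_set_skip_hint;	/* scanner must not record new skip bits */
	bool block_skip_bit;	/* per-pageblock "skip me next time" bit */
};

/*
 * Common end-of-pageblock path: if nothing was isolated from the block,
 * record a skip hint for it - unless setting hints is disabled.
 */
static void finish_pageblock(struct scan_ctrl *cc, unsigned long nr_isolated)
{
	if (nr_isolated == 0 && !cc->no_set_skip_hint)
		cc->block_skip_bit = true;
}

int main(void)
{
	/*
	 * A hugetlbfs pageblock yields nothing to isolate. Even for a
	 * scanner that ignores existing skip hints, the common path still
	 * records the skip bit, so the scanners no longer need their own
	 * pageblock_skip_persistent() branches.
	 */
	struct scan_ctrl cc = {
		.ignore_skip_hint = true,
		.no_set_skip_hint = false,
		.block_skip_bit = false,
	};

	finish_pageblock(&cc, 0);
	printf("skip bit set: %d\n", cc.block_skip_bit);	/* prints 1 */
	return 0;
}

In other words, once hint setting is keyed to no_set_skip_hint instead of
ignore_skip_hint, a hugetlbfs pageblock that yields no isolated pages gets
its skip bit set by this common path anyway, which is why the per-scanner
special cases below can go.
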
mm/compaction.c | 18 +++---------------
1 file changed, 3 insertions(+), 15 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index a92860d89679..b557aac09e92 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -475,10 +475,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		if (PageCompound(page)) {
 			const unsigned int order = compound_order(page);
 
-			if (pageblock_skip_persistent(page, order)) {
-				set_pageblock_skip(page);
-				blockpfn = end_pfn;
-			} else if (likely(order < MAX_ORDER)) {
+			if (likely(order < MAX_ORDER)) {
 				blockpfn += (1UL << order) - 1;
 				cursor += (1UL << order) - 1;
 			}
@@ -800,10 +797,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (PageCompound(page)) {
 			const unsigned int order = compound_order(page);
 
-			if (pageblock_skip_persistent(page, order)) {
-				set_pageblock_skip(page);
-				low_pfn = end_pfn;
-			} else if (likely(order < MAX_ORDER))
+			if (likely(order < MAX_ORDER))
 				low_pfn += (1UL << order) - 1;
 			goto isolate_fail;
 		}
@@ -866,13 +860,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			 * is safe to read and it's 0 for tail pages.
 			 */
 			if (unlikely(PageCompound(page))) {
-				const unsigned int order = compound_order(page);
-
-				if (pageblock_skip_persistent(page, order)) {
-					set_pageblock_skip(page);
-					low_pfn = end_pfn;
-				} else
-					low_pfn += (1UL << order) - 1;
+				low_pfn += (1UL << compound_order(page)) - 1;
 				goto isolate_fail;
 			}
 		}
--
2.14.3