Message-Id: <20210712030701.4000097-136-willy@infradead.org>
Date: Mon, 12 Jul 2021 04:06:59 +0100
From: "Matthew Wilcox (Oracle)" <willy@...radead.org>
To: linux-kernel@...r.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@...radead.org>,
linux-mm@...ck.org, linux-fsdevel@...r.kernel.org
Subject: [PATCH v13 135/137] mm/vmscan: Optimise shrink_page_list for smaller THPs

A THP which is smaller than a PMD cannot be mapped by a PMD entry, so
it does not need the extra work in try_to_unmap() of trying to split a
PMD entry.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/vmscan.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
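
For reviewers: a standalone userspace sketch of the new predicate, not
kernel code. struct page, PageTransHuge(), thp_order() and the TTU_*
flag values below are stubbed with illustrative definitions, and the
HPAGE_PMD_ORDER value assumes x86-64 with 4KiB pages; it only
demonstrates that a sub-PMD THP no longer gets TTU_SPLIT_HUGE_PMD.

	#include <stdbool.h>
	#include <stdio.h>

	#define HPAGE_PMD_ORDER		9	/* 2MiB PMD on x86-64 with 4KiB pages */
	#define TTU_BATCH_FLUSH		0x1	/* illustrative flag values only */
	#define TTU_SPLIT_HUGE_PMD	0x2

	struct page { unsigned int order; };	/* stand-in for struct page */

	static bool PageTransHuge(const struct page *page)
	{
		return page->order > 0;	/* simplification: any compound page */
	}

	static unsigned int thp_order(const struct page *page)
	{
		return page->order;
	}

	int main(void)
	{
		struct page small_thp = { .order = 4 };	/* 64KiB THP, sub-PMD */
		struct page pmd_thp = { .order = HPAGE_PMD_ORDER };
		const struct page *pages[] = { &small_thp, &pmd_thp };

		for (unsigned int i = 0; i < 2; i++) {
			unsigned int flags = TTU_BATCH_FLUSH;

			/* The check added by this patch: only PMD-sized or
			 * larger THPs can be PMD-mapped, so only they need
			 * the PMD-split flag.
			 */
			if (PageTransHuge(pages[i]) &&
			    thp_order(pages[i]) >= HPAGE_PMD_ORDER)
				flags |= TTU_SPLIT_HUGE_PMD;

			printf("order %u -> flags 0x%x\n",
			       pages[i]->order, flags);
		}
		return 0;
	}

Compiled and run, this prints flags 0x1 for the order-4 page and 0x3
for the PMD-order page, matching the intent of the change below.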
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8b17e46dbf32..433956675107 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1496,7 +1496,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 			enum ttu_flags flags = TTU_BATCH_FLUSH;
 			bool was_swapbacked = PageSwapBacked(page);
 
-			if (unlikely(PageTransHuge(page)))
+			if (PageTransHuge(page) &&
+			    thp_order(page) >= HPAGE_PMD_ORDER)
 				flags |= TTU_SPLIT_HUGE_PMD;
 
 			try_to_unmap(page, flags);
--
2.30.2