Message-ID: <20251201174627.23295-14-npache@redhat.com>
Date: Mon, 1 Dec 2025 10:46:24 -0700
From: Nico Pache <npache@...hat.com>
To: linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org,
linux-mm@...ck.org,
linux-doc@...r.kernel.org
Cc: david@...hat.com,
ziy@...dia.com,
baolin.wang@...ux.alibaba.com,
lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com,
ryan.roberts@....com,
dev.jain@....com,
corbet@....net,
rostedt@...dmis.org,
mhiramat@...nel.org,
mathieu.desnoyers@...icios.com,
akpm@...ux-foundation.org,
baohua@...nel.org,
willy@...radead.org,
peterx@...hat.com,
wangkefeng.wang@...wei.com,
usamaarif642@...il.com,
sunnanyong@...wei.com,
vishal.moola@...il.com,
thomas.hellstrom@...ux.intel.com,
yang@...amperecomputing.com,
kas@...nel.org,
aarcange@...hat.com,
raquini@...hat.com,
anshuman.khandual@....com,
catalin.marinas@....com,
tiwai@...e.de,
will@...nel.org,
dave.hansen@...ux.intel.com,
jack@...e.cz,
cl@...two.org,
jglisse@...gle.com,
surenb@...gle.com,
zokeefe@...gle.com,
hannes@...xchg.org,
rientjes@...gle.com,
mhocko@...e.com,
rdunlap@...radead.org,
hughd@...gle.com,
richard.weiyang@...il.com,
lance.yang@...ux.dev,
vbabka@...e.cz,
rppt@...nel.org,
jannh@...gle.com,
pfalcato@...e.de
Subject: [PATCH v13 mm-new 13/16] khugepaged: Introduce mTHP collapse support
Enable khugepaged to collapse to mTHP orders. This patch implements the
main scanning logic using a bitmap to track occupied pages and a stack
structure that allows us to find optimal collapse sizes.
Prior to this patch, PMD collapse had three main phases: a lightweight
scanning phase (mmap_read_lock) that determines a potential PMD
collapse, an allocation phase (mmap unlocked), and finally a heavier
collapse phase (mmap_write_lock).
To enable mTHP collapse we make the following changes:
During the PMD scan phase, track occupied pages in a bitmap. When mTHP
orders are enabled, we remove the max_ptes_none restriction during the
scan phase to avoid missing potential mTHP collapse candidates. Once we
have scanned the full PMD range and updated the bitmap to track occupied
pages, we use the bitmap to find the optimal mTHP size.
Implement mthp_collapse() to perform binary recursion on the bitmap
and determine the best eligible order for the collapse. A stack structure
is used instead of traditional recursion to manage the search. The
algorithm splits the bitmap into smaller and smaller chunks to find the
highest-order mTHPs that satisfy the collapse criteria. We start by
attempting the PMD order, then move on to consecutively lower orders
(mTHP collapse). The stack maintains a pair of variables (offset, order),
indicating the number of PTEs from the start of the PMD, and the order of
the potential collapse candidate.
The algorithm for consuming the bitmap works as follows:
1) push (0, HPAGE_PMD_ORDER) onto the stack
2) pop the stack
3) check whether the number of set bits in that (offset, order) pair
   satisfies the max_ptes_none threshold for that order
4) if yes, attempt collapse
5) if no (or the collapse fails), push two new stack items representing
   the left and right halves of the current bitmap range, at the
   next lower order
6) repeat from step (2) until the stack is empty
Below is a diagram representing the algorithm and stack items:
   offset         mid_offset
     |                 |
     |                 |
     v                 v
    --------------------------------------
    |           PTE Page Table           |
    --------------------------------------
    <-----------------><----------------->
          order-1            order-1
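For illustration, the consumption of the bitmap can be mocked up in
userspace. The sketch below is not the kernel code: it uses toy
constants (PMD_ORDER = 4, a 16-bit bitmap), max_ptes_none = 0, and a
printf in place of the actual collapse attempt, but it exercises the
same push/pop splitting as the algorithm above:

#include <stdio.h>
#include <stdint.h>

#define PMD_ORDER 4	/* toy value; the kernel uses HPAGE_PMD_ORDER */
#define MIN_ORDER 2	/* mirrors KHUGEPAGED_MIN_MTHP_ORDER */

struct range { int offset, order; };

/* Count occupied PTEs in [offset, offset + nr). */
static int weight(uint16_t bm, int offset, int nr)
{
	int i, w = 0;

	for (i = offset; i < offset + nr; i++)
		w += (bm >> i) & 1;
	return w;
}

int main(void)
{
	uint16_t bitmap = 0x0ff0;	/* PTEs 4-11 occupied by the scan */
	struct range stack[1 << (PMD_ORDER - MIN_ORDER)];
	int sp = 0;

	stack[sp++] = (struct range){ 0, PMD_ORDER };
	while (sp > 0) {
		struct range r = stack[--sp];
		int nr = 1 << r.order;

		/* max_ptes_none == 0: every PTE must be occupied. */
		if (weight(bitmap, r.offset, nr) == nr) {
			printf("collapse order %d at offset %d\n",
			       r.order, r.offset);
			continue;
		}
		/* Otherwise split the range and retry both halves. */
		if (r.order > MIN_ORDER) {
			stack[sp++] = (struct range){ r.offset + nr / 2,
						      r.order - 1 };
			stack[sp++] = (struct range){ r.offset, r.order - 1 };
		}
	}
	return 0;
}

This collapses PTEs 4-7 and 8-11 as two order-2 ranges, since neither
half of the PMD range is occupied enough for a larger order.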
We currently only support mTHP collapse for max_ptes_none values of 0
and HPAGE_PMD_NR - 1, resulting in the following behavior:
- max_ptes_none=0: Never introduce new empty pages during collapse
- max_ptes_none=HPAGE_PMD_NR-1: Always try collapse to the highest
available mTHP order
Any other max_ptes_none value will emit a warning and skip mTHP collapse
attempts. There should be no behavior change for PMD collapse.
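The per-order policy described above can be rendered as the following
standalone sketch. Note that collapse_max_ptes_none() itself is not
part of this hunk, so the body below is only a reading of the rules
stated here, not the actual helper:

#include <errno.h>

#define HPAGE_PMD_ORDER	9
#define HPAGE_PMD_NR	(1 << HPAGE_PMD_ORDER)

/*
 * Illustrative only -- a rendering of the rules above, not the helper
 * used by this series. Returns how many none/zero PTEs are tolerated
 * for a collapse at @order.
 */
static int sketch_max_ptes_none(unsigned int order,
				unsigned int sysfs_max_ptes_none)
{
	if (order == HPAGE_PMD_ORDER)		/* PMD behavior unchanged */
		return sysfs_max_ptes_none;
	if (sysfs_max_ptes_none == 0)		/* never add new none pages */
		return 0;
	if (sysfs_max_ptes_none == HPAGE_PMD_NR - 1)
		return (1 << order) - 1;	/* take the largest fit */
	return -EINVAL;				/* otherwise skip mTHP */
}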
Once we determine which mTHP size fits best in that PMD range, a collapse
is attempted. A minimum collapse order of 2 is used, as this is the lowest
order supported by anonymous memory, as defined by THP_ORDERS_ALL_ANON.
mTHP collapses reject regions containing swapped-out or shared pages.
This is because adding new entries can create new none pages, and these
may lead to constant promotion into a higher-order (m)THP. A similar
issue can occur with max_ptes_none > HPAGE_PMD_NR/2: a collapse then
introduces at least 2x the number of occupied pages, so a future scan
will satisfy the promotion condition once again. This issue is
prevented via the collapse_max_ptes_none() function, which imposes the
max_ptes_none restrictions above.
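To make the creep concrete, here is a toy calculation. It assumes the
tolerated none count scales proportionally with order; that scaling
lives in collapse_max_ptes_none() and is an assumption here, not part
of this hunk:

#include <stdio.h>

#define HPAGE_PMD_ORDER	9
#define HPAGE_PMD_NR	(1 << HPAGE_PMD_ORDER)	/* 512 */

int main(void)
{
	unsigned int max_ptes_none = 300;		/* > HPAGE_PMD_NR / 2 */
	unsigned int order = 8, nr = 1U << order;	/* 256 PTEs */
	/* Assumed proportional scaling of the tolerance per order. */
	unsigned int tolerated = max_ptes_none >> (HPAGE_PMD_ORDER - order);

	/* 256 - 150 = 106 occupied PTEs permit an order-8 collapse... */
	printf("order-8 collapse needs %u of %u PTEs occupied\n",
	       nr - tolerated, nr);
	/*
	 * ...which fills the range with 256 occupied PTEs, already more
	 * than the 512 - 300 = 212 a PMD collapse needs, so the next
	 * scan promotes again and creates yet more none pages.
	 */
	printf("PMD collapse needs %u, range now has %u\n",
	       HPAGE_PMD_NR - max_ptes_none, nr);
	return 0;
}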
Currently mTHP collapse is not supported for MADV_COLLAPSE, which will
only attempt PMD collapse.
We can also remove the check for is_khugepaged inside the PMD scan as
the collapse_max_ptes_none() function handles this logic now.
Reviewed-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
Signed-off-by: Nico Pache <npache@...hat.com>
---
mm/khugepaged.c | 183 +++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 175 insertions(+), 8 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 33b70ca070b4..fcb927cd5456 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -92,6 +92,32 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
static struct kmem_cache *mm_slot_cache __ro_after_init;
+#define KHUGEPAGED_MIN_MTHP_ORDER 2
+/*
+ * The maximum number of mTHP ranges that can be stored on the stack.
+ * This is calculated based on the number of PTE entries in a PTE page table
+ * and the minimum mTHP order.
+ *
+ * ilog2(MAX_PTRS_PER_PTE) is log2 of the maximum number of PTE entries.
+ * This gives you the PMD_ORDER, and is needed in place of HPAGE_PMD_ORDER due
+ * to restrictions on some architectures (e.g. ppc64le).
+ *
+ * At most there will be 1 << (PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER) mTHP ranges
+ */
+#define MTHP_STACK_SIZE (1UL << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
+
+/*
+ * Defines a range of PTE entries in a PTE page table which are being
+ * considered for (m)THP collapse.
+ *
+ * @offset: the offset of the first PTE entry in a PMD range.
+ * @order: the order of the PTE entries being considered for collapse.
+ */
+struct mthp_range {
+ u16 offset;
+ u8 order;
+};
+
struct collapse_control {
bool is_khugepaged;
@@ -100,6 +126,11 @@ struct collapse_control {
/* nodemask for allocation fallback */
nodemask_t alloc_nmask;
+
+ /* bitmap used for mTHP collapse */
+ DECLARE_BITMAP(mthp_bitmap, MAX_PTRS_PER_PTE);
+ DECLARE_BITMAP(mthp_bitmap_mask, MAX_PTRS_PER_PTE);
+ struct mthp_range mthp_bitmap_stack[MTHP_STACK_SIZE];
};
/**
@@ -1375,6 +1406,121 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long start_addr,
return result;
}
+static void mthp_stack_push(struct collapse_control *cc, int *stack_size,
+ u16 offset, u8 order)
+{
+ const int size = *stack_size;
+ struct mthp_range *stack = &cc->mthp_bitmap_stack[size];
+
+ VM_WARN_ON_ONCE(size >= MTHP_STACK_SIZE);
+ stack->order = order;
+ stack->offset = offset;
+ (*stack_size)++;
+}
+
+static struct mthp_range mthp_stack_pop(struct collapse_control *cc, int *stack_size)
+{
+ const int size = *stack_size;
+
+ VM_WARN_ON_ONCE(size <= 0);
+ (*stack_size)--;
+ return cc->mthp_bitmap_stack[size - 1];
+}
+
+static unsigned int mthp_nr_occupied_pte_entries(struct collapse_control *cc,
+ u16 offset, unsigned long nr_pte_entries)
+{
+ bitmap_zero(cc->mthp_bitmap_mask, HPAGE_PMD_NR);
+ bitmap_set(cc->mthp_bitmap_mask, offset, nr_pte_entries);
+ return bitmap_weight_and(cc->mthp_bitmap, cc->mthp_bitmap_mask, HPAGE_PMD_NR);
+}
+
+/*
+ * mthp_collapse() consumes the bitmap that is generated during
+ * collapse_scan_pmd() to determine what regions and mTHP orders fit best.
+ *
+ * Each bit in cc->mthp_bitmap represents a single occupied (!none/zero) page.
+ * A stack structure cc->mthp_bitmap_stack is used to check different regions
+ * of the bitmap for collapse eligibility. The stack maintains a pair of
+ * variables (offset, order), indicating the number of PTEs from the start of
+ * the PMD, and the order of the potential collapse candidate respectively. We
+ * start at the PMD order and check if it is eligible for collapse; if not, we
+ * add two entries to the stack at a lower order to represent the left and right
+ * halves of the PTE page table we are examining.
+ *
+ *    offset         mid_offset
+ *      |                 |
+ *      |                 |
+ *      v                 v
+ *     --------------------------------------
+ *     |          cc->mthp_bitmap           |
+ *     --------------------------------------
+ *     <-----------------><----------------->
+ *           order-1            order-1
+ *
+ * For each of these, we determine how many PTE entries are occupied in the
+ * range of PTE entries we propose to collapse, then we compare this to a
+ * threshold number of PTE entries which would need to be occupied for a
+ * collapse to be permitted at that order (accounting for max_ptes_none).
+ *
+ * If a collapse is permitted, we attempt to collapse the PTE range into a
+ * mTHP.
+ */
+static int mthp_collapse(struct mm_struct *mm, unsigned long address,
+ int referenced, int unmapped, struct collapse_control *cc,
+ bool *mmap_locked, unsigned long enabled_orders)
+{
+ unsigned int max_ptes_none, nr_occupied_ptes;
+ struct mthp_range range;
+ unsigned long collapse_address;
+ int collapsed = 0, stack_size = 0;
+ unsigned long nr_pte_entries;
+ u16 offset;
+ u8 order;
+
+ mthp_stack_push(cc, &stack_size, 0, HPAGE_PMD_ORDER);
+
+ while (stack_size > 0) {
+ range = mthp_stack_pop(cc, &stack_size);
+ order = range.order;
+ offset = range.offset;
+ nr_pte_entries = 1UL << order;
+
+ if (!test_bit(order, &enabled_orders))
+ goto next_order;
+
+ max_ptes_none = collapse_max_ptes_none(order, !cc->is_khugepaged);
+
+ if (max_ptes_none == -EINVAL)
+ return collapsed;
+
+ nr_occupied_ptes = mthp_nr_occupied_pte_entries(cc, offset, nr_pte_entries);
+
+ if (nr_occupied_ptes >= nr_pte_entries - max_ptes_none) {
+ int ret;
+
+ collapse_address = address + offset * PAGE_SIZE;
+ ret = collapse_huge_page(mm, collapse_address, referenced,
+ unmapped, cc, mmap_locked,
+ order);
+ if (ret == SCAN_SUCCEED) {
+ collapsed += nr_pte_entries;
+ continue;
+ }
+ }
+
+next_order:
+ if (order > KHUGEPAGED_MIN_MTHP_ORDER) {
+ const u8 next_order = order - 1;
+ const u16 mid_offset = offset + (nr_pte_entries / 2);
+
+ mthp_stack_push(cc, &stack_size, mid_offset, next_order);
+ mthp_stack_push(cc, &stack_size, offset, next_order);
+ }
+ }
+ return collapsed;
+}
+
static int collapse_scan_pmd(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long start_addr, bool *mmap_locked,
@@ -1382,11 +1528,15 @@ static int collapse_scan_pmd(struct mm_struct *mm,
{
pmd_t *pmd;
pte_t *pte, *_pte;
+ int i;
int result = SCAN_FAIL, referenced = 0;
- int none_or_zero = 0, shared = 0;
+ int none_or_zero = 0, shared = 0, nr_collapsed = 0;
struct page *page = NULL;
+ unsigned int max_ptes_none;
struct folio *folio = NULL;
unsigned long addr;
+ unsigned long enabled_orders;
+ bool full_scan = true;
spinlock_t *ptl;
int node = NUMA_NO_NODE, unmapped = 0;
@@ -1396,22 +1546,34 @@ static int collapse_scan_pmd(struct mm_struct *mm,
if (result != SCAN_SUCCEED)
goto out;
+ bitmap_zero(cc->mthp_bitmap, HPAGE_PMD_NR);
memset(cc->node_load, 0, sizeof(cc->node_load));
nodes_clear(cc->alloc_nmask);
+
+ enabled_orders = collapse_allowable_orders(vma, vma->vm_flags, cc->is_khugepaged);
+
+ /*
+ * If PMD is the only enabled order, enforce max_ptes_none, otherwise
+ * scan all pages to populate the bitmap for mTHP collapse.
+ */
+ if (cc->is_khugepaged && enabled_orders == BIT(HPAGE_PMD_ORDER))
+ full_scan = false;
+ max_ptes_none = collapse_max_ptes_none(HPAGE_PMD_ORDER, full_scan);
+
pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
if (!pte) {
result = SCAN_NO_PTE_TABLE;
goto out;
}
- for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
- _pte++, addr += PAGE_SIZE) {
+ for (i = 0; i < HPAGE_PMD_NR; i++) {
+ _pte = pte + i;
+ addr = start_addr + i * PAGE_SIZE;
pte_t pteval = ptep_get(_pte);
if (pte_none_or_zero(pteval)) {
++none_or_zero;
if (!userfaultfd_armed(vma) &&
- (!cc->is_khugepaged ||
- none_or_zero <= khugepaged_max_ptes_none)) {
+ none_or_zero <= max_ptes_none) {
continue;
} else {
result = SCAN_EXCEED_NONE_PTE;
@@ -1479,6 +1641,8 @@ static int collapse_scan_pmd(struct mm_struct *mm,
}
}
+ /* Set bit for occupied pages */
+ bitmap_set(cc->mthp_bitmap, i, 1);
/*
* Record which node the original page is from and save this
* information to cc->node_load[].
@@ -1535,9 +1699,12 @@ static int collapse_scan_pmd(struct mm_struct *mm,
out_unmap:
pte_unmap_unlock(pte, ptl);
if (result == SCAN_SUCCEED) {
- result = collapse_huge_page(mm, start_addr, referenced,
- unmapped, cc, mmap_locked,
- HPAGE_PMD_ORDER);
+ nr_collapsed = mthp_collapse(mm, start_addr, referenced, unmapped,
+ cc, mmap_locked, enabled_orders);
+ if (nr_collapsed > 0)
+ result = SCAN_SUCCEED;
+ else
+ result = SCAN_FAIL;
}
out:
trace_mm_khugepaged_scan_pmd(mm, folio, referenced,
--
2.51.1