Message-ID: <20250702055742.102808-11-npache@redhat.com>
Date: Tue, 1 Jul 2025 23:57:37 -0600
From: Nico Pache <npache@...hat.com>
To: linux-mm@...ck.org,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Cc: david@...hat.com,
ziy@...dia.com,
baolin.wang@...ux.alibaba.com,
lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com,
ryan.roberts@....com,
dev.jain@....com,
corbet@....net,
rostedt@...dmis.org,
mhiramat@...nel.org,
mathieu.desnoyers@...icios.com,
akpm@...ux-foundation.org,
baohua@...nel.org,
willy@...radead.org,
peterx@...hat.com,
wangkefeng.wang@...wei.com,
usamaarif642@...il.com,
sunnanyong@...wei.com,
vishal.moola@...il.com,
thomas.hellstrom@...ux.intel.com,
yang@...amperecomputing.com,
kirill.shutemov@...ux.intel.com,
aarcange@...hat.com,
raquini@...hat.com,
anshuman.khandual@....com,
catalin.marinas@....com,
tiwai@...e.de,
will@...nel.org,
dave.hansen@...ux.intel.com,
jack@...e.cz,
cl@...two.org,
jglisse@...gle.com,
surenb@...gle.com,
zokeefe@...gle.com,
hannes@...xchg.org,
rientjes@...gle.com,
mhocko@...e.com,
rdunlap@...radead.org
Subject: [PATCH v8 10/15] khugepaged: allow khugepaged to check all anonymous mTHP orders
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
mTHP collapse is now supported, but thp_vma_allowable_order() still only
checks whether PMD-sized THP is allowed for the VMA. This prevents
khugepaged from scanning and collapsing, e.g., 64K mTHP when only the 64K
size is enabled. Switch the checks to thp_vma_allowable_orders() with a
mask covering all large orders of anonymous mTHP.
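
For context, these checks take a bitmask of page orders. A minimal sketch
of the two definitions involved, as found in include/linux/huge_mm.h
(reproduced from mainline for illustration; exact definitions may vary
between trees):

	/* All orders usable for anonymous THP: every order from 2 up to and
	 * including PMD_ORDER (order-0 is not huge, order-1 is unsupported).
	 */
	#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(1) | BIT(0)))

	/* The single-order helper is a wrapper around the mask-based one. */
	#define thp_vma_allowable_order(vma, vm_flags, tva_flags, order) \
		(!!thp_vma_allowable_orders(vma, vm_flags, tva_flags, BIT(order)))

Passing THP_ORDERS_ALL_ANON instead of BIT(PMD_ORDER) therefore lets
thp_vma_allowable_orders() accept an anonymous VMA as soon as any enabled
mTHP order fits it.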
Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
Signed-off-by: Nico Pache <npache@...hat.com>
---
mm/khugepaged.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 50e1d7ef7e6d..b96a7327b9c0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -491,8 +491,11 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 {
 	if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
 	    hugepage_pmd_enabled()) {
-		if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
-					    PMD_ORDER))
+		unsigned long orders = vma_is_anonymous(vma) ?
+					THP_ORDERS_ALL_ANON : BIT(PMD_ORDER);
+
+		if (thp_vma_allowable_orders(vma, vm_flags, TVA_ENFORCE_SYSFS,
+					     orders))
 			__khugepaged_enter(vma->vm_mm);
 	}
 }
@@ -2612,6 +2615,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 
 	vma_iter_init(&vmi, mm, khugepaged_scan.address);
 	for_each_vma(vmi, vma) {
+		unsigned long orders = vma_is_anonymous(vma) ?
+					THP_ORDERS_ALL_ANON : BIT(PMD_ORDER);
 		unsigned long hstart, hend;
 
 		cond_resched();
@@ -2619,8 +2624,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
 			progress++;
 			break;
 		}
-		if (!thp_vma_allowable_order(vma, vma->vm_flags,
-					TVA_ENFORCE_SYSFS, PMD_ORDER)) {
+		if (!thp_vma_allowable_orders(vma, vma->vm_flags,
+					TVA_ENFORCE_SYSFS, orders)) {
 skip:
 			progress++;
 			continue;
--
2.49.0