Message-ID: <20250819134205.622806-10-npache@redhat.com>
Date: Tue, 19 Aug 2025 07:42:01 -0600
From: Nico Pache <npache@...hat.com>
To: linux-mm@...ck.org,
linux-doc@...r.kernel.org,
linux-kernel@...r.kernel.org,
linux-trace-kernel@...r.kernel.org
Cc: david@...hat.com,
ziy@...dia.com,
baolin.wang@...ux.alibaba.com,
lorenzo.stoakes@...cle.com,
Liam.Howlett@...cle.com,
ryan.roberts@....com,
dev.jain@....com,
corbet@....net,
rostedt@...dmis.org,
mhiramat@...nel.org,
mathieu.desnoyers@...icios.com,
akpm@...ux-foundation.org,
baohua@...nel.org,
willy@...radead.org,
peterx@...hat.com,
wangkefeng.wang@...wei.com,
usamaarif642@...il.com,
sunnanyong@...wei.com,
vishal.moola@...il.com,
thomas.hellstrom@...ux.intel.com,
yang@...amperecomputing.com,
kirill.shutemov@...ux.intel.com,
aarcange@...hat.com,
raquini@...hat.com,
anshuman.khandual@....com,
catalin.marinas@....com,
tiwai@...e.de,
will@...nel.org,
dave.hansen@...ux.intel.com,
jack@...e.cz,
cl@...two.org,
jglisse@...gle.com,
surenb@...gle.com,
zokeefe@...gle.com,
hannes@...xchg.org,
rientjes@...gle.com,
mhocko@...e.com,
rdunlap@...radead.org,
hughd@...gle.com
Subject: [PATCH v10 09/13] khugepaged: enable collapsing mTHPs even when PMD THPs are disabled
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
We have now allowed mTHP collapse, but thp_vma_allowable_order() still only
checks whether the PMD-sized order is allowed to collapse. This prevents
scanning and collapsing of, for example, 64K mTHP when only 64K mTHP is
enabled. Thus, modify the checks to allow all large orders of anonymous mTHP.
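
For illustration only (not part of this patch), a minimal userspace sketch of
the two order masks involved, assuming PMD_ORDER == 9 (x86-64 with 4K pages)
and mirroring the kernel's THP_ORDERS_ALL_ANON definition (all orders from 2
up to and including PMD_ORDER):

#include <stdio.h>

#define PMD_ORDER		9	/* assumed for this sketch */
#define BIT(n)			(1UL << (n))
/* Mirrors the kernel's mask of anonymous THP orders: 2 .. PMD_ORDER. */
#define THP_ORDERS_ALL_ANON	((BIT(PMD_ORDER + 1) - 1) & ~(BIT(1) | BIT(0)))

int main(void)
{
	/* Old behaviour: khugepaged only considered the PMD order. */
	unsigned long pmd_only = BIT(PMD_ORDER);
	/* New behaviour for anonymous VMAs: every enabled mTHP order. */
	unsigned long all_anon = THP_ORDERS_ALL_ANON;

	printf("PMD only: %#lx\n", pmd_only);	/* 0x200 */
	printf("all anon: %#lx\n", all_anon);	/* 0x3fc */
	return 0;
}

So with only a smaller order (e.g. 64K) enabled, the old single-order check
fails while the new mask-based check still lets khugepaged scan the VMA.
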
Acked-by: David Hildenbrand <david@...hat.com>
Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
Signed-off-by: Nico Pache <npache@...hat.com>
---
mm/khugepaged.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7d9b5100bea1..2cadd07341de 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -491,7 +491,11 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 {
 	if (!mm_flags_test(MMF_VM_HUGEPAGE, vma->vm_mm) &&
 	    hugepage_pmd_enabled()) {
-		if (thp_vma_allowable_order(vma, vm_flags, TVA_KHUGEPAGED, PMD_ORDER))
+		unsigned long orders = vma_is_anonymous(vma) ?
+			THP_ORDERS_ALL_ANON : BIT(PMD_ORDER);
+
+		if (thp_vma_allowable_orders(vma, vm_flags, TVA_KHUGEPAGED,
+					     orders))
 			__khugepaged_enter(vma->vm_mm);
 	}
 }
@@ -2671,6 +2675,8 @@ static unsigned int collapse_scan_mm_slot(unsigned int pages, int *result,
 
 	vma_iter_init(&vmi, mm, khugepaged_scan.address);
 	for_each_vma(vmi, vma) {
+		unsigned long orders = vma_is_anonymous(vma) ?
+			THP_ORDERS_ALL_ANON : BIT(PMD_ORDER);
 		unsigned long hstart, hend;
 
 		cond_resched();
@@ -2678,7 +2684,8 @@ static unsigned int collapse_scan_mm_slot(unsigned int pages, int *result,
 			progress++;
 			break;
 		}
-		if (!thp_vma_allowable_order(vma, vma->vm_flags, TVA_KHUGEPAGED, PMD_ORDER)) {
+		if (!thp_vma_allowable_orders(vma, vma->vm_flags,
+					      TVA_KHUGEPAGED, orders)) {
 skip:
 			progress++;
 			continue;
--
2.50.1