Message-ID: <b3588c1a6976cf8d00fe39f5cb8918b42e4d4c3c.1765833318.git.luizcap@redhat.com>
Date: Mon, 15 Dec 2025 16:16:52 -0500
From: Luiz Capitulino <luizcap@...hat.com>
To: linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
david@...nel.org
Cc: ryan.roberts@....com,
akpm@...ux-foundation.org,
lorenzo.stoakes@...cle.com
Subject: [PATCH 10/11] mm/thp: always enable mTHP support

If PMD-sized pages are not supported on an architecture (i.e. the
arch implements arch_has_pmd_leaves() and it returns false), then the
current code disables all THP, including mTHP.
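
For reference, a plausible shape for the helper this patch depends on
(a hypothetical sketch only, not the actual definition, which lives
elsewhere in the tree/series):

  /*
   * Hypothetical sketch: a generic wrapper around the arch hook.
   * Archs that cannot install PMD leaf entries would override
   * arch_has_pmd_leaves() to return false; everyone else gets the
   * default.
   */
  #ifndef arch_has_pmd_leaves
  static inline bool arch_has_pmd_leaves(void)
  {
          return true;    /* default: PMD-sized leaves are supported */
  }
  #endif

  static inline bool pgtable_has_pmd_leaves(void)
  {
          return arch_has_pmd_leaves();
  }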

Fix this by always allowing mTHP to be enabled on all architectures.
When PMD-sized pages are not supported, the corresponding sysfs
entries are not created and their mappings are disallowed at
page-fault time.
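
To illustrate the effect of the order mask below, here is a worked
example (a standalone userspace sketch, assuming a 4K base page size
where PMD_ORDER == 9 and PUD_ORDER == 18, as on x86-64;
THP_ORDERS_ALL_ANON mirrors the kernel's definition):

  #include <stdio.h>

  #define BIT(n)              (1UL << (n))
  #define PMD_ORDER           9
  #define PUD_ORDER           18

  /* Anonymous THP orders: 2..PMD_ORDER */
  #define THP_ORDERS_ALL_ANON ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))

  int main(void)
  {
          unsigned long orders = THP_ORDERS_ALL_ANON;

          /* What this patch does when PMD leaves are unsupported: */
          orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));

          /* Prints 0x1fc: orders 2..8 remain, i.e. mTHP survives. */
          printf("remaining orders: 0x%lx\n", orders);
          return 0;
  }
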
Signed-off-by: Luiz Capitulino <luizcap@...hat.com>
---
 mm/huge_memory.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1e5ea2e47f79..882331592928 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -115,6 +115,9 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	else
 		supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
 
+	if (!pgtable_has_pmd_leaves())
+		supported_orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+
 	orders &= supported_orders;
 	if (!orders)
 		return 0;
@@ -122,7 +125,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	if (!vma->vm_mm)		/* vdso */
 		return 0;
 
-	if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
+	if (vma_thp_disabled(vma, vm_flags, forced_collapse))
 		return 0;
 
 	/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
@@ -806,6 +809,9 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
 	}
 
 	orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
+	if (!pgtable_has_pmd_leaves())
+		orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+
 	order = highest_order(orders);
 	while (orders) {
 		thpsize = thpsize_create(order, *hugepage_kobj);
@@ -905,9 +911,6 @@ static int __init hugepage_init(void)
 	int err;
 	struct kobject *hugepage_kobj;
 
-	if (!pgtable_has_pmd_leaves())
-		return -EINVAL;
-
 	/*
 	 * hugepages can't be allocated by the buddy allocator
 	 */
--
2.52.0