Message-ID: <283a0bdfd6ac7aa334a491422bcae70919c572bd.1763008453.git.baolin.wang@linux.alibaba.com>
Date: Fri, 14 Nov 2025 08:46:32 +0800
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org,
hughd@google.com
Cc: david@redhat.com,
lorenzo.stoakes@oracle.com,
willy@infradead.org,
baolin.wang@linux.alibaba.com,
linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: [PATCH] mm: shmem: allow fallback to smaller large orders for tmpfs mmap() access

After commit 69e0a3b49003 ("mm: shmem: fix the strategy for the tmpfs
'huge=' options"), the large order allocation strategy for tmpfs always
tries PMD-sized large folios first and, if that fails, falls back to
smaller large folios. For tmpfs large folio allocation via mmap(), we
should maintain the same strategy. Let's unify the large order
allocation strategy for tmpfs.
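
As an illustrative sketch only (not part of this patch), the fallback
pattern described above walks an allowed-order bitmask such as
THP_ORDERS_ALL_FILE_DEFAULT from the highest order down. Here
try_alloc_order() is a hypothetical stand-in for the real allocation
call, while highest_order()/next_order() are the existing helpers from
<linux/huge_mm.h>:

static struct folio *alloc_with_fallback(unsigned long orders)
{
	struct folio *folio;
	int order;

	/* Try the largest allowed order first, then the next smaller one. */
	for (order = highest_order(orders); orders;
	     order = next_order(&orders, order)) {
		folio = try_alloc_order(order);	/* hypothetical stand-in */
		if (folio)
			return folio;
	}
	return NULL;
}
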
There is no functional change for large folio allocation of anonymous shmem.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/shmem.c | 17 +++--------------
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 395ca58ac4a5..fc835b3e4914 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -645,34 +645,23 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
* the mTHP interface, so we still use PMD-sized huge order to
* check whether global control is enabled.
*
- * For tmpfs mmap()'s huge order, we still use PMD-sized order to
- * allocate huge pages due to lack of a write size hint.
- *
* For tmpfs with 'huge=always' or 'huge=within_size' mount option,
* we will always try PMD-sized order first. If that failed, it will
* fall back to small large folios.
*/
switch (SHMEM_SB(inode->i_sb)->huge) {
case SHMEM_HUGE_ALWAYS:
- if (vma)
- return maybe_pmd_order;
-
return THP_ORDERS_ALL_FILE_DEFAULT;
case SHMEM_HUGE_WITHIN_SIZE:
- if (vma)
- within_size_orders = maybe_pmd_order;
- else
- within_size_orders = THP_ORDERS_ALL_FILE_DEFAULT;
-
- within_size_orders = shmem_get_orders_within_size(inode, within_size_orders,
- index, write_end);
+ within_size_orders = shmem_get_orders_within_size(inode,
+ THP_ORDERS_ALL_FILE_DEFAULT, index, write_end);
if (within_size_orders > 0)
return within_size_orders;
fallthrough;
case SHMEM_HUGE_ADVISE:
if (vm_flags & VM_HUGEPAGE)
- return maybe_pmd_order;
+ return THP_ORDERS_ALL_FILE_DEFAULT;
fallthrough;
default:
return 0;
--
2.43.7