Message-Id: <117121665254442c3c7f585248296495e5e2b45c.1722404078.git.baolin.wang@linux.alibaba.com>
Date: Wed, 31 Jul 2024 13:46:19 +0800
From: Baolin Wang <baolin.wang@...ux.alibaba.com>
To: akpm@...ux-foundation.org,
hughd@...gle.com
Cc: willy@...radead.org,
david@...hat.com,
21cnbao@...il.com,
ryan.roberts@....com,
ziy@...dia.com,
gshan@...hat.com,
ioworker0@...il.com,
baolin.wang@...ux.alibaba.com,
linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: [PATCH 1/2] mm: shmem: avoid allocating huge pages larger than MAX_PAGECACHE_ORDER for shmem

Similar to commit d659b715e94ac ("mm/huge_memory: avoid PMD-size page
cache if needed"), ARM64 can support 512MB PMD-sized THP when the base
page size is 64KB. That exceeds the maximum folio order the page cache
supports (MAX_PAGECACHE_ORDER), which is not expected. To fix this
issue, use THP_ORDERS_ALL_FILE_DEFAULT for shmem to filter the
allowable huge orders.
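
For reviewers, a sketch of the order arithmetic (illustrative only, not
part of the change; the values below assume ARM64 with 64KB base pages
and the mainline macro definitions from commit d659b715e94ac and its
series). With 64KB pages, PMD_ORDER is 13, while the page cache cannot
go above order 11 because of the xas_split_alloc() limitation, so the
old mask wrongly allowed order-13 (512MB) folios:

	/*
	 * Illustrative, standalone userspace sketch; not kernel code.
	 * Assumes ARM64 with CONFIG_ARM64_64K_PAGES.
	 */
	#include <stdio.h>

	#define BIT(n)			(1UL << (n))
	#define PAGE_SHIFT		16	/* 64KB base pages */
	#define PMD_SHIFT		29	/* 512MB PMD mappings */
	#define PMD_ORDER		(PMD_SHIFT - PAGE_SHIFT)	/* = 13 */
	#define XA_CHUNK_SHIFT		6
	#define MAX_XAS_ORDER		(XA_CHUNK_SHIFT * 2 - 1)	/* = 11 */
	#define MAX_PAGECACHE_ORDER	MAX_XAS_ORDER	/* min(11, PMD_ORDER) */

	/* Per commit d659b715e94ac: orders 1..MAX_PAGECACHE_ORDER only */
	#define THP_ORDERS_ALL_FILE_DEFAULT \
		((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

	int main(void)
	{
		unsigned long old_orders = BIT(PMD_ORDER + 1) - 1;	/* 0x3fff */
		unsigned long new_orders = THP_ORDERS_ALL_FILE_DEFAULT;	/* 0xffe */

		/* order-13 (512MB) was allowed before, despite the page cache limit */
		printf("order-13 allowed before fix: %d\n", !!(old_orders & BIT(13)));
		printf("order-13 allowed after fix:  %d\n", !!(new_orders & BIT(13)));
		return 0;
	}
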
Fixes: e7a2ab7b3bb5 ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Baolin Wang <baolin.wang@...ux.alibaba.com>
---
 mm/shmem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 2faa9daaf54b..a4332a97558c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1630,10 +1630,10 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	unsigned long mask = READ_ONCE(huge_shmem_orders_always);
 	unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
 	unsigned long vm_flags = vma->vm_flags;
 	/*
-	 * Check all the (large) orders below HPAGE_PMD_ORDER + 1 that
+	 * Check all the (large) orders below MAX_PAGECACHE_ORDER + 1 that
 	 * are enabled for this vma.
 	 */
-	unsigned long orders = BIT(PMD_ORDER + 1) - 1;
+	unsigned long orders = THP_ORDERS_ALL_FILE_DEFAULT;
 	loff_t i_size;
 	int order;

--
2.39.3