Message-ID: <20250106165513.104899-4-ziy@nvidia.com>
Date: Mon, 6 Jan 2025 11:55:06 -0500
From: Zi Yan <ziy@...dia.com>
To: linux-mm@...ck.org,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>
Cc: Ryan Roberts <ryan.roberts@....com>,
Hugh Dickins <hughd@...gle.com>,
David Hildenbrand <david@...hat.com>,
Yang Shi <yang@...amperecomputing.com>,
Miaohe Lin <linmiaohe@...wei.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Yu Zhao <yuzhao@...gle.com>,
John Hubbard <jhubbard@...dia.com>,
linux-kernel@...r.kernel.org,
Zi Yan <ziy@...dia.com>
Subject: [PATCH v4 03/10] mm/huge_memory: allow splitting shmem large folios to any order
Commit 4d684b5f92ba ("mm: shmem: add large folio support for tmpfs") added
large folio support to shmem. Remove the restriction in split_huge_page*()
that refused to split a shmem large folio to a non-zero order.
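
As an illustration only (not part of this patch): a minimal sketch of a
caller that, after this change, may split a locked shmem large folio to a
non-zero order via split_huge_page_to_list_to_order(). The helper name,
the chosen order, and the assumption that the caller already holds a
reference on the folio are hypothetical.

#include <linux/huge_mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical helper, for illustration only: split a shmem large folio
 * down to @new_order. The caller is assumed to hold a reference on @folio.
 * Before this patch, any new_order != 0 was rejected with -EINVAL for
 * shmem mappings.
 */
static int shmem_split_folio_to_order(struct folio *folio,
				      unsigned int new_order)
{
	int ret;

	/* The folio must be locked across the split. */
	folio_lock(folio);
	ret = split_huge_page_to_list_to_order(&folio->page, NULL, new_order);
	/* The head page, and thus @folio, stays locked on return. */
	folio_unlock(folio);

	return ret;
}
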
Signed-off-by: Zi Yan <ziy@...dia.com>
---
mm/huge_memory.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c89aed1510f1..511b5b23894b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3287,7 +3287,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
/* Some pages can be beyond EOF: drop them from page cache */
if (tail->index >= end) {
if (shmem_mapping(folio->mapping))
- nr_dropped++;
+ nr_dropped += new_nr;
else if (folio_test_clear_dirty(tail))
folio_account_cleaned(tail,
inode_to_wb(folio->mapping->host));
@@ -3453,12 +3453,6 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
return -EINVAL;
}
} else if (new_order) {
- /* Split shmem folio to non-zero order not supported */
- if (shmem_mapping(folio->mapping)) {
- VM_WARN_ONCE(1,
- "Cannot split shmem folio to non-0 order");
- return -EINVAL;
- }
/*
* No split if the file system does not support large folio.
* Note that we might still have THPs in such mappings due to
--
2.45.2