Message-ID: <20250716171112.3666150-1-ziy@nvidia.com>
Date: Wed, 16 Jul 2025 13:11:12 -0400
From: Zi Yan <ziy@...dia.com>
To: David Hildenbrand <david@...hat.com>,
Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
Dan Carpenter <dan.carpenter@...aro.org>,
Antonio Quartulli <antonio@...delbit.com>,
linux-mm@...ck.org
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Kirill Shutemov <k.shutemov@...il.com>,
Zi Yan <ziy@...dia.com>,
Baolin Wang <baolin.wang@...ux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@...cle.com>,
Nico Pache <npache@...hat.com>,
Ryan Roberts <ryan.roberts@....com>,
Dev Jain <dev.jain@....com>,
Barry Song <baohua@...nel.org>,
linux-kernel@...r.kernel.org
Subject: [PATCH] mm/huge_memory: refactor after-split (page) cache code

Smatch/Coverity checkers report NULL mapping dereference issues[1][2][3]
every time this code is modified, because they cannot see that mapping
is never NULL when a folio is in the page cache here. Refactor the code
to make that invariant explicit; a sketch of the old and new branch
shapes follows the links below.

No functional change is intended.

[1] https://lore.kernel.org/linux-mm/2afe3d59-aca5-40f7-82a3-a6d976fb0f4f@stanley.mountain/
[2] https://lore.kernel.org/oe-kbuild/64b54034-f311-4e7d-b935-c16775dbb642@suswa.mountain/
[3] https://lore.kernel.org/linux-mm/20250716145804.4836-1-antonio@mandelbit.com/

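For illustration only, here is a minimal standalone sketch of the two
branch shapes. struct mapping, old_shape(), new_shape(), and the helper
functions are hypothetical stand-ins, not kernel code:

	#include <stdbool.h>

	/* Hypothetical stand-in for a type whose pointer may be NULL. */
	struct mapping { int host; };

	static void drop_from_cache(struct mapping *m)     { (void)m->host; }
	static void store_in_page_cache(struct mapping *m) { (void)m->host; }
	static void store_in_swap_cache(void)              { }

	/*
	 * Old shape: the "else if (m)" arm tells the checker that m may
	 * be NULL, so it flags the first arm, which dereferences m, even
	 * though callers never pass a NULL m with beyond_eof == true.
	 */
	static void old_shape(struct mapping *m, bool beyond_eof)
	{
		if (beyond_eof)
			drop_from_cache(m);	/* reported: possible NULL deref */
		else if (m)
			store_in_page_cache(m);
		else
			store_in_swap_cache();
	}

	/*
	 * New shape: dispose of the !m cases first and leave early, so
	 * every later dereference is visibly guarded by the NULL check.
	 */
	static void new_shape(struct mapping *m, bool beyond_eof,
			      bool in_swap_cache)
	{
		if (in_swap_cache) {
			store_in_swap_cache();
			return;
		}
		if (!m)
			return;
		if (!beyond_eof) {
			store_in_page_cache(m);
			return;
		}
		drop_from_cache(m);
	}

	int main(void)
	{
		struct mapping m = { 0 };

		old_shape(&m, true);
		new_shape(&m, true, false);
		return 0;
	}

With the guard-clause shape, the mapping-is-not-NULL invariant is
provable from local control flow alone, which is what the checkers can
actually reason about.
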
Suggested-by: David Hildenbrand <david@...hat.com>
Signed-off-by: Zi Yan <ziy@...dia.com>
---
mm/huge_memory.c | 43 ++++++++++++++++++++++++++++---------------
1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 31b5c4e61a57..fe17b0a157cd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3804,6 +3804,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
*/
for (new_folio = folio_next(folio); new_folio != next_folio;
new_folio = next) {
+ unsigned long nr_pages = folio_nr_pages(new_folio);
+
next = folio_next(new_folio);

expected_refs = folio_expected_ref_count(new_folio) + 1;
@@ -3811,25 +3813,36 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
lru_add_split_folio(folio, new_folio, lruvec, list);
- /* Some pages can be beyond EOF: drop them from cache */
- if (new_folio->index >= end) {
- if (shmem_mapping(mapping))
- nr_shmem_dropped += folio_nr_pages(new_folio);
- else if (folio_test_clear_dirty(new_folio))
- folio_account_cleaned(
- new_folio,
- inode_to_wb(mapping->host));
- __filemap_remove_folio(new_folio, NULL);
- folio_put_refs(new_folio,
- folio_nr_pages(new_folio));
- } else if (mapping) {
- __xa_store(&mapping->i_pages, new_folio->index,
- new_folio, 0);
- } else if (swap_cache) {
+ /*
+ * Anonymous folio with swap cache.
+ * NOTE: shmem in swap cache is not supported yet.
+ */
+ if (swap_cache) {
__xa_store(&swap_cache->i_pages,
swap_cache_index(new_folio->swap),
new_folio, 0);
+ continue;
+ }
+
+ /* Anonymous folio without swap cache */
+ if (!mapping)
+ continue;
+
+ /* Add the new folio to the page cache. */
+ if (new_folio->index < end) {
+ __xa_store(&mapping->i_pages, new_folio->index,
+ new_folio, 0);
+ continue;
}
+
+ /* Drop folio beyond EOF: ->index >= end */
+ if (shmem_mapping(mapping))
+ nr_shmem_dropped += nr_pages;
+ else if (folio_test_clear_dirty(new_folio))
+ folio_account_cleaned(
+ new_folio, inode_to_wb(mapping->host));
+ __filemap_remove_folio(new_folio, NULL);
+ folio_put_refs(new_folio, nr_pages);
}
/*
* Unfreeze @folio only after all page cache entries, which
--
2.47.2