Message-ID: <879c4426-4122-da9c-1a86-697f2c9a083@google.com>
Date: Thu, 3 Mar 2022 20:21:19 -0800 (PST)
From: Hugh Dickins <hughd@...gle.com>
To: Andrew Morton <akpm@...ux-foundation.org>
cc: Matthew Wilcox <willy@...radead.org>, linux-kernel@...r.kernel.org,
linux-mm@...ck.org
Subject: [PATCH mmotm] mm: filemap_unaccount_folio() large skip mapcount fixup

The page_mapcount_reset() fixup, applied when folio_mapped() while
mapping_exiting(), was devised long before there were huge or compound
pages in the page cache. It is still valid for small pages, but it is
not at all clear what would be right to check and reset on large pages:
just don't try when folio_test_large().

Signed-off-by: Hugh Dickins <hughd@...gle.com>
---
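Not for the commit message, just for reviewers: below is a minimal
userspace model of the new control flow. It is an illustrative sketch
only; every name in it (struct fake_folio, may_fixup, the fields) is
hypothetical and not kernel API.

/* Hypothetical model of the heuristic in this patch; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct fake_folio {
	int ref_count;	/* stands in for folio_ref_count() */
	int mapcount;	/* stands in for page_mapcount(&folio->page) */
	bool large;	/* stands in for folio_test_large() */
};

/* Mirrors the patched check: only small pages get the leak fixup. */
static bool may_fixup(const struct fake_folio *f, bool mapping_exiting)
{
	if (!mapping_exiting || f->large)
		return false;
	/* mapcount refs from ptes, +1 page cache ref, +1 deleter's ref */
	return f->ref_count >= f->mapcount + 2;
}

int main(void)
{
	struct fake_folio small = { .ref_count = 3, .mapcount = 1, .large = false };
	struct fake_folio comp  = { .ref_count = 3, .mapcount = 1, .large = true };

	printf("small page:  %s\n", may_fixup(&small, true) ? "fixup" : "skip");
	printf("large folio: %s\n", may_fixup(&comp, true) ? "fixup" : "skip");
	return 0;
}

The only point it demonstrates is that the mapcount arithmetic is now
trusted for small pages alone; what the right check would be on a large
folio is exactly what the patch declines to guess.
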
 mm/filemap.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -152,25 +152,25 @@ static void filemap_unaccount_folio(struct address_space *mapping,
 	VM_BUG_ON_FOLIO(folio_mapped(folio), folio);
 	if (!IS_ENABLED(CONFIG_DEBUG_VM) && unlikely(folio_mapped(folio))) {
-		int mapcount;
-
 		pr_alert("BUG: Bad page cache in process %s pfn:%05lx\n",
 			 current->comm, folio_pfn(folio));
 		dump_page(&folio->page, "still mapped when deleted");
 		dump_stack();
 		add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 
-		mapcount = page_mapcount(&folio->page);
-		if (mapping_exiting(mapping) &&
-		    folio_ref_count(folio) >= mapcount + 2) {
-			/*
-			 * All vmas have already been torn down, so it's
-			 * a good bet that actually the folio is unmapped,
-			 * and we'd prefer not to leak it: if we're wrong,
-			 * some other bad page check should catch it later.
-			 */
-			page_mapcount_reset(&folio->page);
-			folio_ref_sub(folio, mapcount);
+		if (mapping_exiting(mapping) && !folio_test_large(folio)) {
+			int mapcount = page_mapcount(&folio->page);
+
+			if (folio_ref_count(folio) >= mapcount + 2) {
+				/*
+				 * All vmas have already been torn down, so it's
+				 * a good bet that actually the page is unmapped
+				 * and we'd rather not leak it: if we're wrong,
+				 * another bad page check should catch it later.
+				 */
+				page_mapcount_reset(&folio->page);
+				folio_ref_sub(folio, mapcount);
+			}
 		}
 	}