Message-ID: <20250522093452.6379-1-shivankg@amd.com>
Date: Thu, 22 May 2025 09:34:53 +0000
From: Shivank Garg <shivankg@....com>
To: <akpm@...ux-foundation.org>, <david@...hat.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>
CC: <ziy@...dia.com>, <baolin.wang@...ux.alibaba.com>,
<lorenzo.stoakes@...cle.com>, <Liam.Howlett@...cle.com>, <npache@...hat.com>,
<ryan.roberts@....com>, <dev.jain@....com>, <fengwei.yin@...el.com>,
<shivankg@....com>, <bharata@....com>,
<syzbot+2b99589e33edbe9475ca@...kaller.appspotmail.com>
Subject: [PATCH] mm/khugepaged: Fix race with folio splitting in hpage_collapse_scan_file()
folio_mapcount() checks folio_test_large() before proceeding to
folio_large_mapcount(), but there is a race window in which the folio
can be split between these two checks, triggering the
VM_WARN_ON_FOLIO(!folio_test_large(folio), folio) in
folio_large_mapcount().

Take a temporary folio reference in hpage_collapse_scan_file() to prevent
races with concurrent folio splitting/freeing. This prevents potentially
incorrect large folio detection.
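The fix follows the usual "pin, then re-validate" pattern for lockless
XArray walks. As an illustrative sketch (kernel-internal APIs taken from
the diff below; not compilable outside the kernel tree):

```c
/* Before trusting any folio state read under RCU, pin the folio and
 * confirm it is still the entry we looked up in the XArray. */
if (!folio_try_get(folio)) {
	/* Refcount already hit zero: the folio is being freed.
	 * Restart the walk at this index. */
	xas_reset(&xas);
	continue;
}

if (unlikely(folio != xas_reload(&xas))) {
	/* The slot changed under us (e.g. the folio was split);
	 * drop our temporary reference and retry. */
	folio_put(folio);
	xas_reset(&xas);
	continue;
}

/* folio_order()/folio_mapcount() etc. are now stable... */

folio_put(folio);	/* every exit path must drop the reference */
```

With the reference held, the folio cannot be split or freed, so the
folio_test_large()/folio_large_mapcount() pair can no longer disagree.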
Reported-by: syzbot+2b99589e33edbe9475ca@...kaller.appspotmail.com
Closes: https://lore.kernel.org/all/6828470d.a70a0220.38f255.000c.GAE@google.com
Fixes: 05c5323b2a34 ("mm: track mapcount of large folios in single value")
Suggested-by: David Hildenbrand <david@...hat.com>
Signed-off-by: Shivank Garg <shivankg@....com>
---
mm/khugepaged.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cc945c6ab3bd..6e8902f9d88c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2295,6 +2295,17 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
continue;
}
+ if (!folio_try_get(folio)) {
+ xas_reset(&xas);
+ continue;
+ }
+
+ if (unlikely(folio != xas_reload(&xas))) {
+ folio_put(folio);
+ xas_reset(&xas);
+ continue;
+ }
+
if (folio_order(folio) == HPAGE_PMD_ORDER &&
folio->index == start) {
/* Maybe PMD-mapped */
@@ -2305,23 +2316,27 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
* it's safe to skip LRU and refcount checks before
* returning.
*/
+ folio_put(folio);
break;
}
node = folio_nid(folio);
if (hpage_collapse_scan_abort(node, cc)) {
result = SCAN_SCAN_ABORT;
+ folio_put(folio);
break;
}
cc->node_load[node]++;
if (!folio_test_lru(folio)) {
result = SCAN_PAGE_LRU;
+ folio_put(folio);
break;
}
if (!is_refcount_suitable(folio)) {
result = SCAN_PAGE_COUNT;
+ folio_put(folio);
break;
}
@@ -2333,6 +2348,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
*/
present += folio_nr_pages(folio);
+ folio_put(folio);
if (need_resched()) {
xas_pause(&xas);
--
2.34.1