Message-ID: <20251016033643.10848-1-lance.yang@linux.dev>
Date: Thu, 16 Oct 2025 11:36:43 +0800
From: Lance Yang <lance.yang@...ux.dev>
To: akpm@...ux-foundation.org,
david@...hat.com,
lorenzo.stoakes@...cle.com
Cc: ziy@...dia.com,
baolin.wang@...ux.alibaba.com,
Liam.Howlett@...cle.com,
npache@...hat.com,
ryan.roberts@....com,
dev.jain@....com,
baohua@...nel.org,
ioworker0@...il.com,
linux-kernel@...r.kernel.org,
linux-mm@...ck.org,
Lance Yang <lance.yang@...ux.dev>
Subject: [PATCH mm-new 1/1] mm/khugepaged: guard is_zero_pfn() calls with pte_present()
From: Lance Yang <lance.yang@...ux.dev>

A non-present entry, such as a swap PTE, encodes completely different data
(swap type and offset). pte_pfn() doesn't know this, so if we feed it a
non-present entry, it will spit out a junk PFN.

If that junk PFN happened to match the zeropage's PFN by sheer chance,
is_zero_pfn() would return a false positive and khugepaged would wrongly
treat the non-present entry as a mapping of the shared zeropage. That is
extremely unlikely, but it would be a real bug if it ever happened.

So, let's fix this potential bug by ensuring all calls to is_zero_pfn()
in khugepaged.c are properly guarded by a pte_present() check.

Suggested-by: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>
Signed-off-by: Lance Yang <lance.yang@...ux.dev>
---
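
A minimal, hypothetical sketch of the hazard described above, for readers
less familiar with PTE encodings. The bit layout and the toy_* names below
are made up for illustration and do not correspond to any real PTE format;
only the "check pte_present() before deriving a PFN" pattern mirrors what
the patch does.

/*
 * Toy userspace model of the hazard -- the bit layout and helpers are
 * hypothetical and do not match any real architecture's PTE encoding.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_PRESENT	(1ULL << 0)	/* made-up "present" bit */
#define TOY_ZERO_PFN	0x1a2bULL	/* made-up zeropage PFN  */

typedef uint64_t toy_pte_t;

static bool toy_pte_present(toy_pte_t pte) { return pte & TOY_PRESENT; }

/* Only meaningful for present entries; swap entries store type/offset here. */
static uint64_t toy_pte_pfn(toy_pte_t pte) { return pte >> 1; }

static bool toy_is_zero_pfn(uint64_t pfn) { return pfn == TOY_ZERO_PFN; }

int main(void)
{
	/* Non-present (swap-style) entry whose payload bits happen to decode
	 * to the zeropage PFN when misread as a present entry. */
	toy_pte_t swap_pte = TOY_ZERO_PFN << 1;		/* present bit clear */

	if (toy_is_zero_pfn(toy_pte_pfn(swap_pte)))
		printf("unguarded: bogus zeropage match on a non-present entry\n");

	if (toy_pte_present(swap_pte) && toy_is_zero_pfn(toy_pte_pfn(swap_pte)))
		printf("guarded: would not print\n");
	else
		printf("guarded: non-present entry correctly rejected\n");

	return 0;
}
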
 mm/khugepaged.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d635d821f611..0341c3d13e9e 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -516,7 +516,7 @@ static void release_pte_pages(pte_t *pte, pte_t *_pte,
 		pte_t pteval = ptep_get(_pte);
 		unsigned long pfn;
 
-		if (pte_none(pteval))
+		if (!pte_present(pteval))
 			continue;
 		pfn = pte_pfn(pteval);
 		if (is_zero_pfn(pfn))
@@ -690,9 +690,10 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
 	     address += nr_ptes * PAGE_SIZE) {
 		nr_ptes = 1;
 		pteval = ptep_get(_pte);
-		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+		if (pte_none(pteval) ||
+		    (pte_present(pteval) && is_zero_pfn(pte_pfn(pteval)))) {
 			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
-			if (is_zero_pfn(pte_pfn(pteval))) {
+			if (!pte_none(pteval)) {
 				/*
 				 * ptl mostly unnecessary.
 				 */
@@ -794,7 +795,8 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
 		unsigned long src_addr = address + i * PAGE_SIZE;
 		struct page *src_page;
 
-		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+		if (pte_none(pteval) ||
+		    (pte_present(pteval) && is_zero_pfn(pte_pfn(pteval)))) {
 			clear_user_highpage(page, src_addr);
 			continue;
 		}
@@ -1294,7 +1296,8 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
 				goto out_unmap;
 			}
 		}
-		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+		if (pte_none(pteval) ||
+		    (pte_present(pteval) && is_zero_pfn(pte_pfn(pteval)))) {
 			++none_or_zero;
 			if (!userfaultfd_armed(vma) &&
 			    (!cc->is_khugepaged ||
--
2.49.0