Message-Id: <20250121041849.3393237-1-jane.chu@oracle.com>
Date: Mon, 20 Jan 2025 21:18:49 -0700
From: Jane Chu <jane.chu@...cle.com>
To: akpm@...ux-foundation.org, willy@...radead.org, linmiaohe@...wei.com,
        kirill.shutemov@...ux.intel.com, hughd@...gle.com, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH] mm: make page_mapped_in_vma() hugetlb walk aware

When a process consumes a UE in a page, the memory failure handler
attempts to collect information for a potential SIGBUS.
If the page is an anonymous page, page_mapped_in_vma(page, vma) is
invoked in order to
  1. retrieve the vaddr from the process' address space,
  2. verify that the vaddr is indeed mapped to the poisoned page,
where 'page' is the precise small page with UE.
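
A hedged sketch of how a caller on the memory-failure path consumes
that result (collect_one_task() is a hypothetical helper name, not
the kernel source):

	static void collect_one_task(struct page *page,
				     struct vm_area_struct *vma)
	{
		unsigned long vaddr;

		/* steps 1 and 2: resolve the vaddr and verify it maps 'page' */
		vaddr = page_mapped_in_vma(page, vma);
		if (vaddr == -EFAULT)
			return;	/* not mapped here; skip this task */

		/* vaddr is the PAGE_SIZE-granularity address for SIGBUS */
	}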

It's been observed that when injecting poison into a non-head subpage
of an anonymous hugetlb page, no SIGBUS shows up, while injecting into
the head page produces a SIGBUS. The cause is that, although
hugetlb_walk() returns a valid pmd entry (on x86), check_pte() detects
a mismatch between the head page referenced by the pmd and the input
subpage. Thus the vaddr is considered not mapped to the subpage, and
the process is not collected for SIGBUS delivery.  This is the call
stack:
      collect_procs_anon
        page_mapped_in_vma
          page_vma_mapped_walk
            hugetlb_walk
              huge_pte_lock
                check_pte
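
Concretely, a hedged simplification of check_pte()'s pfn containment
test (the exact kernel code differs):

	/*
	 * For a hugetlb mapping, the huge pte/pmd yields the head
	 * page's pfn, while pvmw->pfn was set to the poisoned subpage's
	 * pfn with pvmw->nr_pages == 1, so the test fails for any
	 * non-head subpage.
	 */
	if (pfn < pvmw->pfn || pfn >= pvmw->pfn + pvmw->nr_pages)
		return false;	/* vaddr treated as not mapped */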

The most obvious place to fix the issue is to make page_mapped_in_vma()
hugetlb walk aware. The precise subpage in the input remains useful for
providing a PAGE_SIZE-granularity vaddr.
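
For example (illustrative values only): with a 2MB anonymous hugetlb
folio mapped at virtual address V, poisoning subpage k should now
yield

	pvmw.address = vma_address(vma, page_pgoff(folio, page), 1);
					/* == V + k * PAGE_SIZE */
	pvmw.pfn = folio_pfn(folio);	/* head pfn, matches the huge pte */

so the walk matches on the folio's head pfn while the reported vaddr
keeps PAGE_SIZE granularity.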

Signed-off-by: Jane Chu <jane.chu@...cle.com>
---
 mm/page_vma_mapped.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 81839a9e74f1..bc036060cc68 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -342,15 +342,26 @@ unsigned long page_mapped_in_vma(const struct page *page,
 {
 	const struct folio *folio = page_folio(page);
 	struct page_vma_mapped_walk pvmw = {
-		.pfn = page_to_pfn(page),
 		.nr_pages = 1,
 		.vma = vma,
 		.flags = PVMW_SYNC,
 	};
 
+	/* fine granularity address is always preferred */
 	pvmw.address = vma_address(vma, page_pgoff(folio, page), 1);
 	if (pvmw.address == -EFAULT)
 		goto out;
+
+	/*
+	 * Hugetlb doesn't support partial page-mapping; hugetlb_walk()
+	 * simply assumes a hugetlb pte, hence feed the head page pfn to
+	 * the walk and the pte check.
+	 */
+	if (folio_test_hugetlb(folio))
+		pvmw.pfn = folio_pfn(folio);
+	else
+		pvmw.pfn = page_to_pfn(page);
+
 	if (!page_vma_mapped_walk(&pvmw))
 		return -EFAULT;
 	page_vma_mapped_walk_done(&pvmw);
-- 
2.39.3

