Message-Id: <20210914183718.4236-2-shy828301@gmail.com>
Date: Tue, 14 Sep 2021 11:37:15 -0700
From: Yang Shi <shy828301@...il.com>
To: naoya.horiguchi@....com, hughd@...gle.com,
kirill.shutemov@...ux.intel.com, willy@...radead.org,
osalvador@...e.de, akpm@...ux-foundation.org
Cc: shy828301@...il.com, linux-mm@...ck.org,
linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH 1/4] mm: filemap: check if any subpage is hwpoisoned for PMD page fault
When handling a shmem page fault, a THP with a corrupted subpage could be PMD
mapped if certain conditions are met. But the kernel is supposed to
send SIGBUS when trying to map a hwpoisoned page.
There are two paths which may do the PMD map: fault-around and the regular
fault path. Before commit f9ce0be71d1f ("mm: Cleanup faultaround and
finish_fault() codepaths") the problem was even worse in the fault-around
path: the THP could be PMD mapped as long as the VMA fit, regardless of which
subpage was accessed or corrupted. After that commit the THP can still be PMD
mapped as long as the head page is not corrupted.
In the regular fault path the THP could be PMD mapped as long as the corrupted
subpage is not the one accessed and the VMA fits.
Fix the loophole by iterating over all subpages to check for a hwpoisoned one
when doing the PMD map; if any is found, just fall back to PTE map. Such a
THP can only be PTE mapped. Do the check in the icache flush loop to avoid
iterating over all subpages twice; the icache flush is a no-op on most
architectures anyway.
Cc: <stable@...r.kernel.org>
Signed-off-by: Yang Shi <shy828301@...il.com>
---
mm/filemap.c | 15 +++++++++------
mm/memory.c | 11 ++++++++++-
2 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index dae481293b5d..740b7afe159a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3195,12 +3195,14 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
}
if (pmd_none(*vmf->pmd) && PageTransHuge(page)) {
- vm_fault_t ret = do_set_pmd(vmf, page);
- if (!ret) {
- /* The page is mapped successfully, reference consumed. */
- unlock_page(page);
- return true;
- }
+ vm_fault_t ret = do_set_pmd(vmf, page);
+ if (ret == VM_FAULT_FALLBACK)
+ goto out;
+ if (!ret) {
+ /* The page is mapped successfully, reference consumed. */
+ unlock_page(page);
+ return true;
+ }
}
if (pmd_none(*vmf->pmd)) {
@@ -3220,6 +3222,7 @@ static bool filemap_map_pmd(struct vm_fault *vmf, struct page *page)
return true;
}
+out:
return false;
}
diff --git a/mm/memory.c b/mm/memory.c
index 25fc46e87214..1765bf72ed16 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3920,8 +3920,17 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
if (unlikely(!pmd_none(*vmf->pmd)))
goto out;
- for (i = 0; i < HPAGE_PMD_NR; i++)
+ for (i = 0; i < HPAGE_PMD_NR; i++) {
+ /*
+ * Just back off if any subpage of a THP is corrupted, otherwise
+ * the corrupted page may be PMD mapped silently and escape the
+ * check. Such a THP can only be PTE mapped. Access to the
+ * corrupted subpage should trigger SIGBUS as expected.
+ */
+ if (PageHWPoison(page + i))
+ goto out;
flush_icache_page(vma, page + i);
+ }
entry = mk_huge_pmd(page, vma->vm_page_prot);
if (write)
--
2.26.2