Message-ID: <20221209021525.196276-1-wangkefeng.wang@huawei.com>
Date: Fri, 9 Dec 2022 10:15:25 +0800
From: Kefeng Wang <wangkefeng.wang@...wei.com>
To: <naoya.horiguchi@....com>, <akpm@...ux-foundation.org>,
<linux-mm@...ck.org>
CC: <tony.luck@...el.com>, <linux-kernel@...r.kernel.org>,
<linmiaohe@...wei.com>, Kefeng Wang <wangkefeng.wang@...wei.com>
Subject: [PATCH -next resend] mm: hwpoison: support recovery from ksm_might_need_to_copy()
When the kernel copies a page in ksm_might_need_to_copy() and runs
into an uncorrectable error, it will crash because the poisoned page
is consumed by the kernel. This is similar to Copy-on-write poison
recovery: when an error is detected during the page copy, return
VM_FAULT_HWPOISON, which helps us avoid a system crash. Note that
memory failure handling on a KSM page will be skipped, but
memory_failure_queue() is still called to stay consistent with the
general memory failure process.
Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
---
mm/ksm.c | 8 ++++++--
mm/memory.c | 3 +++
mm/swapfile.c | 2 +-
3 files changed, 10 insertions(+), 3 deletions(-)
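
Not part of the patch, just an illustrative sketch of the return
contract a caller of ksm_might_need_to_copy() has to handle after this
change (variable names and fault-handler context are assumed, loosely
following the do_swap_page() hunk below):

	page = ksm_might_need_to_copy(page, vma, addr);
	if (unlikely(!page)) {
		/* allocating the private copy failed */
		ret = VM_FAULT_OOM;
	} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
		/* an uncorrectable error was hit while copying the KSM page */
		ret = VM_FAULT_HWPOISON;
	} else {
		/* either the original page or a freshly copied page */
	}
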
diff --git a/mm/ksm.c b/mm/ksm.c
index dd02780c387f..83e2f74ae7da 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		new_page = NULL;
 	}
 	if (new_page) {
-		copy_user_highpage(new_page, page, address, vma);
-
+		if (copy_mc_user_highpage(new_page, page, address, vma)) {
+			put_page(new_page);
+			new_page = ERR_PTR(-EHWPOISON);
+			memory_failure_queue(page_to_pfn(page), 0);
+			return new_page;
+		}
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
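
For context (not introduced by this patch): copy_mc_user_highpage() is
the machine-check-safe counterpart of copy_user_highpage(); it returns
0 on success and non-zero when reading the source page hits an
uncorrectable error, which is what the new branch tests for. A
commented restatement of the new error path:

	if (copy_mc_user_highpage(new_page, page, address, vma)) {
		/* drop the partially written copy */
		put_page(new_page);
		/* report the poison to the caller via the pointer value */
		new_page = ERR_PTR(-EHWPOISON);
		/* hand the bad pfn to the generic memory-failure machinery */
		memory_failure_queue(page_to_pfn(page), 0);
		return new_page;
	}
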
diff --git a/mm/memory.c b/mm/memory.c
index aad226daf41b..8711488f5305 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
 			goto out_page;
+		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+			ret = VM_FAULT_HWPOISON;
+			goto out_page;
 		}
 		folio = page_folio(page);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 908a529bca12..d479811bc311 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1767,7 +1767,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	swapcache = page;
 	page = ksm_might_need_to_copy(page, vma, addr);
-	if (unlikely(!page))
+	if (IS_ERR_OR_NULL(page))
 		return -ENOMEM;
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
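
Also for context: the error reporting relies on the standard
<linux/err.h> convention, where a small negative errno is packed into
the pointer value itself, so a single IS_ERR_OR_NULL() test covers both
failure modes in unuse_pte(). A sketch of the convention (not code
from the patch):

	/* ERR_PTR() packs an errno into a pointer, PTR_ERR() unpacks it */
	struct page *p = ERR_PTR(-EHWPOISON);

	IS_ERR(p);		/* true: the value lies in the errno range */
	IS_ERR_OR_NULL(p);	/* true: also catches a NULL return */
	PTR_ERR(p);		/* -EHWPOISON */
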
--
2.35.3