Message-ID: <20240204082627.3892816-3-tongtiangen@huawei.com>
Date: Sun, 4 Feb 2024 16:26:26 +0800
From: Tong Tiangen <tongtiangen@...wei.com>
To: Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, <wangkefeng.wang@...wei.com>, Dave Hansen
<dave.hansen@...ux.intel.com>, <x86@...nel.org>, "H. Peter Anvin"
<hpa@...or.com>, Tony Luck <tony.luck@...el.com>, Andy Lutomirski
<luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>, Andrew Morton
<akpm@...ux-foundation.org>, Naoya Horiguchi <naoya.horiguchi@....com>
CC: <linux-kernel@...r.kernel.org>, <linux-edac@...r.kernel.org>,
<linux-mm@...ck.org>, Tong Tiangen <tongtiangen@...wei.com>, Guohanjun
<guohanjun@...wei.com>
Subject: [PATCH -next v5 2/3] x86/mce: Set MCE_IN_KERNEL_COPYIN for DEFAULT_MCE_SAFE exception
From: Kefeng Wang <wangkefeng.wang@...wei.com>

Currently, several kernel memory copy scenarios[1][2][3] use
copy_mc_{user_}highpage() to safely abort the copy and report 'bytes
not copied' when accessing a poisoned source page. A recoverable
synchronous exception is generated during the copy, and the fixup type
EX_TYPE_DEFAULT_MCE_SAFE is used to distinguish it from other
exceptions. The callers then invoke the asynchronous
memory_failure_queue() to handle the memory failure of the source page,
following the pattern sketched below, but scheduling someone else to
handle it at some future point is unpredictable and risky.
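
A condensed illustration of that caller-side pattern (names mirror the
mm/ hunks below; this is a sketch, not a verbatim quote of any one call
site):

    if (copy_mc_user_highpage(dst, src, addr, vma)) {
        /* Copy aborted on poison: queue deferred handling of the source page. */
        memory_failure_queue(page_to_pfn(src), 0);
        return -EHWPOISON;
    }
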
The better way is to deal with it immediately in the current context.
Fortunately, there is already a framework to call memory_failure()
synchronously: see kill_me_never() in do_machine_check(), where a task
work is triggered once MCE_IN_KERNEL_COPYIN is set (sketched after the
reference list below). To fix the above issue, set MCE_IN_KERNEL_COPYIN
for the EX_TYPE_DEFAULT_MCE_SAFE case too, which also makes the
open-coded memory_failure_queue() calls in those callers unnecessary.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
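
For context, the synchronous path works roughly as follows; this is a
condensed sketch of the do_machine_check()/kill_me_never() code in
arch/x86/kernel/cpu/mce/core.c, abridged rather than quoted verbatim:

    /* do_machine_check(): a recoverable in-kernel copy consumed poison. */
    if (m.kflags & MCE_IN_KERNEL_COPYIN)
        queue_task_work(&m, msg, kill_me_never);

    /* Task work, run in process context before returning to user mode. */
    static void kill_me_never(struct callback_head *cb)
    {
        struct task_struct *p = container_of(cb, struct task_struct,
                                             mce_kill_me);
        unsigned long pfn;

        p->mce_count = 0;
        pr_err("Kernel accessed poison in user space at %llx\n", p->mce_addr);
        pfn = p->mce_addr >> PAGE_SHIFT;  /* address masking elided */
        if (!memory_failure(pfn, 0))
            set_mce_nospec(pfn);
    }
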
Reviewed-by: Naoya Horiguchi <naoya.horiguchi@....com>
Reviewed-by: Tony Luck <tony.luck@...el.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@...wei.com>
Signed-off-by: Tong Tiangen <tongtiangen@...wei.com>
---
arch/x86/kernel/cpu/mce/severity.c | 4 ++--
mm/ksm.c | 1 -
mm/memory.c | 13 ++++---------
3 files changed, 6 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
index bca780fa5e57..b2cce1b6c96d 100644
--- a/arch/x86/kernel/cpu/mce/severity.c
+++ b/arch/x86/kernel/cpu/mce/severity.c
@@ -292,11 +292,11 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
case EX_TYPE_UACCESS:
if (!copy_user)
return IN_KERNEL;
+ fallthrough;
+ case EX_TYPE_DEFAULT_MCE_SAFE:
m->kflags |= MCE_IN_KERNEL_COPYIN;
fallthrough;
-
case EX_TYPE_FAULT_MCE_SAFE:
- case EX_TYPE_DEFAULT_MCE_SAFE:
m->kflags |= MCE_IN_KERNEL_RECOV;
return IN_KERNEL_RECOV;
diff --git a/mm/ksm.c b/mm/ksm.c
index 8c001819cf10..ba9d324ea1c6 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3084,7 +3084,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
addr, vma)) {
folio_put(new_folio);
- memory_failure_queue(folio_pfn(folio), 0);
return ERR_PTR(-EHWPOISON);
}
folio_set_dirty(new_folio);
diff --git a/mm/memory.c b/mm/memory.c
index 8d14ba440929..ee06a8f766ab 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2846,10 +2846,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
unsigned long addr = vmf->address;
if (likely(src)) {
- if (copy_mc_user_highpage(dst, src, addr, vma)) {
- memory_failure_queue(page_to_pfn(src), 0);
+ if (copy_mc_user_highpage(dst, src, addr, vma))
return -EHWPOISON;
- }
return 0;
}
@@ -6179,10 +6177,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
cond_resched();
if (copy_mc_user_highpage(dst_page, src_page,
- addr + i*PAGE_SIZE, vma)) {
- memory_failure_queue(page_to_pfn(src_page), 0);
+ addr + i*PAGE_SIZE, vma))
return -EHWPOISON;
- }
}
return 0;
}
@@ -6199,10 +6195,9 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
struct page *dst = nth_page(copy_arg->dst, idx);
struct page *src = nth_page(copy_arg->src, idx);
- if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
- memory_failure_queue(page_to_pfn(src), 0);
+ if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
return -EHWPOISON;
- }
+
return 0;
}
--
2.25.1