Message-ID: <20230508014436.198717-5-tongtiangen@huawei.com>
Date: Mon, 8 May 2023 09:44:35 +0800
From: Tong Tiangen <tongtiangen@...wei.com>
To: Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
James Morse <james.morse@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Robin Murphy <robin.murphy@....com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Will Deacon <will@...nel.org>,
Alexander Viro <viro@...iv.linux.org.uk>, <x86@...nel.org>,
"H . Peter Anvin" <hpa@...or.com>
CC: <linux-arm-kernel@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Guohanjun <guohanjun@...wei.com>,
Xie XiuQi <xiexiuqi@...wei.com>,
Tong Tiangen <tongtiangen@...wei.com>
Subject: [PATCH -next v9 4/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
If a hardware error (#MC) is encountered during page copying, returning the
number of bytes not copied is not meaningful: the caller cannot do anything
with the partially copied data. Returning -EFAULT is more reasonable, as it
indicates that a hardware error was encountered during the copy.
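With this change a caller only needs to test for a non-zero (-EFAULT) return.
A minimal caller sketch (hypothetical, not part of this patch; dst_page,
src_page, addr and vma are illustrative names):

	/*
	 * Hypothetical caller sketch (not part of this patch): with the
	 * new convention, any non-zero return from copy_mc_user_highpage()
	 * is -EFAULT, meaning a #MC was consumed while reading the source.
	 */
	if (copy_mc_user_highpage(dst_page, src_page, addr, vma)) {
		/* Source page is hwpoisoned; give up on this copy. */
		return -EFAULT;
	}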
Signed-off-by: Tong Tiangen <tongtiangen@...wei.com>
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 4de1dbcd3ef6..c29f51ea8517 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
/*
* If architecture supports machine check exception handling, define the
* #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
*/
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
kunmap_local(vto);
kunmap_local(vfrom);
- return ret;
+ return ret ? -EFAULT : 0;
}
static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
kunmap_local(vto);
kunmap_local(vfrom);
- return ret;
+ return ret ? -EFAULT : 0;
}
#else
static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6b9d39d65b73..ef8b70377292 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -805,7 +805,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
continue;
}
src_page = pte_page(pteval);
- if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+ if (copy_mc_user_highpage(page, src_page, _address, vma)) {
result = SCAN_COPY_MC;
break;
}
@@ -2140,7 +2140,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
clear_highpage(hpage + (index % HPAGE_PMD_NR));
index++;
}
- if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+ if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page)) {
result = SCAN_COPY_MC;
goto rollback;
}
--
2.25.1