Message-Id: <20220209191248.771916077@linuxfoundation.org>
Date: Wed, 9 Feb 2022 20:13:40 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, luofei <luofei@...cloud.com>
Subject: [PATCH 4.14 3/3] x86/mm, mm/hwpoison: Fix the unmap kernel 1:1 pages check condition
From: luofei <luofei@...cloud.com>

When fd0e786d9d09 ("x86/mm, mm/hwpoison: Don't unconditionally unmap
kernel 1:1 pages") was backported to 4.14.y, the check on the return
value of memory_failure(), which decides whether the kernel page needs
to be unmapped, was inverted: the kernel page may only be unmapped when
memory_failure() returns successfully.
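
[Not part of the original patch: a minimal sketch of the corrected flow,
for illustration only, assuming memory_failure() keeps its usual kernel
convention of returning 0 on success and a negative value on failure;
the handler context is simplified from the 4.14 srao_decode_notifier()
code touched by the diff below.]

	if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
		unsigned long pfn = mce->addr >> PAGE_SHIFT;

		/* memory_failure() returns 0 once the poisoned page is handled */
		if (!memory_failure(pfn, MCE_VECTOR, 0))
			mce_unmap_kpfn(pfn);	/* only then drop the kernel 1:1 mapping */
	}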
Signed-off-by: luofei <luofei@...cloud.com>
Cc: stable@...r.kernel.org #v4.14.x
Cc: stable@...r.kernel.org #v4.15.x
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/x86/kernel/cpu/mcheck/mce.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -589,7 +589,7 @@ static int srao_decode_notifier(struct n
 	if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
 		pfn = mce->addr >> PAGE_SHIFT;
-		if (memory_failure(pfn, MCE_VECTOR, 0))
+		if (!memory_failure(pfn, MCE_VECTOR, 0))
 			mce_unmap_kpfn(pfn);
 	}