Message-ID: <20230111101806.3236991-1-glider@google.com>
Date: Wed, 11 Jan 2023 11:18:06 +0100
From: Alexander Potapenko <glider@...gle.com>
To: glider@...gle.com
Cc: linux-kernel@...r.kernel.org, akpm@...ux-foundation.org,
peterz@...radead.org, mingo@...hat.com, elver@...gle.com,
dvyukov@...gle.com, linux-mm@...ck.org, kasan-dev@...glegroups.com,
luto@...nel.org, tglx@...utronix.de, x86@...nel.org,
Qun-Wei Lin <qun-wei.lin@...iatek.com>
Subject: [PATCH] Revert "x86: kmsan: sync metadata pages on page fault"

This reverts commit 3f1e2c7a9099c1ed32c67f12cdf432ba782cf51f.

As noticed by Qun-Wei Lin, arch_sync_kernel_mappings() in
arch/x86/mm/fault.c is only used with CONFIG_X86_32, whereas KMSAN is
only supported on x86_64, where this code is not compiled.

The patch in question dates back to the downstream KMSAN branch based
on v5.8-rc5 and sneaked into upstream unnoticed in v6.1.

Reported-by: Qun-Wei Lin <qun-wei.lin@...iatek.com>
Link: https://github.com/google/kmsan/issues/91
Signed-off-by: Alexander Potapenko <glider@...gle.com>
---
arch/x86/mm/fault.c | 23 +----------------------
1 file changed, 1 insertion(+), 22 deletions(-)
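
For readers outside the x86/KMSAN trees, here is a toy, self-contained
sketch of the config conflict described in the commit message. It is not
kernel code: the CONFIG_* macro definitions and the *_sketch helper below
are illustrative assumptions standing in for a kernel .config, used only
to show why the reverted hunk could never be built.

/*
 * Toy illustration only: emulates a kernel .config with plain macros.
 * In the real tree, arch_sync_kernel_mappings() in arch/x86/mm/fault.c
 * is compiled only for CONFIG_X86_32, while CONFIG_KMSAN requires
 * x86_64, so the KMSAN branch below can never be reached.
 */
#include <stdio.h>

#define CONFIG_X86_64          /* KMSAN-capable kernels are 64-bit... */
/* #define CONFIG_X86_32 */    /* ...so this is never defined alongside KMSAN */
#define CONFIG_KMSAN

static void sync_kernel_mappings_sketch(void)
{
#ifdef CONFIG_X86_32
	puts("32-bit path: sync vmalloc mappings lazily on page fault");
#ifdef CONFIG_KMSAN
	/* Dead in practice: KMSAN is x86_64-only. */
	puts("would also sync KMSAN shadow/origin metadata mappings");
#endif
#else
	puts("64-bit path: the fault.c sync helper is not compiled at all");
#endif
}

int main(void)
{
	sync_kernel_mappings_sketch();
	return 0;
}

Compiling and running this prints the 64-bit line, mirroring the reason
the reverted hunk was unreachable on any KMSAN-enabled kernel.
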
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 7b0d4ab894c8b..a498ae1fbe665 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -260,7 +260,7 @@ static noinline int vmalloc_fault(unsigned long address)
 }
 NOKPROBE_SYMBOL(vmalloc_fault);
 
-static void __arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
@@ -284,27 +284,6 @@ static void __arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 	}
 }
 
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
-{
-	__arch_sync_kernel_mappings(start, end);
-#ifdef CONFIG_KMSAN
-	/*
-	 * KMSAN maintains two additional metadata page mappings for the
-	 * [VMALLOC_START, VMALLOC_END) range. These mappings start at
-	 * KMSAN_VMALLOC_SHADOW_START and KMSAN_VMALLOC_ORIGIN_START and
-	 * have to be synced together with the vmalloc memory mapping.
-	 */
-	if (start >= VMALLOC_START && end < VMALLOC_END) {
-		__arch_sync_kernel_mappings(
-			start - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START,
-			end - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START);
-		__arch_sync_kernel_mappings(
-			start - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START,
-			end - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START);
-	}
-#endif
-}
-
 static bool low_pfn(unsigned long pfn)
 {
 	return pfn < max_low_pfn;
--
2.39.0.314.g84b9a713c41-goog