Message-ID: <20210206083552.24394-5-lecopzer.chen@mediatek.com>
Date: Sat, 6 Feb 2021 16:35:51 +0800
From: Lecopzer Chen <lecopzer.chen@...iatek.com>
To: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<kasan-dev@...glegroups.com>,
<linux-arm-kernel@...ts.infradead.org>, <will@...nel.org>
CC: <dan.j.williams@...el.com>, <aryabinin@...tuozzo.com>,
<glider@...gle.com>, <dvyukov@...gle.com>,
<akpm@...ux-foundation.org>, <linux-mediatek@...ts.infradead.org>,
<yj.chiang@...iatek.com>, <catalin.marinas@....com>,
<ardb@...nel.org>, <andreyknvl@...gle.com>, <broonie@...nel.org>,
<linux@...ck-us.net>, <rppt@...nel.org>,
<tyhicks@...ux.microsoft.com>, <robin.murphy@....com>,
<vincenzo.frascino@....com>, <gustavoars@...nel.org>,
<lecopzer@...il.com>, Lecopzer Chen <lecopzer.chen@...iatek.com>
Subject: [PATCH v3 4/5] arm64: kaslr: support randomized module area with KASAN_VMALLOC
Now that KASAN_VMALLOC works on arm64, we can randomize the module
region into the vmalloc area.
Test:
	VMALLOC area ffffffc010000000 fffffffdf0000000
	before the patch:
		module_alloc_base/end ffffffc008b80000 ffffffc010000000
	after the patch:
		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000
Loading several modules with insmod also works fine.
Suggested-by: Ard Biesheuvel <ardb@...nel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@...iatek.com>
---
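Note for reviewers: below is a minimal stand-alone sketch (not the kernel
code itself) of the policy this patch changes: the 2 GB clamp on the
randomized module offset is only kept when KASAN is enabled without
KASAN_VMALLOC; otherwise the whole vmalloc span may be used. The CONFIG_*
values, module_region_offset() and the seed are illustrative stand-ins,
not kernel symbols.

	#include <stdint.h>
	#include <stdio.h>

	/* Example configuration, stand-ins for the real Kconfig symbols. */
	#define CONFIG_KASAN_GENERIC	1
	#define CONFIG_KASAN_SW_TAGS	0
	#define CONFIG_KASAN_VMALLOC	1
	#define SZ_2G			0x80000000ULL

	static uint64_t module_region_offset(uint64_t seed, uint64_t vmalloc_size)
	{
		uint64_t offset = seed % vmalloc_size;

		/* Without KASAN_VMALLOC, keep modules within 2 GB of the kernel. */
		if (!CONFIG_KASAN_VMALLOC &&
		    (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS))
			return offset % SZ_2G;

		/* With KASAN_VMALLOC, the whole vmalloc area may be used. */
		return offset;
	}

	int main(void)
	{
		/* VMALLOC span taken from the Test output above. */
		uint64_t vmalloc_size = 0xfffffffdf0000000ULL - 0xffffffc010000000ULL;
		uint64_t seed = 0x123456789abcdefULL;	/* arbitrary example seed */

		printf("offset = %#llx\n",
		       (unsigned long long)module_region_offset(seed, vmalloc_size));
		return 0;
	}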
arch/arm64/kernel/kaslr.c | 18 ++++++++++--------
arch/arm64/kernel/module.c | 16 +++++++++-------
2 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 1c74c45b9494..a2858058e724 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -161,15 +161,17 @@ u64 __init kaslr_early_init(u64 dt_phys)
/* use the top 16 bits to randomize the linear region */
memstart_offset_seed = seed >> 48;
- if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
- IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+ (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+ IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
/*
- * KASAN does not expect the module region to intersect the
- * vmalloc region, since shadow memory is allocated for each
- * module at load time, whereas the vmalloc region is shadowed
- * by KASAN zero pages. So keep modules out of the vmalloc
- * region if KASAN is enabled, and put the kernel well within
- * 4 GB of the module region.
+ * KASAN without KASAN_VMALLOC does not expect the module region
+ * to intersect the vmalloc region, since shadow memory is
+ * allocated for each module at load time, whereas the vmalloc
+ * region is shadowed by KASAN zero pages. So keep modules
+ * out of the vmalloc region if KASAN is enabled without
+ * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+ * module region.
*/
return offset % SZ_2G;
diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c
index fe21e0f06492..b5ec010c481f 100644
--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
NUMA_NO_NODE, __builtin_return_address(0));
if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
- !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
- !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+ (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+ (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+ !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
/*
- * KASAN can only deal with module allocations being served
- * from the reserved module region, since the remainder of
- * the vmalloc region is already backed by zero shadow pages,
- * and punching holes into it is non-trivial. Since the module
- * region is not randomized when KASAN is enabled, it is even
+ * KASAN without KASAN_VMALLOC can only deal with module
+ * allocations being served from the reserved module region,
+ * since the remainder of the vmalloc region is already
+ * backed by zero shadow pages, and punching holes into it
+ * is non-trivial. Since the module region is not randomized
+ * when KASAN is enabled without KASAN_VMALLOC, it is even
* less likely that the module region gets exhausted, so we
* can simply omit this fallback in that case.
*/
--
2.25.1