Message-Id: <20190708150531.760421168@linuxfoundation.org>
Date: Mon, 8 Jul 2019 17:13:35 +0200
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Catalin Marinas <catalin.marinas@....com>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Will Deacon <will@...nel.org>
Subject: [PATCH 4.9 102/102] arm64: kaslr: keep modules inside module region when KASAN is enabled

From: Ard Biesheuvel <ard.biesheuvel@...aro.org>

commit 6f496a555d93db7a11d4860b9220d904822f586a upstream.

When KASLR and KASAN are both enabled, we keep the modules where they
are, and randomize the placement of the kernel so it is within 2 GB
of the module region. The reason for this is that putting modules in
the vmalloc region (like we normally do when KASLR is enabled) is not
possible in this case, given that the entire vmalloc region is already
backed by KASAN zero shadow pages, and so allocating dedicated KASAN
shadow space as required by loaded modules is not possible.
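
To illustrate why that matters: generic KASAN guards every 8 bytes of
memory with one shadow byte located at (addr >> 3) + offset, so a
module mapped in the vmalloc area would need writable shadow exactly
where the read-only zero shadow already sits. Below is a minimal
user-space sketch of that address arithmetic; the offset and the module
address are made-up placeholders, not the kernel's real configuration
values:

#include <stdio.h>
#include <stdint.h>

/* Placeholder constants for illustration only; the kernel derives the
 * real ones from the VA layout and CONFIG_KASAN_SHADOW_OFFSET. */
#define KASAN_SHADOW_SCALE_SHIFT        3 /* 1 shadow byte per 8 bytes */
#define KASAN_SHADOW_OFFSET             0xdfff200000000000ULL

/* Mirror of the generic KASAN formula: shadow = (addr >> 3) + offset */
static uint64_t kasan_mem_to_shadow(uint64_t addr)
{
        return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
}

int main(void)
{
        uint64_t module_va = 0xffff000008000000ULL; /* hypothetical */

        printf("shadow of %#llx is at %#llx\n",
               (unsigned long long)module_va,
               (unsigned long long)kasan_mem_to_shadow(module_va));
        return 0;
}
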
The default module allocation window is set to [_etext - 128MB, _etext]
in kaslr.c, which is appropriate for KASLR kernels booted without a
seed or with 'nokaslr' on the command line. However, as it turns out,
it is not quite correct for the KASAN case, since it still intersects
the vmalloc region at the top, where attempts to allocate shadow pages
will collide with the KASAN zero shadow pages, causing a WARN() and all
kinds of other trouble. So cap the top end to MODULES_END explicitly
when running with KASAN.
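
As a sketch of the window arithmetic being fixed (the addresses below
are made-up placeholders chosen so the default window overshoots; only
the final cap mirrors the patch):

#include <stdio.h>
#include <stdint.h>

#define SZ_128M         (128ULL << 20)
#define MODULES_VSIZE   SZ_128M

/* Hypothetical example values, not real kernel virtual addresses. */
static const uint64_t modules_end = 0xffff000008000000ULL;
static const uint64_t module_alloc_base = 0xffff000007f00000ULL;

int main(void)
{
        int kasan_enabled = 1; /* stands in for IS_ENABLED(CONFIG_KASAN) */

        /* Default upper bound: 128 MB above the randomized base ... */
        uint64_t module_alloc_end = module_alloc_base + MODULES_VSIZE;

        /* ... which here ends above modules_end, i.e. inside the
         * vmalloc area; with KASAN, cap it to the static region. */
        if (kasan_enabled)
                module_alloc_end = modules_end;

        printf("module window: [%#llx, %#llx)\n",
               (unsigned long long)module_alloc_base,
               (unsigned long long)module_alloc_end);
        return 0;
}
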
Cc: <stable@...r.kernel.org> # 4.9+
Acked-by: Catalin Marinas <catalin.marinas@....com>
Tested-by: Catalin Marinas <catalin.marinas@....com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@...aro.org>
[will: backport to 4.9.y]
Signed-off-by: Will Deacon <will@...nel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
 arch/arm64/kernel/module.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c
@@ -33,10 +33,14 @@
 void *module_alloc(unsigned long size)
 {
         void *p;
+        u64 module_alloc_end = module_alloc_base + MODULES_VSIZE;
+
+        if (IS_ENABLED(CONFIG_KASAN))
+                /* don't exceed the static module region - see below */
+                module_alloc_end = MODULES_END;
 
         p = __vmalloc_node_range(size, MODULE_ALIGN, module_alloc_base,
-                                module_alloc_base + MODULES_VSIZE,
-                                GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
+                                module_alloc_end, GFP_KERNEL, PAGE_KERNEL_EXEC, 0,
                                 NUMA_NO_NODE, __builtin_return_address(0));
 
         if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
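
For context, __vmalloc_node_range() takes an explicit [start, end)
virtual address window, so narrowing the end argument from
module_alloc_base + MODULES_VSIZE to module_alloc_end (MODULES_END when
KASAN is enabled) is all that is needed to keep module mappings, and
hence their shadow allocations, out of the zero-shadow-backed vmalloc
area.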