Message-ID: <20181130182629.GA16085@arm.com>
Date: Fri, 30 Nov 2018 18:26:29 +0000
From: Will Deacon <will.deacon@....com>
To: Ard Biesheuvel <ard.biesheuvel@...aro.org>
Cc: linux-kernel@...r.kernel.org,
Daniel Borkmann <daniel@...earbox.net>,
Alexei Starovoitov <ast@...nel.org>,
Rick Edgecombe <rick.p.edgecombe@...el.com>,
Eric Dumazet <eric.dumazet@...il.com>,
Jann Horn <jannh@...gle.com>,
Kees Cook <keescook@...omium.org>,
Jessica Yu <jeyu@...nel.org>, Arnd Bergmann <arnd@...db.de>,
Catalin Marinas <catalin.marinas@....com>,
Mark Rutland <mark.rutland@....com>,
"David S. Miller" <davem@...emloft.net>,
linux-arm-kernel@...ts.infradead.org, netdev@...r.kernel.org
Subject: Re: [PATCH v4 2/2] arm64/bpf: don't allocate BPF JIT programs in
module memory

On Fri, Nov 23, 2018 at 11:18:04PM +0100, Ard Biesheuvel wrote:
> The arm64 module region is a 128 MB region that is kept close to
> the core kernel, in order to ensure that relative branches are
> always in range. So using the same region for programs that do
> not have this restriction is wasteful, and preferably avoided.
>
> Now that the core BPF JIT code permits the alloc/free routines to
> be overridden, implement them by vmalloc()/vfree() calls from a
> dedicated 128 MB region set aside for BPF programs. This ensures
> that BPF programs are still in branching range of each other, which
> is something the JIT currently depends upon (and is not guaranteed
> when using module_alloc() on KASLR kernels like we do currently).
> It also ensures that placement of BPF programs does not correlate
> with the placement of the core kernel or modules, making it less
> likely that leaking the former will reveal the latter.
>
> This also solves an issue under KASAN, where shadow memory is
> needlessly allocated for all BPF programs (which don't require KASAN
> shadow pages since they are not KASAN instrumented)
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@...aro.org>
> ---
> arch/arm64/include/asm/memory.h | 5 ++++-
> arch/arm64/net/bpf_jit_comp.c | 13 +++++++++++++
> 2 files changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index b96442960aea..ee20fc63899c 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -62,8 +62,11 @@
> #define PAGE_OFFSET (UL(0xffffffffffffffff) - \
> (UL(1) << (VA_BITS - 1)) + 1)
> #define KIMAGE_VADDR (MODULES_END)
> +#define BPF_JIT_REGION_START (VA_START + KASAN_SHADOW_SIZE)
> +#define BPF_JIT_REGION_SIZE (SZ_128M)
> +#define BPF_JIT_REGION_END (BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
> #define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
> -#define MODULES_VADDR (VA_START + KASAN_SHADOW_SIZE)
> +#define MODULES_VADDR (BPF_JIT_REGION_END)
> #define MODULES_VSIZE (SZ_128M)
> #define VMEMMAP_START (PAGE_OFFSET - VMEMMAP_SIZE)
> #define PCI_IO_END (VMEMMAP_START - SZ_2M)
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index a6fdaea07c63..76c2ab40c02d 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -940,3 +940,16 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> tmp : orig_prog);
> return prog;
> }
> +
> +void *bpf_jit_alloc_exec(unsigned long size)
> +{
> + return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
> + BPF_JIT_REGION_END, GFP_KERNEL,
> + PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
> + __builtin_return_address(0));
I guess we'll want VM_IMMEDIATE_UNMAP here if Rick gets that merged. In the
meantime, I wonder if it's worth zeroing the region in bpf_jit_free_exec()?
(although we'd need the size information...).
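
One way to get at the size without plumbing it through would be to look it
up from the backing vmalloc area, e.g. (untested sketch, not part of Ard's
patch, and assuming the mapping is still writable when we free it):

void bpf_jit_free_exec(void *addr)
{
	/* Look up the vmalloc area backing this allocation to get its size. */
	struct vm_struct *area = find_vm_area(addr);

	/*
	 * Zero the JITed code before handing the pages back to vmalloc.
	 * This assumes the region hasn't been made read-only (e.g. via
	 * bpf_jit_binary_lock_ro()); if it has, the permissions would
	 * need to be relaxed first.
	 */
	if (area)
		memset(addr, 0, get_vm_area_size(area));

	vfree(addr);
}

Only a rough idea, though -- if VM_IMMEDIATE_UNMAP materialises it probably
makes more sense to handle this in the core code.
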
Will