Date:   Wed, 5 Dec 2018 13:24:17 +0100
From:   Daniel Borkmann <daniel@...earbox.net>
To:     Ard Biesheuvel <ard.biesheuvel@...aro.org>,
        Will Deacon <will.deacon@....com>
Cc:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Rick Edgecombe <rick.p.edgecombe@...el.com>,
        Eric Dumazet <eric.dumazet@...il.com>,
        Jann Horn <jannh@...gle.com>,
        Kees Cook <keescook@...omium.org>,
        Jessica Yu <jeyu@...nel.org>, Arnd Bergmann <arnd@...db.de>,
        Catalin Marinas <catalin.marinas@....com>,
        Mark Rutland <mark.rutland@....com>,
        "David S. Miller" <davem@...emloft.net>,
        linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>,
        "<netdev@...r.kernel.org>" <netdev@...r.kernel.org>
Subject: Re: [PATCH v4 2/2] arm64/bpf: don't allocate BPF JIT programs in
 module memory

Hi Will,

On 12/04/2018 04:45 PM, Ard Biesheuvel wrote:
> On Mon, 3 Dec 2018 at 13:49, Will Deacon <will.deacon@....com> wrote:
>> On Fri, Nov 30, 2018 at 08:20:06PM +0100, Ard Biesheuvel wrote:
>>> On Fri, 30 Nov 2018 at 19:26, Will Deacon <will.deacon@....com> wrote:
>>>> On Fri, Nov 23, 2018 at 11:18:04PM +0100, Ard Biesheuvel wrote:
>>>>> The arm64 module region is a 128 MB region that is kept close to
>>>>> the core kernel, in order to ensure that relative branches are
>>>>> always in range. So using the same region for programs that do
>>>>> not have this restriction is wasteful, and preferably avoided.
>>>>>
>>>>> Now that the core BPF JIT code permits the alloc/free routines to
>>>>> be overridden, implement them by vmalloc()/vfree() calls from a
>>>>> dedicated 128 MB region set aside for BPF programs. This ensures
>>>>> that BPF programs are still in branching range of each other, which
>>>>> is something the JIT currently depends upon (and is not guaranteed
>>>>> when using module_alloc() on KASLR kernels like we do currently).
>>>>> It also ensures that placement of BPF programs does not correlate
>>>>> with the placement of the core kernel or modules, making it less
>>>>> likely that leaking the former will reveal the latter.
>>>>>
>>>>> This also solves an issue under KASAN, where shadow memory is
>>>>> needlessly allocated for all BPF programs (which don't require KASAN
>>>>> shadow pages since they are not KASAN instrumented).
>>>>>
>>>>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@...aro.org>
>>>>> ---
>>>>>  arch/arm64/include/asm/memory.h |  5 ++++-
>>>>>  arch/arm64/net/bpf_jit_comp.c   | 13 +++++++++++++
>>>>>  2 files changed, 17 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
>>>>> index b96442960aea..ee20fc63899c 100644
>>>>> --- a/arch/arm64/include/asm/memory.h
>>>>> +++ b/arch/arm64/include/asm/memory.h
>>>>> @@ -62,8 +62,11 @@
>>>>>  #define PAGE_OFFSET          (UL(0xffffffffffffffff) - \
>>>>>       (UL(1) << (VA_BITS - 1)) + 1)
>>>>>  #define KIMAGE_VADDR         (MODULES_END)
>>>>> +#define BPF_JIT_REGION_START (VA_START + KASAN_SHADOW_SIZE)
>>>>> +#define BPF_JIT_REGION_SIZE  (SZ_128M)
>>>>> +#define BPF_JIT_REGION_END   (BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
>>>>>  #define MODULES_END          (MODULES_VADDR + MODULES_VSIZE)
>>>>> -#define MODULES_VADDR                (VA_START + KASAN_SHADOW_SIZE)
>>>>> +#define MODULES_VADDR                (BPF_JIT_REGION_END)
>>>>>  #define MODULES_VSIZE                (SZ_128M)
>>>>>  #define VMEMMAP_START                (PAGE_OFFSET - VMEMMAP_SIZE)
>>>>>  #define PCI_IO_END           (VMEMMAP_START - SZ_2M)
>>>>> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
>>>>> index a6fdaea07c63..76c2ab40c02d 100644
>>>>> --- a/arch/arm64/net/bpf_jit_comp.c
>>>>> +++ b/arch/arm64/net/bpf_jit_comp.c
>>>>> @@ -940,3 +940,16 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>>>>>                                          tmp : orig_prog);
>>>>>       return prog;
>>>>>  }
>>>>> +
>>>>> +void *bpf_jit_alloc_exec(unsigned long size)
>>>>> +{
>>>>> +     return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
>>>>> +                                 BPF_JIT_REGION_END, GFP_KERNEL,
>>>>> +                                 PAGE_KERNEL_EXEC, 0, NUMA_NO_NODE,
>>>>> +                                 __builtin_return_address(0));
>>>>
>>>> I guess we'll want VM_IMMEDIATE_UNMAP here if Rich gets that merged.
>>>
>>> I think akpm already queued up that patch.
>>>
>>>> In the
>>>> meantime, I wonder if it's worth zeroing the region in bpf_jit_free_exec()?
>>>> (although we'd need the size information...).
>>>>
>>>
>>> Not sure. What exactly would that achieve?
>>
>> I think the zero encoding is guaranteed to be undefined, so it would limit
>> the usefulness of any stale, executable TLB entries. However, we'd also need
>> cache maintenance to make that stuff visible to the I side, so it's probably
>> not worth it, especially if akpm has queued the stuff from Rich.
>>
>> Maybe just add an:
>>
>> /* FIXME: Remove this when VM_IMMEDIATE_UNMAP is supported */
>> #ifndef VM_IMMEDIATE_UNMAP
>> #define VM_IMMEDIATE_UNMAP      0
>> #endif
>>
>> so we remember to come back and sort this out? Up to you.
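
(Side note, mainly to check my own understanding: if we did want to scrub
the region on free before VM_IMMEDIATE_UNMAP is available, I'd picture
something roughly like the below. Purely a sketch -- it assumes we record
the allocation size ourselves, since vfree() doesn't hand it back, and
bpf_jit_exec_size() is a made-up helper standing in for that bookkeeping:

void bpf_jit_free_exec(void *addr)
{
        /* hypothetical helper: size would have to be recorded at alloc time */
        unsigned long size = bpf_jit_exec_size(addr);

        /* overwrite with zeroes, which should decode as undefined insns */
        memset(addr, 0, size);

        /* make the zeroes visible to the I-side while the mapping is live */
        flush_icache_range((unsigned long)addr, (unsigned long)addr + size);

        vfree(addr);
}

Agreed it's probably not worth it once the immediate-unmap flag is in,
just noting it for the archive.)
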
> 
> I'll just make a note to send out that patch once the definition lands via -akpm
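
(For the archive as well: once VM_IMMEDIATE_UNMAP is available, the arm64
delta should just be the vm_flags argument of __vmalloc_node_range(), and
with the #ifndef fallback suggested above it would keep building until the
definition lands. Sketch only, against the hunk quoted further up:)

/* FIXME: Remove this when VM_IMMEDIATE_UNMAP is supported */
#ifndef VM_IMMEDIATE_UNMAP
#define VM_IMMEDIATE_UNMAP      0
#endif

void *bpf_jit_alloc_exec(unsigned long size)
{
        return __vmalloc_node_range(size, PAGE_SIZE, BPF_JIT_REGION_START,
                                    BPF_JIT_REGION_END, GFP_KERNEL,
                                    PAGE_KERNEL_EXEC, VM_IMMEDIATE_UNMAP,
                                    NUMA_NO_NODE,
                                    __builtin_return_address(0));
}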

Could I get an ACK from you for this patch? Then I'd take the series into bpf-next.

Thanks,
Daniel
