Date:   Fri, 23 Feb 2018 14:37:16 -0800
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Daniel Borkmann <daniel@...earbox.net>
Cc:     ast@...nel.org, netdev@...r.kernel.org
Subject: Re: [PATCH bpf] bpf: allow xadd only on aligned memory

On Fri, Feb 23, 2018 at 10:29:05PM +0100, Daniel Borkmann wrote:
> The requirements around atomic_add() / atomic64_add() and their
> JIT implementations differ across architectures. E.g. while x86_64
> handles BPF's xadd on unaligned memory just fine, on arm64 the same
> access triggers the following crash, both in the interpreter and
> with the JIT:
> 
>   [  830.864985] Unable to handle kernel paging request at virtual address ffff8097d7ed6703
>   [...]
>   [  830.916161] Internal error: Oops: 96000021 [#1] SMP
>   [  830.984755] CPU: 37 PID: 2788 Comm: test_verifier Not tainted 4.16.0-rc2+ #8
>   [  830.991790] Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.29 07/17/2017
>   [  830.998998] pstate: 80400005 (Nzcv daif +PAN -UAO)
>   [  831.003793] pc : __ll_sc_atomic_add+0x4/0x18
>   [  831.008055] lr : ___bpf_prog_run+0x1198/0x1588
>   [  831.012485] sp : ffff00001ccabc20
>   [  831.015786] x29: ffff00001ccabc20 x28: ffff8017d56a0f00
>   [  831.021087] x27: 0000000000000001 x26: 0000000000000000
>   [  831.026387] x25: 000000c168d9db98 x24: 0000000000000000
>   [  831.031686] x23: ffff000008203878 x22: ffff000009488000
>   [  831.036986] x21: ffff000008b14e28 x20: ffff00001ccabcb0
>   [  831.042286] x19: ffff0000097b5080 x18: 0000000000000a03
>   [  831.047585] x17: 0000000000000000 x16: 0000000000000000
>   [  831.052885] x15: 0000ffffaeca8000 x14: 0000000000000000
>   [  831.058184] x13: 0000000000000000 x12: 0000000000000000
>   [  831.063484] x11: 0000000000000001 x10: 0000000000000000
>   [  831.068783] x9 : 0000000000000000 x8 : 0000000000000000
>   [  831.074083] x7 : 0000000000000000 x6 : 000580d428000000
>   [  831.079383] x5 : 0000000000000018 x4 : 0000000000000000
>   [  831.084682] x3 : ffff00001ccabcb0 x2 : 0000000000000001
>   [  831.089982] x1 : ffff8097d7ed6703 x0 : 0000000000000001
>   [  831.095282] Process test_verifier (pid: 2788, stack limit = 0x0000000018370044)
>   [  831.102577] Call trace:
>   [  831.105012]  __ll_sc_atomic_add+0x4/0x18
>   [  831.108923]  __bpf_prog_run32+0x4c/0x70
>   [  831.112748]  bpf_test_run+0x78/0xf8
>   [  831.116224]  bpf_prog_test_run_xdp+0xb4/0x120
>   [  831.120567]  SyS_bpf+0x77c/0x1110
>   [  831.123873]  el0_svc_naked+0x30/0x34
>   [  831.127437] Code: 97fffe97 17ffffec 00000000 f9800031 (885f7c31)
> 
> The reason is that the atomic operation requires the memory to be
> aligned. In the case of BPF, we always enforce alignment for stack
> accesses, but not when accessing map values or packet data, as long
> as the underlying arch (e.g. arm64) has
> CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS set.
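
For illustration (not part of the patch): a minimal sketch of how such an
unaligned xadd on a map value can come out of C. The map and the packed
value layout below are made up, and it assumes the selftests-style
bpf_helpers.h; clang compiles __sync_fetch_and_add() down to BPF_XADD.

#include <linux/bpf.h>
#include "bpf_helpers.h"

/* Hypothetical value layout: the packed attribute puts the 64-bit
 * counter at offset 1, i.e. not 8-byte aligned.
 */
struct val {
	__u8  tag;
	__u64 counter;
} __attribute__((packed));

struct bpf_map_def SEC("maps") counters = {
	.type		= BPF_MAP_TYPE_ARRAY,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(struct val),
	.max_entries	= 1,
};

SEC("xdp")
int xdp_count(struct xdp_md *ctx)
{
	__u32 key = 0;
	struct val *v = bpf_map_lookup_elem(&counters, &key);

	if (v)
		/* clang emits this as a BPF_XADD (BPF_DW) on an
		 * address that is not 8-byte aligned
		 */
		__sync_fetch_and_add(&v->counter, 1);
	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Before the patch, the verifier lets this unaligned xadd through on arches
with CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS; on arm64 the atomic then
faults at run time, as in the oops above.
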
> 
> xadd on packet data, which is local to us anyway, is just wrong, so
> forbid this case entirely. The only place where xadd in fact makes
> sense is on map values; xadd on the stack is wrong as well, but it
> has been around for much longer. On top of that, specifically
> enforce strict alignment in the case of xadd, so that we handle this
> case generically and avoid such crashes in the first place.
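
Roughly, the enforcement could look like the sketch below; names and
signatures approximate kernel/bpf/verifier.c and are not the verbatim
patch. The idea: reject BPF_XADD outright on ctx and packet pointers,
and run the remaining read/write checks with alignment forced strict,
even when the arch has efficient unaligned accesses.

static bool is_pkt_reg(struct bpf_verifier_env *env, int regno)
{
	const struct bpf_reg_state *reg = cur_regs(env) + regno;

	return type_is_pkt_pointer(reg->type);
}

static int check_xadd(struct bpf_verifier_env *env, int insn_idx,
		      struct bpf_insn *insn)
{
	int err;

	if ((BPF_SIZE(insn->code) != BPF_W &&
	     BPF_SIZE(insn->code) != BPF_DW) || insn->imm != 0) {
		verbose(env, "BPF_XADD uses reserved fields\n");
		return -EINVAL;
	}

	/* xadd into ctx or packet memory is never valid */
	if (is_ctx_reg(env, insn->dst_reg) ||
	    is_pkt_reg(env, insn->dst_reg)) {
		verbose(env, "BPF_XADD stores into R%d %s is not allowed\n",
			insn->dst_reg,
			is_ctx_reg(env, insn->dst_reg) ? "context" : "packet");
		return -EACCES;
	}

	/* check whether the atomic op can read the memory, with
	 * alignment enforced strictly for this one access
	 */
	err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
			       BPF_SIZE(insn->code), BPF_READ, -1,
			       true /* strict_alignment_once */);
	if (err)
		return err;

	/* ... and whether it can write back into the same memory */
	return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
			       BPF_SIZE(insn->code), BPF_WRITE, -1,
			       true /* strict_alignment_once */);
}
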
> 
> Fixes: 17a5267067f3 ("bpf: verifier (add verifier core)")
> Signed-off-by: Daniel Borkmann <daniel@...earbox.net>

Nice catch!
Applied to bpf tree, thanks Daniel.
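
For completeness, a test_verifier-style selftest along these lines
(a sketch for tools/testing/selftests/bpf/test_verifier.c; the exact
error string is an assumption) would now be rejected at load time
instead of crashing at run time:

{
	"xadd/w on packet pointer is rejected",
	.insns = {
	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
		    offsetof(struct xdp_md, data)),
	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
		    offsetof(struct xdp_md, data_end)),
	/* bounds check so the access itself would be safe */
	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8),
	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 2),
	BPF_MOV64_IMM(BPF_REG_0, 1),
	/* xadd into packet memory: must be rejected by the verifier */
	BPF_STX_XADD(BPF_W, BPF_REG_2, BPF_REG_0, 0),
	BPF_MOV64_IMM(BPF_REG_0, 0),
	BPF_EXIT_INSN(),
	},
	.errstr = "BPF_XADD stores into R2 packet",
	.result = REJECT,
	.prog_type = BPF_PROG_TYPE_XDP,
},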
