Message-ID: <51197ebf-a3d8-a3ae-0389-d7e4dae3e833@solarflare.com>
Date:   Fri, 2 Nov 2018 15:42:49 +0000
From:   Edward Cree <ecree@...arflare.com>
To:     Arnaldo Carvalho de Melo <acme@...nel.org>
CC:     Yonghong Song <yhs@...com>, Daniel Borkmann <daniel@...earbox.net>,
        "Jiri Olsa" <jolsa@...hat.com>, Martin Lau <kafai@...com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Linux Networking Development Mailing List 
        <netdev@...r.kernel.org>
Subject: Re: Help with the BPF verifier

On 02/11/18 15:02, Arnaldo Carvalho de Melo wrote:
> Yeah, that didn't work either:

> And the -vv in 'perf trace' didn't seem to map to further details in the
> output of the verifier debug:
Yeah, for log_level 2 you probably need to make source-level changes to either
 perf or libbpf (I think the latter).  It's annoying that essentially no tools
 plumb through an option for that; someone should fix them ;-)
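
For reference, here's roughly what that source-level change boils down to if
 you go straight to the bpf(2) syscall; the prog type, insns and insn_cnt
 below are placeholder assumptions, not perf's actual code:

    /* Sketch only: raw BPF_PROG_LOAD with log_level = 2; prog type and
     * the insns/insn_cnt arguments are placeholder assumptions. */
    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static char log_buf[1 << 20];  /* level-2 output gets big */

    static int load_prog(const struct bpf_insn *insns, unsigned int insn_cnt)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.prog_type = BPF_PROG_TYPE_RAW_TRACEPOINT;
            attr.insns     = (unsigned long)insns;
            attr.insn_cnt  = insn_cnt;
            attr.license   = (unsigned long)"GPL";
            attr.log_buf   = (unsigned long)log_buf;
            attr.log_size  = sizeof(log_buf);
            attr.log_level = 2;  /* per-insn register state, not just errors */

            return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
    }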

> libbpf: -- BEGIN DUMP LOG ---
> libbpf: 
> 0: (bf) r6 = r1
> 1: (bf) r1 = r10
> 2: (07) r1 += -328
> 3: (b7) r7 = 64
> 4: (b7) r2 = 64
> 5: (bf) r3 = r6
> 6: (85) call bpf_probe_read#4
> 7: (79) r1 = *(u64 *)(r10 -320)
> 8: (15) if r1 == 0x101 goto pc+4
>  R0=inv(id=0) R1=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv64 R10=fp0,call_-1
> 9: (55) if r1 != 0x2 goto pc+22
>  R0=inv(id=0) R1=inv2 R6=ctx(id=0,off=0,imm=0) R7=inv64 R10=fp0,call_-1
> 10: (bf) r1 = r6
> 11: (07) r1 += 16
> 12: (05) goto pc+2
> 15: (79) r3 = *(u64 *)(r1 +0)
> dereference of modified ctx ptr R1 off=16 disallowed
Aha, we at least got a different error message this time.
And indeed llvm has done that optimisation, rather than the more obvious
11: r3 = *(u64 *)(r1 +16)
 because it wants to have lots of reads share a single insn.  You may be able
 to defeat that optimisation by adding compiler barriers, idk.  Maybe someone
 with llvm knowledge can figure out how to stop it (ideally, llvm would know
 when it's generating for the bpf backend and not do that).  -O0?  ¯\_(ツ)_/¯
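One barrier trick, as an untested sketch (struct my_ctx and its fields are
 made-up stand-ins for the real ctx type):

    /* Untested sketch: an empty asm with a "memory" clobber between ctx
     * reads, to discourage llvm from funnelling several loads through one
     * shared insn.  struct my_ctx is hypothetical. */
    #define barrier() asm volatile("" ::: "memory")

    struct my_ctx {
            unsigned long long field_a;
            unsigned long long field_b;
    };

    static unsigned long long read_fields(struct my_ctx *ctx)
    {
            unsigned long long a = ctx->field_a;

            barrier();  /* compiler barrier: keep the two reads distinct */

            return a + ctx->field_b;
    }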
Alternatively, your prog looks short enough that maybe you could kick the C
 habit and write it directly in eBPF asm (rough sketch below); that way no-one
 is optimising things behind your back.  (I realise this option won't appeal
 to everyone ;-)
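Something like this, in the same style as the verifier dump above (registers
 and offsets are illustrative, not your actual program):

    r6 = r1                  /* keep the ctx pointer unmodified */
    r3 = *(u64 *)(r6 +16)    /* the ctx offset lives entirely in this insn */
    r0 = 0
    exit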
The reason the verifier disallows this, iirc, is that it needs to be able
 to rewrite the offsets on ctx accesses (see convert_ctx_accesses()) in case
 the underlying kernel struct doesn't match the layout of the ctx ABI.  To do
 this it needs the ctx offset to live entirely in the insn doing the access;
 otherwise different paths could lead to the same insn accessing different ctx
 offsets, each needing a different fixup, which can't be done.
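To make that concrete, a contrived example (not from your dump) of the
 pattern that breaks the per-insn rewrite:

    0: if r7 != 0 goto pc+2
    1: r1 += 16              /* path A: r1 points at ctx offset 16 */
    2: goto pc+1
    3: r1 += 24              /* path B: r1 points at ctx offset 24 */
    4: r3 = *(u64 *)(r1 +0)  /* one load insn, two possible ctx offsets:
                                no single offset fixup can be patched in */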

-Ed
