Message-ID: <5e7ccee5b56f6_65132acbbe7fc5c425@john-XPS-13-9370.notmuch>
Date:   Thu, 26 Mar 2020 08:48:53 -0700
From:   John Fastabend <john.fastabend@...il.com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        John Fastabend <john.fastabend@...il.com>
Cc:     ecree@...arflare.com, yhs@...com, daniel@...earbox.net,
        netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: Re: [bpf-next PATCH 07/10] bpf: test_verifier, bpf_get_stack return
 value add <0

Alexei Starovoitov wrote:
> On Tue, Mar 24, 2020 at 10:39:55AM -0700, John Fastabend wrote:
> > diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
> > index f24d50f..24aa6a0 100644
> > --- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
> > +++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
> > @@ -7,7 +7,7 @@
> >  	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
> >  	BPF_LD_MAP_FD(BPF_REG_1, 0),
> >  	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
> > -	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 28),
> > +	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 29),
> >  	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
> >  	BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)),
> >  	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
> > @@ -16,6 +16,7 @@
> >  	BPF_MOV64_IMM(BPF_REG_4, 256),
> >  	BPF_EMIT_CALL(BPF_FUNC_get_stack),
> >  	BPF_MOV64_IMM(BPF_REG_1, 0),
> > +	BPF_JMP32_REG(BPF_JSLT, BPF_REG_0, BPF_REG_1, 20),
> >  	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
> >  	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
> >  	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
> 
> Yep. The test is wrong.
> But this is cheating ;)
> JSLT should be after shifts.
> 
> The test should have been written as:
> diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
> index f24d50f09dbe..be0758c1bfbd 100644
> --- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
> +++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
> @@ -19,7 +19,7 @@
>         BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
>         BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
>         BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
> -       BPF_JMP_REG(BPF_JSLT, BPF_REG_1, BPF_REG_8, 16),
> +       BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
>         BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8),
>         BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
>         BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),
> 
> That was the intent of the test.
> But in that form it won't work with the current patch set,
> though it should.
> 
> What's more, patches 1 and 5 make test_progs pass,
> but test_progs-no_alu32 fails tests 20 and 61,
> because clang generates the following:
> 14: (85) call bpf_get_stack#67
> 15: (b7) r1 = 0
> 16: (bf) r8 = r0
> 17: (67) r8 <<= 32
> 18: (c7) r8 s>>= 32
> 19: (6d) if r1 s> r8 goto pc+16
> 
> (which is exactly what bpf_get_stack.c test tried to capture)

Ah OK, well it's good we have the pattern captured in the verifier tests
so we don't have to rely on clang generating it.
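
For reference, the source shape that trips this when alu32 is off is just the
helper's int return value getting sign-extended before the signed compare,
roughly (illustrative only, not the actual selftest source):

	int err;

	err = bpf_get_stack(ctx, stack, sizeof(stack), 0);
	if (err < 0)	/* forces the <<32 s>>32 pair without alu32 */
		return 0;

so any program that checks the helper's return value hits the same sequence.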

> 
> I guess no_alu32 may be passing for you because you have that special
> clang optimization that replaces <<32 s>>32 with a more efficient insn,
> but not everyone will be using new clang, so we have to teach the verifier
> to recognize <<32 s>>32.

Ah dang, yes, I added that patch to our toolchain because it helps so
much. But I need to remember to pull from the upstream branch for selftests.
I would prefer we apply that patch upstream, but maybe we need to encode
the old behavior in some more verifier test cases so we avoid breaking
it like I did here.

> Thankfully with your new infra for s32 it should be easy to do.
> In scalar_min_max_lsh() we don't need to do dst_reg->smax_value = S64_MAX;
> when we have a _positive_ s32_max_value
> we can set smax_value = s32_max_value << 32,
> and I think that will be correct.
> scalar_min_max_arsh() shouldn't need any changes.
> The verifier will then restore a valid smax_value after the <<32 s>>32 sequence,
> this test will pass (after fixing s/JSLT/JSGT/), and test_progs-no_alu32
> will too. wdyt?

Looks correct to me for the specific case of <<32; seeing as it's a common
enough pattern, a special case for it makes sense. I'll add a patch for this
with the above explanation.
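
Roughly what I have in mind (untested sketch; the condition and field names
here are my assumptions on top of the s32 bounds series, not the final patch):

	/* inside scalar_min_max_lsh(), where umin_val/umax_val are the
	 * existing locals holding the bounds of the shift amount:
	 */
	if (umin_val == 32 && umax_val == 32 && dst_reg->s32_max_value >= 0)
		/* a constant <<32 of a reg whose low 32 bits have a known
		 * non-negative signed max can't exceed that max shifted into
		 * the upper half, so keep a real bound instead of S64_MAX
		 */
		dst_reg->smax_value = (s64)dst_reg->s32_max_value << 32;
	else
		/* otherwise we still lose all sign bit information */
		dst_reg->smax_value = S64_MAX;
	dst_reg->smin_value = S64_MIN;

With that, the s>>32 that follows gives back a usable smax (the original s32
max), which is what the JSGT version of the test needs.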
