Message-ID: <ac11345e-3036-4f88-96ab-9cff84b5d9ea@fb.com>
Date:   Thu, 26 Mar 2020 16:14:10 -0700
From:   Yonghong Song <yhs@...com>
To:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Lorenz Bauer <lmb@...udflare.com>
CC:     Joe Stringer <joe@...d.net.nz>, bpf <bpf@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Alexei Starovoitov <ast@...nel.org>,
        Eric Dumazet <eric.dumazet@...il.com>,
        Martin Lau <kafai@...com>
Subject: Re: call for bpf progs. Re: [PATCHv2 bpf-next 5/5] selftests: bpf:
 add test for sk_assign



On 3/26/20 2:07 PM, Alexei Starovoitov wrote:
> On Thu, Mar 26, 2020 at 10:13:31AM +0000, Lorenz Bauer wrote:
>>>> +
>>>> +     if (ipv4) {
>>>> +             if (tuple->ipv4.dport != bpf_htons(4321))
>>>> +                     return TC_ACT_OK;
>>>> +
>>>> +             ln.ipv4.daddr = bpf_htonl(0x7f000001);
>>>> +             ln.ipv4.dport = bpf_htons(1234);
>>>> +
>>>> +             sk = bpf_skc_lookup_tcp(skb, &ln, sizeof(ln.ipv4),
>>>> +                                     BPF_F_CURRENT_NETNS, 0);
>>>> +     } else {
>>>> +             if (tuple->ipv6.dport != bpf_htons(4321))
>>>> +                     return TC_ACT_OK;
>>>> +
>>>> +             /* Upper parts of daddr are already zero. */
>>>> +             ln.ipv6.daddr[3] = bpf_htonl(0x1);
>>>> +             ln.ipv6.dport = bpf_htons(1234);
>>>> +
>>>> +             sk = bpf_skc_lookup_tcp(skb, &ln, sizeof(ln.ipv6),
>>>> +                                     BPF_F_CURRENT_NETNS, 0);
>>>> +     }
>>>> +
>>>> +     /* We can't do a single skc_lookup_tcp here, because then the compiler
>>>> +      * will likely spill tuple_len to the stack. This makes it lose all
>>>> +      * bounds information in the verifier, which then rejects the call as
>>>> +      * unsafe.
>>>> +      */
>>>
>>> This is a known issue. For scalars, only constants are restored
>>> properly by the verifier at this moment. I did some hacking before to
>>> enable arbitrary scalars. The fear is that this will make pruning
>>> perform worse. More study is needed here.
>>
>> Off topic, but: this is actually one of the most challenging issues
>> for us when writing BPF. It forces us to have very deep call graphs to
>> hopefully avoid clang spilling the constants. Please let me know if I
>> can help in any way.
> 
> Thanks for bringing this up.
> Yonghong, please correct me if I'm wrong.

Yes. The summary below is correct. For reference, the bcc issue below
documents some of my investigation:
   https://github.com/iovisor/bcc/issues/2463

> I think you've experimented with tracking spilled constants. The first issue
> came with spilling of 4 byte constant. The verifier tracks 8 byte slots and
> lots of places assume that slot granularity. It's not clear yet how to refactor
> the verifier. Ideas, help are greatly appreciated.

I cannot remember exactly what I did then; probably I tracked the spilled
size as well. Since the hack was never peer reviewed, my approach may have
bugs.

> The second concern was pruning, but iirc the experiments were inconclusive.
> selftests/bpf only has old fb progs. Hence, I think, the step zero is for
> everyone to contribute their bpf programs written in C. If we have both
> cilium and cloudflare progs as selftests it will help a lot to guide such long
> lasting verifier decisions.

Yes, this is inconclusive, and I did not do any active investigation here,
since just enhancing non-constant spill tracking won't resolve the above
issue. But I totally agree that if we had an implementation, we should
measure its impact on verifier speed.
