Message-ID: <CAEf4Bzb-nqK0Z=GaWWejriSqqGd6D5Cz_w689N7_51D+daGyvw@mail.gmail.com>
Date: Fri, 19 Jun 2020 11:41:18 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: Daniel Borkmann <daniel@...earbox.net>
Cc: John Fastabend <john.fastabend@...il.com>,
Andrii Nakryiko <andriin@...com>, bpf <bpf@...r.kernel.org>,
Networking <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...com>,
Kernel Team <kernel-team@...com>
Subject: Re: [PATCH bpf-next 1/2] bpf: switch most helper return values from
32-bit int to 64-bit long
On Fri, Jun 19, 2020 at 6:08 AM Daniel Borkmann <daniel@...earbox.net> wrote:
>
> On 6/19/20 2:39 AM, John Fastabend wrote:
> > John Fastabend wrote:
> >> Andrii Nakryiko wrote:
> >>> On Thu, Jun 18, 2020 at 11:58 AM John Fastabend
> >>> <john.fastabend@...il.com> wrote:
> >
> > [...]
> >
> >>> That would be great. Self-tests do work, but having more testing with
> >>> real-world application would certainly help as well.
> >>
> >> Thanks for all the follow up.
> >>
> >> I ran the change through some CI on my side and it passed, so I can
> >> complain about a few shifts here and there, or just update my code, or
> >> just not change the return types on my side, but I'm convinced it's OK
> >> in most cases and helps in some, so...
> >>
> >> Acked-by: John Fastabend <john.fastabend@...il.com>
> >
> > I'll follow this up with a few more selftests to capture a couple of our
> > patterns. These changes are subtle and I worry a bit that the additional
> > <<,s>> pattern could break something.
> >
> > Another one we didn't discuss that I found in our code base is feeding
> > the output of a probe_* helper back into the size field (after some
> > alu ops) of subsequent probe_* call. Unfortunately, the tests I ran
> > today didn't cover that case.
> >
> > I'll put it on the list tomorrow and encode these in selftests. I'll
> > let the maintainers decide if they want to wait for those or not.
>
> Given potential fragility on verifier side, my preference would be that we
> have the known variations all covered in selftests before moving forward in
> order to make sure they don't break in any way. Back in [0] I've seen mostly
> similar cases in the way John mentioned in other projects, iirc, sysdig was
> another one. If both of you could hack up the remaining cases we need to
> cover and then submit a combined series, that would be great. I don't think
> we need to rush this optimization w/o necessary selftests.
There is no rush, but there is also no reason to delay it. I'd rather
land it early in the libbpf release cycle and let people try it in
their prod environments, for those concerned about such code patterns.

I don't have a list of all the patterns that we might need to test.
Going through all open-source BPF source code to identify possible
patterns and then coding them up in minimal selftests is a bit too
much for me, honestly. Additionally, some of those patterns will most
probably be broken in no-ALU32 mode, and making them work with
assembly and other clever tricks is actually where the majority of the
time usually goes. Also, simple selftests might not actually trigger
pathological codegen cases (because in a lot of cases register
spill/pressure triggers different codegen patterns). So I just don't
believe we can have full peace of mind, regardless of how many
selftests we add.
This test_varlen selftest is a simplification of production code we've
had for a long while. We never bothered to contribute it as a selftest
before, which I'd say is our fault as users of BPF. Anyone interested
in ensuring regressions get detected for the way they write BPF code
should distill that code into selftests and contribute them to our
test suite (like we did with PyPerf and Strobemeta, and as Jakub
Sitnicki did recently with his program).
So sure, the maintainers might decide not to land this because of
potential regressions, but I've tried my best to explain why there
shouldn't really be any regressions (after all, int -> long reflects
*reality*, where everything is u64/s64 on return from a BPF helper),
apart from actually testing for the two patterns I knew about.
After all, even in the case of a regression, doing `int bla =
(int)bpf_helper_whatever(...);` is in theory equivalent to what we had
before, so it's an easy fix. Reality might require an extra compiler
barrier after that to force Clang to emit the casting instructions
sooner, but that's another story.
>
> Thanks everyone,
> Daniel
>
> [0] https://lore.kernel.org/bpf/20200421125822.14073-1-daniel@iogearbox.net/