Message-ID: <87wodnq80i.fsf@toke.dk>
Date:   Wed, 02 Oct 2019 21:09:33 +0200
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     John Fastabend <john.fastabend@...il.com>,
        Daniel Borkmann <daniel@...earbox.net>
Cc:     Alexei Starovoitov <ast@...nel.org>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
        Marek Majkowski <marek@...udflare.com>,
        Lorenz Bauer <lmb@...udflare.com>,
        David Miller <davem@...emloft.net>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        netdev@...r.kernel.org, bpf@...r.kernel.org
Subject: RE: [PATCH bpf-next 0/9] xdp: Support multiple programs on a single interface through chain calls

John Fastabend <john.fastabend@...il.com> writes:

> Toke Høiland-Jørgensen wrote:
>> This series adds support for executing multiple XDP programs on a single
>> interface in sequence, through the use of chain calls, as discussed at the Linux
>> Plumbers Conference last month:
>> 
>> https://linuxplumbersconf.org/event/4/contributions/460/
>> 
>> # HIGH-LEVEL IDEA
>> 
>> The basic idea is to express the chain call sequence through a special map type,
>> which contains a mapping from a (program, return code) tuple to another program
>> to run next in the sequence. Userspace can populate this map to express
>> arbitrary call sequences, and update the sequence by updating or replacing the
>> map.
>> 
>> The actual execution of the program sequence is done in bpf_prog_run_xdp(),
>> which will look up the chain sequence map, and if found, will loop through calls
>> to BPF_PROG_RUN, looking up the next XDP program in the sequence based on the
>> previous program ID and return code.
>> 
>> An XDP chain call map can be installed on an interface by means of a new netlink
>> attribute containing an fd pointing to a chain call map. This can be supplied
>> along with the XDP prog fd, so that a chain map is always installed together
>> with an XDP program.
>> 
>> # PERFORMANCE
>> 
>> I performed a simple performance test to get an initial feel for the overhead of
>> the chain call mechanism. This test consists of running only two programs in
>> sequence: One that returns XDP_PASS and another that returns XDP_DROP. I then
>> measure the drop PPS performance and compare it to a baseline of just a single
>> program that only returns XDP_DROP.
>> 
>> For comparison, a test case that uses regular eBPF tail calls to sequence two
>> programs together is also included. Finally, because 'perf' showed that the
>> hashmap lookup was the largest single source of overhead, I also added a test
>> case where I removed the jhash() call from the hashmap code, and just use the
>> u32 key directly as an index into the hash bucket structure.
>> 
>> The performance for these different cases is as follows (with retpolines disabled):
>
> retpolines enabled would also be interesting.

Yes, I'll try that as well.

>> 
>> | Test case                       | Perf      | Add. overhead | Total overhead |
>> |---------------------------------+-----------+---------------+----------------|
>> | Before patch (XDP DROP program) | 31.0 Mpps |               |                |
>> | After patch (XDP DROP program)  | 28.9 Mpps |        2.3 ns |         2.3 ns |
>
> IMO even 1 Mpps overhead is too much for a feature that is primarily
> about ease of use. Sacrificing performance to make userland a bit
> easier is hard to justify for me when XDP _is_ singularly about
> performance. Also that is nearly 10% overhead, which is fairly large.
> So I think going forward the performance gap needs to be removed.

It's not just about ease of use. If we really want XDP to catch on and
be integrated into third-party applications, we *need* to solve the
multi-program problem. Otherwise users are going to have to choose
between running their DDoS protection application and their IDS, which
means no one is going to be able to rely on XDP support, and so no one
is going to bother to implement it. Or at least, I'm afraid it'll end up
that way.

That being said, I agree that the performance impact should be kept at
an absolute minimum. In terms of code it already is: it's a single if
statement to check if a chain map is loaded. I haven't looked into why
that takes 2.3 ns to do yet. I suspect it may just be because we're
taking a cache miss: the chain map pointer is not stored anywhere near
the rest of the XDP data structures...

>> | XDP tail call                   | 26.6 Mpps |        3.0 ns |         5.3 ns |
>> | XDP chain call (no jhash)       | 19.6 Mpps |       13.4 ns |        18.7 ns |
>> | XDP chain call (this series)    | 17.0 Mpps |        7.9 ns |        26.6 ns |
>> 
>> From this it is clear that there is some overhead from this mechanism, but
>> the jhash removal example indicates that it is probably possible to optimise the
>> code to the point where the overhead becomes low enough that it is acceptable.
>
> I'm missing why 'in theory' at least this can't be made as fast as
> tail calls? Again I can't see why someone would lose 30% of their
> performance when a userland program could populate a tail call map for
> the same effect. Sure userland would also have to enforce some program
> standards/conventions but it could be done and at 30% overhead that
> pain is probably worth it IMO.
>
> My thinking though is if we are a bit clever chaining and tail calls
> could be performance-wise equivalent?

Oh, I totally agree that 30% overhead is way too much. I just
prioritised getting this to work and sending it out for comments to get the
conversation going around the API before I spend too much time
optimising the performance bits.

I think it should be possible to get it pretty close to pure tail calls.
A tail call compiles down to a direct jump instruction without leaving
the BPF execution environment, though, so maybe not *exactly* as fast.
But I'd settle for an overhead of, say, a handful of nanoseconds
compared to a pure tail call. Certainly not the tens of nanoseconds I
get right now.

Also, bear in mind that an overhead of say 4-6 nanoseconds only
translates into such a large percentage-wise overhead in this test
because the XDP programs being run don't actually do anything - not even
touch the packet data. As the programs themselves get larger, the
percentage-wise overhead becomes smaller as well. So a test like this
shows the worst-case performance.

-Toke
