Date:   Wed, 22 Sep 2021 22:01:17 +0200
From:   Toke Høiland-Jørgensen <toke@...hat.com>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Zvi Effron <zeffron@...tgames.com>,
        Lorenz Bauer <lmb@...udflare.com>,
        Lorenzo Bianconi <lbianconi@...hat.com>,
        Daniel Borkmann <daniel@...earbox.net>,
        John Fastabend <john.fastabend@...il.com>,
        Network Development <netdev@...r.kernel.org>,
        bpf <bpf@...r.kernel.org>
Subject: Re: Redux: Backwards compatibility for XDP multi-buff

Jakub Kicinski <kuba@...nel.org> writes:

> On Wed, 22 Sep 2021 00:20:19 +0200 Toke Høiland-Jørgensen wrote:
>> >> Neither of those are desirable outcomes, I think; and if we add a
>> >> separate "XDP multi-buff" switch, we might as well make it system-wide?  
>> >
>> > If we have an internal flag 'this driver supports multi-buf xdp', can't we
>> > make xdp_redirect linearize the packet when it is redirected from an
>> > mb-aware driver to a non-mb-aware driver (potentially with corresponding
>> > non-mb-aware xdp progs attached)?  
>> 
>> Hmm, the assumption that XDP frames take up at most one page has been
>> fundamental from the start of XDP. So what does linearise mean in this
>> context? If we get a 9k packet, should we dynamically allocate a
>> multi-page chunk of contiguous memory and copy the frame into that, or
>> were you thinking something else?
>
> My $.02 would be to not care about redirect at all.
>
> It's not like the user experience with redirect is anywhere close 
> to amazing right now. Besides (with the exception of SW devices which
> will likely gain mb support quickly) mixed-HW setups are very rare.
> If the source of the redirect supports mb so will likely the target.

It's not about device support, it's about XDP program support: If I run
an MB-aware XDP program on a physical interface and redirect the (MB)
frame into a container, and there's an XDP program running inside that
container that isn't MB-aware, bugs will ensue. Doesn't matter if the
veth driver itself supports MB...

We could leave that as a "don't do that, then" kind of thing, but that
was what we were proposing (as the "do nothing" option) and got some
pushback on, hence why we're having this conversation :)

-Toke
