Message-ID: <20200126141141.0b773aba@cakuba>
Date:   Sun, 26 Jan 2020 14:11:41 -0800
From:   Jakub Kicinski <kuba@...nel.org>
To:     David Ahern <dsahern@...il.com>
Cc:     Toke Høiland-Jørgensen <toke@...hat.com>,
        David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
        prashantbhole.linux@...il.com, jasowang@...hat.com,
        davem@...emloft.net, jbrouer@...hat.com, mst@...hat.com,
        toshiaki.makita1@...il.com, daniel@...earbox.net,
        john.fastabend@...il.com, ast@...nel.org, kafai@...com,
        songliubraving@...com, yhs@...com, andriin@...com,
        David Ahern <dahern@...italocean.com>
Subject: Re: [PATCH bpf-next 03/12] net: Add IFLA_XDP_EGRESS for XDP
 programs in the egress path

On Sat, 25 Jan 2020 18:43:36 -0700, David Ahern wrote:
> On 1/24/20 8:36 AM, Toke Høiland-Jørgensen wrote:
> > Jakub Kicinski <kuba@...nel.org> writes:
> >> On Thu, 23 Jan 2020 14:33:42 -0700, David Ahern wrote:  
> >>> On 1/23/20 4:35 AM, Toke Høiland-Jørgensen wrote:  
> >>>> David Ahern <dsahern@...nel.org> writes:  
> >>>>> From: David Ahern <dahern@...italocean.com>
> >>>>>
> >>>>> Add IFLA_XDP_EGRESS to if_link.h uapi to handle an XDP program attached
> >>>>> to the egress path of a device. Add rtnl_xdp_egress_fill and helpers as
> >>>>> the egress counterpart to the existing rtnl_xdp_fill. The expectation
> >>>>> is that going forward egress path will acquire the various levels of
> >>>>> attach - generic, driver and hardware.    
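
[ For context, a minimal sketch of the kind of if_link.h addition being
  described; the attribute names and numbering are illustrative only,
  not the patch's actual values:

/* New nested link attribute mirroring IFLA_XDP, filled on dumps by
 * rtnl_xdp_egress_fill().  Sketch only. */
enum {
	IFLA_XDP_EGRESS_UNSPEC,
	IFLA_XDP_EGRESS_FD,		/* program fd to attach (set) */
	IFLA_XDP_EGRESS_ATTACHED,	/* attach mode (get) */
	IFLA_XDP_EGRESS_PROG_ID,	/* program id (get) */
	__IFLA_XDP_EGRESS_MAX,
};
#define IFLA_XDP_EGRESS_MAX (__IFLA_XDP_EGRESS_MAX - 1)
]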
> >>>>
> >>>> How would a 'hardware' attach work for this? As I said in my reply to
> >>>> the previous patch, isn't this explicitly for emulating XDP on the other
> >>>> end of a point-to-point link? How would that work with offloaded
> >>>> programs?  
> >>>
> >>> Nothing about this patch set is limited to point-to-point links.  
> >>
> >> I struggle to understand what the expected semantics of this new
> >> hook are. Is this going to be run on all frames sent to the device
> >> from the stack? All frames from the stack and from XDP_REDIRECT?
> >>
> >> A little hard to figure out the semantics when we start from a funky
> >> device like tun :S  
> > 
> > Yes, that is also why I found this a bit weird. We have discussed plans
> > for an XDP TX hook before:
> > https://github.com/xdp-project/xdp-project/blob/master/xdp-project.org#xdp-hook-at-tx
> > 
> > That TX hook would run for everything at TX, but it would be a separate
> > program type with its own metadata access. Whereas the idea with this
> > series (it seemed to me) was just to be able to "emulate" running a
> > regular RX-side XDP program on egress for devices where this makes
> > sense.
> > 
> > If this series is not meant to implement that "emulation", but rather
> > to be usable for all devices, I really think we should go straight for
> > the full TX hook as discussed earlier...
> 
> The first patch set from Jason and Prashant started from the perspective
> of offloading XDP programs for a guest. Independently, I was looking at
> XDP in the TX path (now referred to as egress to avoid confusion with
> the XDP_TX return type). 

I looked through the commit message and the cover letter again, and you
never explain why you need the egress hook. Could you please clarify
your needs? If it's container-related, maybe what Daniel talked about
at last netconf could be a better solution?

I can't quite square the concept of XDP, which started as a
close-to-the-metal BPF hook for HW drivers, with this heavily
SW-focused addition.

> Jason and Prashant were touching some of the
> same code paths in the tun driver that I needed for XDP in the Tx path,
> so we decided to consolidate and have XDP egress done first and then
> offload of VMs as a followup. Offload in virtio_net can be done very
> similarly to how it is done in nfp -- the program is passed to the host
> as a hardware-level attach mode, and the driver verifies the program
> can be offloaded (e.g., does not contain helpers that expose
> host-specific data like the fib lookup helper).
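
[ Roughly the kind of driver-side verification described here, as a
  sketch -- not actual virtio_net code; the function name is invented
  and the helper list is just the fib lookup example from this mail:

#include <linux/filter.h>

static bool prog_safe_to_offload(const struct bpf_prog *prog)
{
	const struct bpf_insn *insn = prog->insnsi;
	int i;

	for (i = 0; i < prog->len; i++, insn++) {
		/* direct helper calls carry the helper id in imm */
		if (insn->code == (BPF_JMP | BPF_CALL) &&
		    insn->src_reg != BPF_PSEUDO_CALL &&
		    insn->imm == BPF_FUNC_fib_lookup)
			return false;	/* host FIB must not leak to guest */
	}
	return true;
}
]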

<rant>

I'd ask to please never compare this work to the nfp offload. Netronome
was able to open up their NIC down to the instruction set level, with
the JIT in tree and the rest of the FW open source:

https://github.com/Netronome/nic-firmware/

and that work is now used as precedent for something that risks turning
the kernel into a damn control plane for proprietary clouds?

I can see how they may seem similar in operational terms, but for
people who care about open source they couldn't be more different.

</rant>

> At this point, you need to stop thinking solely from the perspective of
> tun or tap and VM offload; think about this in terms of the ability to
> run an XDP program on the egress path at an appropriate place in the
> NIC driver, one that covers both skbs and xdp_frames (e.g., on a
> REDIRECT). This has been discussed before as a need (e.g., Toke's
> reference above), and I am trying to get this initial support done.
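
[ The rough shape of the run point being argued for, as a sketch; the
  dev->xdp_egress_prog field and the drop-on-anything-but-PASS policy
  are hypothetical:

#include <linux/filter.h>
#include <linux/netdevice.h>

/* Called from the driver xmit path so it sees the frame whether it
 * arrived as an skb or as an xdp_frame via REDIRECT. */
static bool xdp_egress_pass(struct net_device *dev, struct xdp_buff *xdp)
{
	struct bpf_prog *prog = READ_ONCE(dev->xdp_egress_prog);

	if (!prog)
		return true;

	return bpf_prog_run_xdp(prog, xdp) == XDP_PASS;
}
]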

A TX hook related to queuing is a very different beast than just an RX
hook flipped around. The queuing is a problem that indeed needs work,
but just adding a mirror RX hook does not solve that, and it
establishes semantics which may be counterproductive. That's why I was
asking for clear semantics.

> I very much wanted to avoid copy-paste-modify for the entire XDP API for
> this. For the most part XDP means eBPF at the NIC driver / hardware
> level (obviously with the exception of generic mode). The goal is
> tempered by the need for the verifier to reject rx-only fields in the
> xdp_md context. Hence the use of an attach_type - existing
> infrastructure to test and reject the accesses.
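
[ In the spirit of what's described: the existing is_valid_access
  infrastructure keyed off expected_attach_type.  A sketch; the
  BPF_XDP_EGRESS attach type value and the exact field list are
  illustrative:

static bool xdp_is_valid_access(int off, int size,
				enum bpf_access_type type,
				const struct bpf_prog *prog,
				struct bpf_insn_access_aux *info)
{
	/* egress programs must not read rx-only fields */
	if (prog->expected_attach_type == BPF_XDP_EGRESS &&
	    off == offsetof(struct xdp_md, rx_queue_index))
		return false;

	/* ... the existing xdp_md access checks would follow ... */
	return true;
}
]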

For the offload case, host rx queue == dev PCI tx queue and vice versa.
So other than the name, the rejection makes no sense. Just add a union
to xdp_md so both the tx and rx names can be used.
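
[ What that union might look like in the uapi, as a sketch; the
  tx_queue_index name is hypothetical:

struct xdp_md {
	__u32 data;
	__u32 data_end;
	__u32 data_meta;
	__u32 ingress_ifindex;	/* rxq->dev->ifindex */
	union {
		__u32 rx_queue_index;	/* rxq->queue_index */
		__u32 tx_queue_index;	/* same slot, egress-side name */
	};
};
]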
