Message-ID: <20200126134933.2514b2ab@carbon>
Date:   Sun, 26 Jan 2020 13:49:33 +0100
From:   Jesper Dangaard Brouer <jbrouer@...hat.com>
To:     David Ahern <dsahern@...il.com>
Cc:     Toke Høiland-Jørgensen <toke@...hat.com>,
        Jakub Kicinski <kuba@...nel.org>,
        David Ahern <dsahern@...nel.org>, netdev@...r.kernel.org,
        prashantbhole.linux@...il.com, jasowang@...hat.com,
        davem@...emloft.net, mst@...hat.com, toshiaki.makita1@...il.com,
        daniel@...earbox.net, john.fastabend@...il.com, ast@...nel.org,
        kafai@...com, songliubraving@...com, yhs@...com, andriin@...com,
        David Ahern <dahern@...italocean.com>
Subject: Re: [PATCH bpf-next 03/12] net: Add IFLA_XDP_EGRESS for XDP
 programs in the egress path

On Sat, 25 Jan 2020 18:43:36 -0700
David Ahern <dsahern@...il.com> wrote:

> On 1/24/20 8:36 AM, Toke Høiland-Jørgensen wrote:
> > Jakub Kicinski <kuba@...nel.org> writes:
> >   
> >> On Thu, 23 Jan 2020 14:33:42 -0700, David Ahern wrote:  
> >>> On 1/23/20 4:35 AM, Toke Høiland-Jørgensen wrote:  
> >>>> David Ahern <dsahern@...nel.org> writes:  
> >>>>> From: David Ahern <dahern@...italocean.com>
> >>>>>
> >>>>> Add IFLA_XDP_EGRESS to if_link.h uapi to handle an XDP program attached
> >>>>> to the egress path of a device. Add rtnl_xdp_egress_fill and helpers as
> >>>>> the egress counterpart to the existing rtnl_xdp_fill. The expectation
> >>>>> is that going forward egress path will acquire the various levels of
> >>>>> attach - generic, driver and hardware.    
> >>>>
> >>>> How would a 'hardware' attach work for this? As I said in my reply to
> >>>> the previous patch, isn't this explicitly for emulating XDP on the other
> >>>> end of a point-to-point link? How would that work with offloaded
> >>>> programs?  
> >>>
> >>> Nothing about this patch set is limited to point-to-point links.  
> >>
> >> I struggle to understand what the expected semantics of this new
> >> hook are. Is this going to be run on all frames sent to the device
> >> from the stack? All frames from the stack and from XDP_REDIRECT?
> >>
> >> A little hard to figure out the semantics when we start from a funky
> >> device like tun :S  
> > 
> > Yes, that is also why I found this a bit weird. We have discussed plans
> > for an XDP TX hook before:
> > https://github.com/xdp-project/xdp-project/blob/master/xdp-project.org#xdp-hook-at-tx
> > 
> > That TX hook would run for everything at TX, but it would be a separate
> > program type with its own metadata access. Whereas the idea with this
> > series seemed (to me) to be just to be able to "emulate" running a regular
> > RX-side XDP program on egress for devices where this makes sense.
> > 
> > If this series is not meant to implement that "emulation", but rather to be
> > usable for all devices, I really think we should go straight for the
> > full TX hook as discussed earlier...
> >   
> 
> The first patch set from Jason and Prashant started from the perspective
> of offloading XDP programs for a guest. Independently, I was looking at
> XDP in the TX path (now referred to as egress to avoid confusion with
> the XDP_TX return type). Jason and Prashant were touching some of the
> same code paths in the tun driver that I needed for XDP in the Tx path,
> so we decided to consolidate and have XDP egress done first and then
> offload for VMs as a follow-up. Offload in virtio_net can be done very
> similarly to how it is done in nfp -- the program is passed to the host as
> a hardware-level attach mode, and the driver verifies the program can be
> offloaded (e.g., does not contain helpers that expose host-specific data
> like the fib lookup helper).
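
(A sketch of the kind of per-insn check such an offload verifier could do;
virtnet_xdp_offload_check_insn() is a hypothetical name, and only the
helper-id test is shown:)

/* Hypothetical: reject helpers that would expose host-specific data to
 * a guest when a program is passed down for "hardware" (offload) attach.
 */
static int virtnet_xdp_offload_check_insn(const struct bpf_insn *insn)
{
	if (insn->code == (BPF_JMP | BPF_CALL) &&
	    insn->src_reg != BPF_PSEUDO_CALL &&
	    insn->imm == BPF_FUNC_fib_lookup)
		return -EOPNOTSUPP;	/* host FIB must stay invisible */
	return 0;
}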
> 
> At this point, you need to stop thinking solely from the perspective of
> tun or tap and VM offload; think of this as the ability to run an
> XDP program on the egress path at an appropriate place in the NIC driver,
> one that covers both skbs and xdp_frames (e.g., on a REDIRECT).

Yes, please. I want this NIC TX hook to see both SKBs and xdp_frames.
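
Roughly, I'd expect that to mean two call sites in a driver: ndo_start_xmit()
for SKBs and ndo_xdp_xmit() for redirected frames. A sketch with hypothetical
xdp_egress_run_skb()/xdp_egress_run_frame() helpers (drop/free handling
elided):

static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* hypothetical helper: run the egress prog on the skb path */
	if (xdp_egress_run_skb(dev, skb) != XDP_PASS)
		return NETDEV_TX_OK;		/* consumed or dropped */
	/* ... normal TX descriptor setup ... */
	return NETDEV_TX_OK;
}

static int foo_xdp_xmit(struct net_device *dev, int n,
			struct xdp_frame **frames, u32 flags)
{
	int i, sent = 0;

	for (i = 0; i < n; i++) {
		/* hypothetical helper: run the egress prog on redirected frames */
		if (xdp_egress_run_frame(dev, frames[i]) != XDP_PASS)
			continue;
		/* ... queue frames[i] on the TX ring ... */
		sent++;
	}
	return sent;
}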


> This has
> been discussed before as a need (e.g., Toke's reference above), and I am
> trying to get this initial support done.
> 
> I very much wanted to avoid copy-paste-modify for the entire XDP API for
> this. For the most part XDP means ebpf at the NIC driver / hardware
> level (obviously with the exception of generic mode). The goal is
> tempered by the need for the verifier to reject rx entries in the
> xdp_md context. Hence the use of an attach_type - existing
> infrastructure to test and reject the accesses.
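
(To make that concrete, a minimal sketch of the verifier side; BPF_XDP_EGRESS
is a hypothetical attach type, and only the egress-specific rejection is
shown -- everything else would stay as in the existing xdp_is_valid_access():)

/* Sketch: reject RX-only xdp_md fields when the program was loaded with
 * a (hypothetical) BPF_XDP_EGRESS expected_attach_type.
 */
static bool xdp_egress_md_access_ok(int off, const struct bpf_prog *prog)
{
	if (prog->expected_attach_type != BPF_XDP_EGRESS)
		return true;

	switch (off) {
	case offsetof(struct xdp_md, ingress_ifindex):
	case offsetof(struct xdp_md, rx_queue_index):
		return false;	/* no RX metadata on the egress path */
	default:
		return true;
	}
}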
> 
> That said, Martin's comment throws a wrench in the goal: if the existing
> code does not enforce expected_attach_type, then that option cannot be
> used, in which case I guess I have to go with a new program type
> (BPF_PROG_TYPE_XDP_EGRESS) which takes a new context (xdp_egress_md),
> has different return codes, etc.

Talking about return codes: do the XDP return codes make sense for
this EGRESS hook (if we think of this as egress on the real NIC)?

E.g. XDP_REDIRECT would have to be supported, which is interesting, but
also has implications (like looping packets).

E.g. what is the intended semantics/action of the XDP_TX return code?

E.g. I'm considering adding an XDP_CONGESTED return code that can cause
backpressure towards the qdisc layer.

Also consider that if this EGRESS hook uses the standard prog type for
XDP (BPF_PROG_TYPE_XDP), then we need to convert xdp_frame to xdp_buff
(and also convert SKBs to xdp_buff).
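
The xdp_frame -> xdp_buff direction is at least mechanical -- essentially
what veth already does on its RX side in veth_xdp_rcv_one(). A sketch (what
to put in xdp->rxq on egress is an open question):

static void egress_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp)
{
	/* the xdp_frame itself sits at the top of the headroom */
	void *hard_start = frame->data - frame->headroom - sizeof(*frame);

	xdp->data_hard_start = hard_start;
	xdp->data = frame->data;
	xdp->data_end = frame->data + frame->len;
	xdp->data_meta = frame->data - frame->metasize;
	xdp->rxq = NULL;	/* what does "rxq" even mean on egress? */
}

The SKB case is the costly one, as it means pulling the data into a form an
XDP program can look at (linear, with enough headroom).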

Are we sure that reusing the same bpf prog type is the right choice?
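
If not, a dedicated context could expose only TX-relevant fields, something
like (purely hypothetical layout, mirroring the style of struct xdp_md):

struct xdp_egress_md {
	__u32 data;
	__u32 data_end;
	__u32 data_meta;
	__u32 egress_ifindex;	/* netdev the frame is leaving through */
	__u32 tx_queue_index;	/* TX queue selected for the frame */
};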

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
