Message-ID: <20200309093932.2a738ab1@carbon>
Date:   Mon, 9 Mar 2020 09:39:32 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     John Fastabend <john.fastabend@...il.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        bpf@...r.kernel.org, gamemann@...clan.com, lrizzo@...gle.com,
        netdev@...r.kernel.org, Daniel Borkmann <borkmann@...earbox.net>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Toke Høiland-Jørgensen 
        <toke@...e.dk>, brouer@...hat.com
Subject: Re: [bpf-next PATCH] xdp: accept that XDP headroom isn't always
 equal XDP_PACKET_HEADROOM

On Fri, 06 Mar 2020 08:06:35 -0800
John Fastabend <john.fastabend@...il.com> wrote:

> Alexei Starovoitov wrote:
> > On Tue, Mar 03, 2020 at 12:46:58PM +0100, Jesper Dangaard Brouer wrote:  
[...]
> > > 
> > > For generic-XDP, if the headroom is less, still expand the headroom
> > > to XDP_PACKET_HEADROOM, as this is the default in most XDP drivers.
> > > 
> > > Tested on ixgbe with xdp_rxq_info --skb-mode and --action XDP_DROP:
> > > - Before: 4,816,430 pps
> > > - After : 7,749,678 pps
> > > (Note that ixgbe in native mode XDP_DROP 14,704,539 pps)
> > >   
> 
> But why do we care about generic-XDP performance?  Seems users should
> just use XDP proper on ixgbe and i40e, since it's supported there.
>
[...]
> 
> Or just let ixgbe/i40e be slow? I guess I'm missing some context?

The context originates from an email thread[1] on the XDP-newbies list,
about a production setup (anycast routing of gaming traffic[3]) that
used XDP, and in practice XDP-generic (without the users realizing it).
They were using the Intel igb driver (which doesn't have native-XDP),
and changing to e.g. ixgbe (or i40e) is challenging, as it requires
physical access to the PoP (Point of Presence), and upgrading to a 10G
port at the PoP also has costs associated with it.

Why not simply use TC-BPF (cls_bpf) instead of XDP?  I've actually been
promoting that more people should use TC-BPF, also in combination with
XDP[2].  The reason it makes sense to stick with XDP here is that it
lets them deploy the same software on all their PoP servers, regardless
of which NIC driver is available.
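
(To illustrate the "same software everywhere" point, here is a rough
userspace sketch, not part of this patch: try native XDP first and fall
back to generic XDP with libbpf, so the exact same object file loads on
igb, ixgbe or i40e boxes.  The interface and object names are made up
for the example.)

  #include <bpf/libbpf.h>
  #include <linux/if_link.h>
  #include <net/if.h>

  /* Load an XDP object and attach its first program to @ifname,
   * preferring native (driver) XDP and falling back to generic XDP.
   */
  int attach_xdp_any_mode(const char *ifname, const char *obj_path)
  {
          struct bpf_object *obj;
          struct bpf_program *prog;
          int ifindex = if_nametoindex(ifname);
          int prog_fd, err;

          if (!ifindex)
                  return -1;

          obj = bpf_object__open_file(obj_path, NULL);
          if (libbpf_get_error(obj) || bpf_object__load(obj))
                  return -1;

          prog = bpf_program__next(NULL, obj);  /* first prog in object */
          prog_fd = bpf_program__fd(prog);

          /* Prefer native (driver) XDP ... */
          err = bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_DRV_MODE);
          if (err)
                  /* ... fall back to generic XDP on drivers like igb */
                  err = bpf_set_link_xdp_fd(ifindex, prog_fd,
                                            XDP_FLAGS_SKB_MODE);

          return err;
  }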

Performance-wise, I will admit that I've explicitly chosen not to
optimize XDP-generic, and I've even seen it as a good thing that we
have this reallocation penalty.  Given the uniform software deployment
argument and my measurements in [1], I've changed my mind.  For the igb
driver I'm not motivated to implement XDP-native, because a newer Intel
CPU can handle wirespeed even with the reallocations, but doing these
reallocations is simply wasteful.  "Allowing" these 1Gbit/s NICs to
work more optimally with XDP-generic lets us skip converting these
drivers to XDP-native, and as the HW gets upgraded they will transition
seamlessly to XDP-native.
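
(For reference, the reallocation penalty is the pskb_expand_head() +
skb_linearize() work that the generic-XDP path does whenever an SKB
arrives cloned, non-linear, or with less than XDP_PACKET_HEADROOM of
headroom.  Simplified from memory, not the patched code, it is roughly:)

          if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
              skb_headroom(skb) < XDP_PACKET_HEADROOM) {
                  int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);

                  /* Reallocate to get XDP_PACKET_HEADROOM bytes of
                   * headroom, then linearize; this per-packet copy is
                   * the cost shown in the --skb-mode numbers above.
                   */
                  if (pskb_expand_head(skb, hroom > 0 ? hroom : 0, 0,
                                       GFP_ATOMIC))
                          goto do_drop;
                  if (skb_linearize(skb))
                          goto do_drop;
          }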


[1] https://www.spinics.net/lists/xdp-newbies/msg01548.html
[2] https://github.com/xdp-project/xdp-cpumap-tc
[3] https://gitlab.com/Dreae/compressor/
-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
