Message-ID: <CAMOZA0LzKZczQBcGFO8q44QJ1=6rv-61nruqxRK4k05-gFWaGw@mail.gmail.com>
Date:   Fri, 6 Mar 2020 17:16:38 +0100
From:   Luigi Rizzo <lrizzo@...gle.com>
To:     John Fastabend <john.fastabend@...il.com>
Cc:     Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>,
        bpf@...r.kernel.org, gamemann@...clan.com,
        Network Development <netdev@...r.kernel.org>,
        Daniel Borkmann <borkmann@...earbox.net>,
        Alexander Duyck <alexander.duyck@...il.com>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        Toke Høiland-Jørgensen <toke@...e.dk>
Subject: Re: [bpf-next PATCH] xdp: accept that XDP headroom isn't always equal XDP_PACKET_HEADROOM

On Fri, Mar 6, 2020 at 5:06 PM John Fastabend <john.fastabend@...il.com> wrote:
>
> Alexei Starovoitov wrote:
> > On Tue, Mar 03, 2020 at 12:46:58PM +0100, Jesper Dangaard Brouer wrote:
...
> > > Tested on ixgbe with xdp_rxq_info --skb-mode and --action XDP_DROP:
> > > - Before: 4,816,430 pps
> > > - After : 7,749,678 pps
> > > (Note that ixgbe in native mode XDP_DROP 14,704,539 pps)
> > >
>
> But why do we care about generic-XDP performance? It seems users should
> just use XDP proper on ixgbe and i40e, where it's supported.

I think the point was to show the performance benefit of skipping the
normalization (admittedly for a specific workload, tinygrams, i.e.
minimum-size packets; my other patch to control xdpgeneric_linearize
covered a different range of packet sizes).
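
For reference, the normalization in question looks roughly like the
following (a simplified sketch of the check in
netif_receive_generic_xdp() in net/core/dev.c; the exact conditions and
expand sizes vary by kernel version):

	/* Generic XDP must hand the program a linear skb with at
	 * least XDP_PACKET_HEADROOM bytes of headroom, so skbs that
	 * don't qualify get reallocated before the program runs.
	 */
	if (skb_cloned(skb) || skb_is_nonlinear(skb) ||
	    skb_headroom(skb) < XDP_PACKET_HEADROOM) {
		int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);

		/* This copy/expand is the per-packet cost that gets
		 * skipped when less headroom is acceptable.
		 */
		if (pskb_expand_head(skb, hroom > 0 ? hroom : 0, 0,
				     GFP_ATOMIC))
			goto do_drop;
		if (skb_linearize(skb))
			goto do_drop;
	}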

On a side note, I think it would be more useful to report times in ns/pkt,
as those can be compared across other drivers too. Specifically, here I
would have written:

  Before: average 207 ns/pkt (1s / 4.816 Mpps)
  After:  average 129 ns/pkt (1s / 7.750 Mpps)
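
The conversion is just 1e9 / pps; a trivial standalone snippet, with the
rates above hard-coded purely for illustration:

	#include <stdio.h>

	/* Average per-packet cost: one second (1e9 ns) divided by the
	 * measured packet rate.
	 */
	static double pps_to_ns_per_pkt(double pps)
	{
		return 1e9 / pps;
	}

	int main(void)
	{
		printf("Before: %.1f ns/pkt\n", pps_to_ns_per_pkt(4816430.0));
		printf("After:  %.1f ns/pkt\n", pps_to_ns_per_pkt(7749678.0));
		return 0;
	}

which prints 207.6 and 129.0 ns/pkt respectively.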

cheers
luigi
