Message-ID: <20170420001331.GB38173@ast-mbp.thefacebook.com>
Date:   Wed, 19 Apr 2017 17:13:33 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     Andy Gospodarek <andy@...yhouse.net>
Cc:     John Fastabend <john.fastabend@...il.com>,
        David Miller <davem@...emloft.net>, michael.chan@...adcom.com,
        netdev@...r.kernel.org, xdp-newbies@...r.kernel.org
Subject: Re: [PATCH v4 net-next RFC] net: Generic XDP

On Wed, Apr 19, 2017 at 04:25:43PM -0400, Andy Gospodarek wrote:
> On Wed, Apr 19, 2017 at 10:44:59AM -0700, John Fastabend wrote:
> > On 17-04-19 10:17 AM, Alexei Starovoitov wrote:
> > > On Wed, Apr 19, 2017 at 10:29:03AM -0400, Andy Gospodarek wrote:
> > >>
> > >> I ran this on top of a card that uses the bnxt_en driver on a desktop
> > >> class system with an i7-6700 CPU @ 3.40GHz, sending a single stream of
> > >> UDP traffic with flow control disabled and saw the following (all stats
> > >> in Million PPS).
> > >>
> > >>                 xdp1                xdp2            xdp_tx_tunnel
> > >> Generic XDP      7.8    5.5 (1.3 actual)         4.6 (1.1 actual)
> > >> Optimized XDP   11.7                 9.7                      4.6
> > > 
> > > Nice! Thanks for testing.
> > > 
> > >> One thing to note is that the Generic XDP case shows a difference between the
> > >> rate reported by the application and the actual rate seen on the wire.  I
> > >> did not debug where the drops are happening and what counter needs to be
> > >> incremented to note this -- I'll add that to my TODO list.  The
> > >> Optimized XDP case does not have a difference in reported vs actual
> > >> frames on the wire.
> > > 
> > > The missed packets are probably due to the xmit queue being full.
> > > We need an 'xdp_tx_full' counter in:
> > > +       if (free_skb) {
> > > +               trace_xdp_exception(dev, xdp_prog, XDP_TX);
> > > +               kfree_skb(skb);
> > > +       }
> > > like in-driver xdp does.
> > > It's surprising that tx becomes full so often. Maybe it's bnxt-specific behavior?
> > 
> > hmm, as a data point I get better numbers than 1.3Mpps running through the qdisc
> > layer with pktgen, so it seems like something is wrong with the driver perhaps? If
> 
> I get ~6.5Mpps on a single core with pktgen, so inconclusive for now....

Maybe your tx queue is simply smaller than your rx queue?
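
For reference, here is a minimal sketch, not part of the posted RFC, of how an
'xdp_tx_full'-style drop counter could be wired into a generic_xdp_tx()-like
path. The function body mirrors the hunk quoted above; the counter name, the
helper name, and the exact placement are assumptions for illustration only:

/*
 * Hypothetical sketch -- not the posted patch.  Assumes it lives in
 * net/core/dev.c next to the RFC's generic_xdp_tx() helper, where the
 * HARD_TX_LOCK()/HARD_TX_UNLOCK() macros are defined.  The per-cpu
 * counter makes the gap between packets reported by the application
 * and packets seen on the wire visible somewhere.
 */
#include <linux/netdevice.h>
#include <linux/percpu.h>
#include <linux/skbuff.h>
#include <linux/bpf.h>
#include <trace/events/xdp.h>

/* per-cpu count of XDP_TX frames dropped because xmit did not complete */
static DEFINE_PER_CPU(unsigned long, xdp_generic_tx_full);

static void generic_xdp_tx_sketch(struct sk_buff *skb, struct bpf_prog *xdp_prog)
{
	struct net_device *dev = skb->dev;
	struct netdev_queue *txq;
	bool free_skb = true;
	int cpu, rc;

	txq = netdev_pick_tx(dev, skb, NULL);
	cpu = smp_processor_id();
	HARD_TX_LOCK(dev, txq, cpu);
	if (!netif_xmit_stopped(txq)) {
		rc = netdev_start_xmit(skb, dev, txq, 0);
		if (dev_xmit_complete(rc))
			free_skb = false;
	}
	HARD_TX_UNLOCK(dev, txq);

	if (free_skb) {
		/* queue stopped or xmit failed: count it, like in-driver XDP */
		this_cpu_inc(xdp_generic_tx_full);
		trace_xdp_exception(dev, xdp_prog, XDP_TX);
		kfree_skb(skb);
	}
}

If the TX ring really is the bottleneck, comparing the rx and tx ring sizes
(for example with 'ethtool -g' on the bnxt_en interface) before re-running the
xdp2 test would help confirm it.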
