Message-ID: <20170419171753.GA12838@ast-mbp.thefacebook.com>
Date: Wed, 19 Apr 2017 10:17:56 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Andy Gospodarek <andy@...yhouse.net>
Cc: David Miller <davem@...emloft.net>, michael.chan@...adcom.com,
netdev@...r.kernel.org, xdp-newbies@...r.kernel.org
Subject: Re: [PATCH v4 net-next RFC] net: Generic XDP
On Wed, Apr 19, 2017 at 10:29:03AM -0400, Andy Gospodarek wrote:
>
> I ran this on top of a card that uses the bnxt_en driver on a desktop
> class system with an i7-6700 CPU @ 3.40GHz, sending a single stream of
> UDP traffic with flow control disabled and saw the following (all stats
> in Million PPS).
>
>                 xdp1    xdp2                xdp_tx_tunnel
> Generic XDP     7.8     5.5 (1.3 actual)    4.6 (1.1 actual)
> Optimized XDP   11.7    9.7                 4.6
Nice! Thanks for testing.
> One thing to note is that in the Generic XDP case the rates reported
> by the application differ from the actual rates seen on the wire. I
> did not debug where the drops are happening or which counter needs to
> be incremented to record them -- I'll add that to my TODO list. The
> Optimized XDP case shows no difference between reported and actual
> frames on the wire.
The missed packets are probably due to the xmit queue being full.
We need an 'xdp_tx_full' counter in:
+ if (free_skb) {
+ trace_xdp_exception(dev, xdp_prog, XDP_TX);
+ kfree_skb(skb);
+ }
like in-driver xdp does.
It's surprising that the tx queue becomes full so often. Maybe it's
bnxt-specific behavior?
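
A minimal sketch of what such a counter could look like (the
'xdp_tx_full' name, the per-cpu placement, and the helper are
illustrative assumptions, not part of the posted patch):

	/* Sketch: count generic-XDP TX drops caused by a full xmit
	 * queue, separately from other XDP exceptions. */
	#include <linux/filter.h>	/* struct bpf_prog, XDP_TX */
	#include <linux/netdevice.h>
	#include <linux/skbuff.h>
	#include <trace/events/xdp.h>	/* trace_xdp_exception() */

	static DEFINE_PER_CPU(unsigned long, xdp_tx_full);

	/* Called when generic XDP fails to hand the skb to the
	 * driver's xmit queue (the free_skb branch quoted above). */
	static void generic_xdp_tx_drop(struct net_device *dev,
					struct bpf_prog *xdp_prog,
					struct sk_buff *skb)
	{
		this_cpu_inc(xdp_tx_full);	/* queue-full drops */
		trace_xdp_exception(dev, xdp_prog, XDP_TX);
		kfree_skb(skb);
	}

How to expose it (ethtool stats, a debug counter, etc.) is a separate
question; the point is just to account for the queue-full case
distinctly, so reported vs actual rates can be reconciled.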
> I agree with all those who have asserted that this is a great tool for
> those who want to get started with XDP but do not have hardware, so I'd
> say it's ready to have the 'RFC' tag dropped. Thanks for pushing this
> forward, Dave! :-)
+1