Message-ID: <20180913153656.GA31933@apalos>
Date: Thu, 13 Sep 2018 18:36:56 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: netdev@...r.kernel.org, jaswinder.singh@...aro.org,
ard.biesheuvel@...aro.org, masami.hiramatsu@...aro.org,
arnd@...db.de, mykyta.iziumtsev@...aro.org, bjorn.topel@...el.com,
magnus.karlsson@...el.com, daniel@...earbox.net, ast@...nel.org
Subject: Re: [net-next, PATCH 2/2, v2] net: socionext: add XDP support
On Thu, Sep 13, 2018 at 04:32:06PM +0200, Jesper Dangaard Brouer wrote:
> On Wed, 12 Sep 2018 12:29:15 +0300
> Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
>
> > On Wed, Sep 12, 2018 at 11:25:24AM +0200, Jesper Dangaard Brouer wrote:
> > > On Wed, 12 Sep 2018 12:02:38 +0300
> > > Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
> > >
> > > > static const struct net_device_ops netsec_netdev_ops = {
> > > > .ndo_init = netsec_netdev_init,
> > > > .ndo_uninit = netsec_netdev_uninit,
> > > > @@ -1430,6 +1627,7 @@ static const struct net_device_ops netsec_netdev_ops = {
> > > > .ndo_set_mac_address = eth_mac_addr,
> > > > .ndo_validate_addr = eth_validate_addr,
> > > > .ndo_do_ioctl = netsec_netdev_ioctl,
> > > > + .ndo_bpf = netsec_xdp,
> > > > };
> > > >
> > >
> > > You have not implemented ndo_xdp_xmit.
> > >
> > > Thus, you have "only" implemented the RX side of XDP_REDIRECT. Which
> > > allows you to do, cpumap and AF_XDP redirects, but not allowing other
> > > drivers to XDP send out this device.
> >
> > Correct, that was the planning, is ndo_xdp_xmit() needed for the patch or
> > is the patch message just misleading and i should change that ?
>
> Yes, I think you should ALSO implement ndo_xdp_xmit, maybe as a separate
> patch, but in the same series. (Our experience is that if we don't
> require this, people forget to complete this part of the XDP support).
Ok, makes sense. I've already started on that and should have something soon.
>
> Also you XDP_TX is not optimal, as it (looks like) you flush TX on
> every send.
Yes, it does. The driver queues packet by packet (in its default skb
implementation), so I just did the same. Agreed, it's far from optimal;
I'll see if I can change that in the next version.
>
> BTW, do you have any performance numbers?
Yes, XDP_TX does ~330 kpps and XDP_REDIRECT ~340 kpps (dropping packets)
with 64-byte packets. I am not really sure whether this is a hardware
limitation due to only using a single queue. I used ./samples/bpf/xdpsock
for AF_XDP and ./samples/bpf/xdp2 for XDP_TX; I hope I am running the
right tests.
The default Rx path does ~220 kpps with the improved memory allocation
scheme, so we do see some improvement, although we are still far from
line rate.
The default Tx path seems to hang after some point with a txq full
message, so I don't have any precise numbers for that.
This change to the driver started as an investigation into using AF_XDP
for Time-Sensitive Networking setups. The offloading works wonders there,
since the Rx-path latency is reduced *A LOT* (more than 10x in my case).
Another thing I considered is that Bjorn is right: since I only have one
shared txq, I need locking to avoid race conditions.
Once again thanks for reviewing this
/Ilias