Date:   Thu, 4 Jul 2019 12:39:02 +0300
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Ivan Khoronzhuk <ivan.khoronzhuk@...aro.org>,
        grygorii.strashko@...com, hawk@...nel.org, davem@...emloft.net,
        ast@...nel.org, linux-kernel@...r.kernel.org,
        linux-omap@...r.kernel.org, xdp-newbies@...r.kernel.org,
        netdev@...r.kernel.org, daniel@...earbox.net,
        jakub.kicinski@...ronome.com, john.fastabend@...il.com
Subject: Re: [PATCH v6 net-next 5/5] net: ethernet: ti: cpsw: add XDP support

On Thu, Jul 04, 2019 at 11:19:39AM +0200, Jesper Dangaard Brouer wrote:
> On Wed,  3 Jul 2019 13:19:03 +0300
> Ivan Khoronzhuk <ivan.khoronzhuk@...aro.org> wrote:
> 
> > Add XDP support based on the rx page_pool allocator, one frame per
> > page. The page pool allocator is used under the assumption that only
> > one rx_handler is running simultaneously. DMA map/unmap is reused
> > from the page pool even though there is no need to map the whole page.
> > 
> > Due to the specifics of cpsw, the same TX/RX handler can be used by
> > two network devices, so special fields are added to the buffer to
> > identify the interface a frame is destined to. Thus XDP works for
> > both interfaces, which makes it easy to test xdp redirect between
> > the two interfaces. Also, each rx queue has its own page pool, which
> > is common to both netdevs.
> > 
> > The XDP prog is common for all channels until appropriate changes are
> > added to the XDP infrastructure. Also, once page_pool recycling
> > becomes part of the skb netstack, some simplifications can be added,
> > like removing page_pool_release_page() before skb receive.
> > 
> > In order to keep rx_dev the same while redirecting (which may be of
> > some use in the future), do the flush in the rx_handler; this also
> > conforms with the rx_dev tracing concern pointed out by Jesper.
> 
> So, you simply call xdp_do_flush_map() after each xdp_do_redirect().
> It will kill RX bulking and hurt performance, but I guess it will work.
> 
> I guess we can optimize it later, e.g. by having the function that
> calls cpsw_run_xdp() keep a variable that detects whether the
> net_device (priv->ndev) changed, and then call xdp_do_flush_map()
> only when needed.
I tried something similar on the netsec driver during my initial
development. On 1gbit NICs I saw no difference between flushing per
packet and flushing at the end of the NAPI handler.
The latter is obviously better, but since the performance impact is
negligible on this particular NIC, I don't think this should be a
blocker.
Please add a clear comment on this and on why you do it in this
driver, so people won't go ahead and copy/paste the approach.
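
For reference, a rough sketch of the deferred-flush idea Jesper
describes (untested; cpsw_next_rx_frame() and the CPSW_XDP_REDIRECT
return value are hypothetical, and cpsw_run_xdp() is assumed to have
dropped its unconditional flush):

/* Flush only when the destination net_device changes within a bulk,
 * and once at the end, instead of after every xdp_do_redirect().
 */
static void cpsw_rx_bulk_sketch(struct cpsw_common *cpsw, int ch)
{
	struct net_device *flush_ndev = NULL;
	struct cpsw_priv *priv;
	struct xdp_buff xdp;
	struct page *page;

	/* hypothetical helper: yields the next completed rx frame and
	 * the cpsw_priv (i.e. which of the two netdevs) it belongs to
	 */
	while (cpsw_next_rx_frame(cpsw, ch, &priv, &xdp, &page)) {
		if (cpsw_run_xdp(priv, ch, &xdp, page) != CPSW_XDP_REDIRECT)
			continue;

		/* the frame's netdev changed mid-bulk: flush pending maps */
		if (flush_ndev && flush_ndev != priv->ndev)
			xdp_do_flush_map();
		flush_ndev = priv->ndev;
	}

	/* one final flush for whatever is still queued in the bulk */
	if (flush_ndev)
		xdp_do_flush_map();
}

This keeps the common case (one netdev per bulk) down to a single
xdp_do_flush_map() per NAPI poll, while staying correct when frames
for the two cpsw netdevs interleave within the same bulk.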


Thanks
/Ilias
> 
> 
> > Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@...aro.org>
> > ---
> >  drivers/net/ethernet/ti/Kconfig        |   1 +
> >  drivers/net/ethernet/ti/cpsw.c         | 485 ++++++++++++++++++++++---
> >  drivers/net/ethernet/ti/cpsw_ethtool.c |  66 +++-
> >  drivers/net/ethernet/ti/cpsw_priv.h    |   7 +
> >  4 files changed, 502 insertions(+), 57 deletions(-)
> > 
> [...]
> > +static int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
> > +			struct page *page)
> > +{
> > +	struct cpsw_common *cpsw = priv->cpsw;
> > +	struct net_device *ndev = priv->ndev;
> > +	int ret = CPSW_XDP_CONSUMED;
> > +	struct xdp_frame *xdpf;
> > +	struct bpf_prog *prog;
> > +	u32 act;
> > +
> > +	rcu_read_lock();
> > +
> > +	prog = READ_ONCE(priv->xdp_prog);
> > +	if (!prog) {
> > +		ret = CPSW_XDP_PASS;
> > +		goto out;
> > +	}
> > +
> > +	act = bpf_prog_run_xdp(prog, xdp);
> > +	switch (act) {
> > +	case XDP_PASS:
> > +		ret = CPSW_XDP_PASS;
> > +		break;
> > +	case XDP_TX:
> > +		xdpf = convert_to_xdp_frame(xdp);
> > +		if (unlikely(!xdpf))
> > +			goto drop;
> > +
> > +		cpsw_xdp_tx_frame(priv, xdpf, page);
> > +		break;
> > +	case XDP_REDIRECT:
> > +		if (xdp_do_redirect(ndev, xdp, prog))
> > +			goto drop;
> > +
> > +		/* The flush requires rx_dev to be the same per NAPI
> > +		 * handle, but two devices can put packets on the bulk
> > +		 * queue, so do the flush here just to be safe.
> > +		 */
> > +		xdp_do_flush_map();
> 
> > +		break;
> > +	default:
> > +		bpf_warn_invalid_xdp_action(act);
> > +		/* fall through */
> > +	case XDP_ABORTED:
> > +		trace_xdp_exception(ndev, prog, act);
> > +		/* fall through -- handle aborts by dropping packet */
> > +	case XDP_DROP:
> > +		goto drop;
> > +	}
> > +out:
> > +	rcu_read_unlock();
> > +	return ret;
> > +drop:
> > +	rcu_read_unlock();
> > +	page_pool_recycle_direct(cpsw->page_pool[ch], page);
> > +	return ret;
> > +}
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
