Date:   Thu, 9 Jan 2020 18:20:38 +0100
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Ilias Apalodimas <ilias.apalodimas@...aro.org>
Cc:     Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
        davem@...emloft.net, lorenzo.bianconi@...hat.com, brouer@...hat.com
Subject: Re: [PATCH] net: socionext: get rid of huge dma sync in
 netsec_alloc_rx_data

On Wed, 8 Jan 2020 16:53:22 +0200
Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:

> Hi Lorenzo, 
> 
> On Tue, Jan 07, 2020 at 04:30:32PM +0100, Lorenzo Bianconi wrote:
> > The Socionext driver can run on both DMA coherent and non-coherent
> > devices. Get rid of the huge dma_sync_single_for_device() in
> > netsec_alloc_rx_data, since the driver can now let the page_pool API
> > manage the needed DMA sync.
> > 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > ---
> >  drivers/net/ethernet/socionext/netsec.c | 45 +++++++++++++++----------
> >  1 file changed, 28 insertions(+), 17 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
> > index b5a9e947a4a8..00404fef17e8 100644
> > --- a/drivers/net/ethernet/socionext/netsec.c
> > +++ b/drivers/net/ethernet/socionext/netsec.c

[...]
> > @@ -734,9 +734,7 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
> >  	/* Make sure the incoming payload fits in the page for XDP and non-XDP
> >  	 * cases and reserve enough space for headroom + skb_shared_info
> >  	 */
> > -	*desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
> > -	dma_dir = page_pool_get_dma_dir(dring->page_pool);
> > -	dma_sync_single_for_device(priv->dev, *dma_handle, *desc_len, dma_dir);
> > +	*desc_len = NETSEC_RX_BUF_SIZE;
> >  
> >  	return page_address(page);
> >  }
> > @@ -883,6 +881,7 @@ static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
> >  static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
> >  			  struct xdp_buff *xdp)
> >  {
> > +	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
> >  	u32 ret = NETSEC_XDP_PASS;
> >  	int err;
> >  	u32 act;
> > @@ -896,7 +895,10 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
> >  	case XDP_TX:
> >  		ret = netsec_xdp_xmit_back(priv, xdp);
> >  		if (ret != NETSEC_XDP_TX)
> > -			xdp_return_buff(xdp);
> > +			__page_pool_put_page(dring->page_pool,
> > +				     virt_to_head_page(xdp->data),
> > +				     xdp->data_end - xdp->data_hard_start,  
> 
> Do we have to include data_hard_start?

That does look wrong.

> @Jesper i know bpf programs can modify the packet, but isn't it safe
> to only sync for xdp->data_end - xdp->data in this case since the DMA transfer
> in this driver will always start *after* the XDP headroom?

I agree.

For performance it is actually important that we avoid "cache-flushing"
the headroom (which is what happens on these non-coherent devices), as
the headroom is used for e.g. storing the xdp_frame.


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
