Message-ID: <20200109172956.GB2626@localhost.localdomain>
Date: Thu, 9 Jan 2020 18:29:56 +0100
From: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
davem@...emloft.net
Subject: Re: [PATCH] net: socionext: get rid of huge dma sync in
netsec_alloc_rx_data
> On Wed, 8 Jan 2020 16:53:22 +0200
> Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
>
> > Hi Lorenzo,
> >
> > On Tue, Jan 07, 2020 at 04:30:32PM +0100, Lorenzo Bianconi wrote:
Hi Jesper and Ilias,
thanks for the review :)
> > > The socionext driver can run on both DMA coherent and non-coherent devices.
> > > Get rid of the huge dma_sync_single_for_device() in netsec_alloc_rx_data(),
> > > since the driver can now let the page_pool API manage the needed DMA syncs.
> > >
> > > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > > ---
> > > drivers/net/ethernet/socionext/netsec.c | 45 +++++++++++++++----------
> > > 1 file changed, 28 insertions(+), 17 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
> > > index b5a9e947a4a8..00404fef17e8 100644
> > > --- a/drivers/net/ethernet/socionext/netsec.c
> > > +++ b/drivers/net/ethernet/socionext/netsec.c
>
> [...]
> > > @@ -734,9 +734,7 @@ static void *netsec_alloc_rx_data(struct netsec_priv *priv,
> > > /* Make sure the incoming payload fits in the page for XDP and non-XDP
> > > * cases and reserve enough space for headroom + skb_shared_info
> > > */
> > > - *desc_len = PAGE_SIZE - NETSEC_RX_BUF_NON_DATA;
> > > - dma_dir = page_pool_get_dma_dir(dring->page_pool);
> > > - dma_sync_single_for_device(priv->dev, *dma_handle, *desc_len, dma_dir);
> > > + *desc_len = NETSEC_RX_BUF_SIZE;
> > >
> > > return page_address(page);
> > > }
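For context, the per-allocation sync can go away because page_pool can be
created with PP_FLAG_DMA_SYNC_DEV, so it syncs at most max_len bytes (starting
at offset) only when a page is recycled. A rough sketch of such a setup; the
field values are illustrative, not necessarily what netsec ends up using:

```c
/* Sketch only: page_pool configured to handle device DMA syncs itself.
 * With PP_FLAG_DMA_SYNC_DEV set, the pool syncs up to .max_len bytes
 * starting at .offset, instead of the driver syncing the whole page.
 */
struct page_pool_params pp_params = {
	.order		= 0,
	.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
	.pool_size	= DESC_NUM,
	.nid		= NUMA_NO_NODE,
	.dev		= priv->dev,
	.dma_dir	= xdp_attached ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
	.offset		= NETSEC_RXBUF_HEADROOM,
	.max_len	= NETSEC_RX_BUF_SIZE,
};
```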
> > > @@ -883,6 +881,7 @@ static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
> > > static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
> > > struct xdp_buff *xdp)
> > > {
> > > + struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
> > > u32 ret = NETSEC_XDP_PASS;
> > > int err;
> > > u32 act;
> > > @@ -896,7 +895,10 @@ static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
> > > case XDP_TX:
> > > ret = netsec_xdp_xmit_back(priv, xdp);
> > > if (ret != NETSEC_XDP_TX)
> > > - xdp_return_buff(xdp);
> > > + __page_pool_put_page(dring->page_pool,
> > > + virt_to_head_page(xdp->data),
> > > + xdp->data_end - xdp->data_hard_start,
> >
> > Do we have to include data_hard_start?
>
> That does look wrong.
ack, will fix it in v2
>
> > @Jesper i know bpf programs can modify the packet, but isn't it safe
> > to only sync for xdp->data_end - xdp->data in this case since the DMA transfer
> > in this driver will always start *after* the XDP headroom?
>
> I agree.
>
> For performance it is actually important that we avoid "cache-flushing"
> the headroom (which is what happens on these non-coherent devices), as
> the headroom is used for e.g. storing the xdp_frame.
IIRC on mvneta there is the same issue. I will post a patch to fix it.
Regards,
Lorenzo
>
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>