Message-ID: <20200113103917.GA115887@apalos.home>
Date: Mon, 13 Jan 2020 12:39:17 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Lorenzo Bianconi <lorenzo@...nel.org>
Cc: Jesper Dangaard Brouer <brouer@...hat.com>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH v2 net-next] net: socionext: get rid of huge dma sync in
netsec_alloc_rx_data
On Fri, Jan 10, 2020 at 08:36:51PM +0100, Lorenzo Bianconi wrote:
> > On Fri, 10 Jan 2020 19:19:40 +0100
> > Lorenzo Bianconi <lorenzo.bianconi@...hat.com> wrote:
> >
> > > > On Fri, 10 Jan 2020 16:34:13 +0100
> > > > Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> > > >
> > > > > > On Fri, Jan 10, 2020 at 02:57:44PM +0100, Lorenzo Bianconi wrote:
> > > > > > > The Socionext driver can run on DMA coherent and non-coherent devices.
> > > > > > > Get rid of the huge dma_sync_single_for_device() in netsec_alloc_rx_data()
> > > > > > > since the driver can now let the page_pool API manage the needed DMA sync.
> > > > > > >
> > > > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > > > > > > ---
> > > > > > > Changes since v1:
> > > > > > > - rely on original frame size for dma sync
> > > > > > > ---
> > > > > > > drivers/net/ethernet/socionext/netsec.c | 43 +++++++++++++++----------
> > > > > > > 1 file changed, 26 insertions(+), 17 deletions(-)
> > > > > > >
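> > > > > > >
> > > > > > > (For reference, letting page_pool manage the sync amounts to a
> > > > > > > setup roughly like the following -- a sketch only, the exact
> > > > > > > field values here are assumed rather than taken from the patch:
> > > > > > >
> > > > > > >	struct page_pool_params pp_params = {
> > > > > > >		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> > > > > > >		.pool_size	= DESC_NUM,
> > > > > > >		.dev		= priv->dev,
> > > > > > >		.dma_dir	= xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
> > > > > > >		.offset		= NETSEC_RXBUF_HEADROOM,
> > > > > > >		.max_len	= NETSEC_RX_BUF_SIZE,
> > > > > > >	};
> > > > > > >
> > > > > > > With PP_FLAG_DMA_SYNC_DEV set, page_pool syncs at most max_len
> > > > > > > bytes starting at offset when a page is recycled to the pool.)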
> > > > >
> > > > > [...]
> > > > >
> > > > > > > @@ -883,6 +881,8 @@ static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
> > > > > > > static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
> > > > > > > struct xdp_buff *xdp)
> > > > > > > {
> > > > > > > + struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
> > > > > > > + unsigned int len = xdp->data_end - xdp->data;
> > > > > >
> > > > > > We need to account for XDP expanding the headers as well here.
> > > > > > So something like max(xdp->data_end(before bpf), xdp->data_end(after bpf)) -
> > > > > > xdp->data (original)
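> > > > > >
> > > > > > Untested sketch of that idea (the orig_* names are made up here),
> > > > > > saving the pointers before the program runs:
> > > > > >
> > > > > >	void *orig_data = xdp->data;
> > > > > >	void *orig_data_end = xdp->data_end;
> > > > > >	u32 act = bpf_prog_run_xdp(prog, xdp);
> > > > > >	/* cover the frame even if the program grew data_end */
> > > > > >	unsigned int len = max(orig_data_end, xdp->data_end) - orig_data;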
> > > > >
> > > > > correct, the corner case that is not covered at the moment is when data_end is
> > > > > moved forward by the bpf program. I will fix it in v3. Thx
> > > >
> > > > Maybe we can simplify and do:
> > > >
> > > > void *data_start = NETSEC_RXBUF_HEADROOM + xdp->data_hard_start;
> > > > unsigned int len = xdp->data_end - data_start;
> > > >
> > >
> > > Hi Jesper,
> > >
> > > please correct me if I am wrong, but this seems to me the same as v2.
> >
> > No, this is v2, where you do:
> > len = xdp->data_end - xdp->data;
>
> I mean in the solution you proposed you set (before running the bpf program):
>
> len = xdp->data_end - data_start
> where:
> data_start = NETSEC_RXBUF_HEADROOM + xdp->data_hard_start
>
> that is equivalent to what I did in v2 (before running the bpf program):
> len = xdp->data_end - xdp->data
>
> since:
> xdp->data = xdp->data_hard_start + NETSEC_RXBUF_HEADROOM
> (set in netsec_process_rx())
>
> Am I missing something?
>
> >
> > Maybe you mean v1? where you calc len like:
> > len = xdp->data_end - xdp->data_hard_start;
> >
> >
> > > The leftover corner case is if xdp->data_end is moved 'forward' by
> > > the bpf program (I guess it is possible, right?). In this case we
> > > will not sync xdp->data_end(new) - xdp->data_end(old)
> >
> > Currently xdp->data_end can only shrink (but I plan to extend it). Yes,
> > this corner case is left, but I don't think we need to handle it. When
> > a BPF prog shrinks xdp->data_end, I believe it cannot touch the shrunk
> > part any longer.
> >
Ok, I thought it could expand as well.
If that's the case, the current patchset is ok.
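In other words (just a sketch, not the exact patch code), computing len
before the program runs already bounds everything the sync has to cover:

	unsigned int len = xdp->data_end - xdp->data;
	u32 act = bpf_prog_run_xdp(prog, xdp);
	/* data_end can only shrink, so len still covers anything the
	 * program may have written into the frame */

and that len can be handed to the dma sync as-is.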
>
> ack, fine to me.
>
> Regards,
> Lorenzo
>
> >
> > >
> > > > The cache-lines that need to be flushed/synced for_device are the area
> > > > used by the NIC DMA engine. We know it will always start at a certain
> > > > point (given the driver configured the hardware this way).
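> > > >
> > > > Inside page_pool that boils down to roughly (a sketch of the idea,
> > > > not a verbatim copy of the helper):
> > > >
> > > >	dma_sync_single_range_for_device(pool->p.dev, dma_addr,
> > > >					 pool->p.offset, dma_sync_size,
> > > >					 pool->p.dma_dir);
> > > >
> > > > so the sync always starts at pool->p.offset (the headroom) and only
> > > > the length varies per frame.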
> >
> >
> > --
> > Best regards,
> > Jesper Dangaard Brouer
> > MSc.CS, Principal Kernel Engineer at Red Hat
> > LinkedIn: http://www.linkedin.com/in/brouer
> >
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@...aro.org>