Message-ID: <20200110191954.GA72950@apalos.home>
Date: Fri, 10 Jan 2020 21:19:54 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
davem@...emloft.net
Subject: Re: [PATCH v2 net-next] net: socionext: get rid of huge dma sync in
netsec_alloc_rx_data
On Fri, Jan 10, 2020 at 08:01:56PM +0100, Jesper Dangaard Brouer wrote:
> On Fri, 10 Jan 2020 19:19:40 +0100
> Lorenzo Bianconi <lorenzo.bianconi@...hat.com> wrote:
>
> > > On Fri, 10 Jan 2020 16:34:13 +0100
> > > Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> > >
> > > > > On Fri, Jan 10, 2020 at 02:57:44PM +0100, Lorenzo Bianconi wrote:
> > > > > > The Socionext driver can run on DMA coherent and non-coherent devices.
> > > > > > Get rid of the huge dma_sync_single_for_device() in netsec_alloc_rx_data(),
> > > > > > since the driver can now let the page_pool API manage the needed DMA syncs.
> > > > > >
> > > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > > > > > ---
> > > > > > Changes since v1:
> > > > > > - rely on original frame size for dma sync
> > > > > > ---
> > > > > > drivers/net/ethernet/socionext/netsec.c | 43 +++++++++++++++----------
> > > > > > 1 file changed, 26 insertions(+), 17 deletions(-)
> > > > > >
> > > >
> > > > [...]
> > > >
> > > > > > @@ -883,6 +881,8 @@ static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
> > > > > > static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
> > > > > > struct xdp_buff *xdp)
> > > > > > {
> > > > > > + struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
> > > > > > + unsigned int len = xdp->data_end - xdp->data;
> > > > >
> > > > > We need to account for XDP expanding the headers as well here.
> > > > > So something like max(xdp->data_end(before bpf), xdp->data_end(after bpf)) -
> > > > > xdp->data (original)
> > > >
> > > > correct, the corner case that is not covered at the moment is when data_end is
> > > > moved forward by the bpf program. I will fix it in v3. Thx
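FWIW something along these lines ought to cover it in netsec_run_xdp()
(completely untested sketch; the snapshot variables and 'sync_len' are
made-up names, not from the patch):

	/* Snapshot the original layout before running the prog, then
	 * sync the union of what the NIC wrote and what the prog may
	 * have touched: from the original start of the frame up to
	 * whichever data_end ends up larger. */
	void *orig_data = xdp->data;
	void *orig_data_end = xdp->data_end;
	unsigned int sync_len;
	u32 act;

	act = bpf_prog_run_xdp(prog, xdp);
	sync_len = max_t(unsigned int,
			 xdp->data_end - orig_data,
			 orig_data_end - orig_data);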
> > >
> > > Maybe we can simply do:
> > >
> > > void *data_start = NETSEC_RXBUF_HEADROOM + xdp->data_hard_start;
> > > unsigned int len = xdp->data_end - data_start;
> > >
> >
> > Hi Jesper,
> >
> > please correct me if I am wrong, but this seems to me the same as v2.
>
> No, this is v2, where you do:
> len = xdp->data_end - xdp->data;
>
> Maybe you mean v1? where you calc len like:
> len = xdp->data_end - xdp->data_hard_start;
>
>
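To spell the variants out side by side (just for reference, using the
names from the driver):

	len = xdp->data_end - xdp->data;		/* v2 (this patch) */
	len = xdp->data_end - xdp->data_hard_start;	/* v1 */
	len = xdp->data_end - (xdp->data_hard_start +
			       NETSEC_RXBUF_HEADROOM);	/* suggestion above */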
> > The leftover corner case is if xdp->data_end is moved 'forward' by
> > the bpf program (I guess that is possible, right?). In this case we
> > will not sync the xdp->data_end(new) - xdp->data_end(old) region.
>
> Currently xdp->data_end can only shrink (but I plan to extend it). Yes,
> this corner case is left, but I don't think we need to handle it. When
> a BPF prog shrinks xdp->data_end, then I believe it cannot change the
> shrunk part any longer.
>
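Right, for the tail that holds today; a prog can only shrink there, i.e.
something like this minimal fragment ('trim' is an illustrative value):

	/* bpf_xdp_adjust_tail() currently rejects growing the frame,
	 * so data_end can only move backwards. */
	if (bpf_xdp_adjust_tail(ctx, -(int)trim))
		return XDP_DROP;

The head is another story though: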
What about a bpf prog that adds a vlan header, for example?
Won't that push extra bytes into the memory where the NIC will potentially
write the next packet, once the memory is recycled?
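Rough sketch of the kind of prog I mean (untested; vlan_hdr is defined
locally since it is not in the uapi headers, and the VID is hard-coded
just for illustration):

	#include <linux/types.h>
	#include <linux/bpf.h>
	#include <linux/if_ether.h>
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_endian.h>

	struct vlan_hdr {
		__be16 h_vlan_TCI;
		__be16 h_vlan_encapsulated_proto;
	};

	SEC("xdp")
	int xdp_push_vlan(struct xdp_md *ctx)
	{
		/* Grow the head by 4 bytes; they land in the old headroom,
		 * i.e. memory the NIC did not write this time around. */
		if (bpf_xdp_adjust_head(ctx, -(int)sizeof(struct vlan_hdr)))
			return XDP_DROP;

		void *data = (void *)(long)ctx->data;
		void *data_end = (void *)(long)ctx->data_end;
		struct ethhdr *new_eth = data;
		struct ethhdr *old_eth = data + sizeof(struct vlan_hdr);

		if ((void *)(old_eth + 1) > data_end)
			return XDP_DROP;

		struct ethhdr tmp = *old_eth;	/* stash the original header */
		struct vlan_hdr *vh = (void *)(new_eth + 1);

		*new_eth = tmp;
		new_eth->h_proto = bpf_htons(ETH_P_8021Q);
		vh->h_vlan_TCI = bpf_htons(100);
		vh->h_vlan_encapsulated_proto = tmp.h_proto;

		return XDP_TX;
	}

	char _license[] SEC("license") = "GPL";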
Regards
/Ilias
>
> >
> > > The cache-lines that need to be flushed/synced for_device are the area
> > > used by the NIC DMA engine. We know it will always start at a certain
> > > point (given the driver configured the hardware to do so).
>
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>