Message-ID: <20191114181850.GA42770@PC192.168.49.172>
Date: Thu, 14 Nov 2019 20:18:50 +0200
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: Jonathan Lemon <jlemon@...gsvamp.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
lorenzo.bianconi@...hat.com, davem@...emloft.net,
thomas.petazzoni@...tlin.com, brouer@...hat.com,
matteo.croce@...hat.com
Subject: Re: [PATCH net-next 3/3] net: mvneta: get rid of huge DMA sync in mvneta_rx_refill

Hi Jonathan,
On Thu, Nov 14, 2019 at 10:14:19AM -0800, Jonathan Lemon wrote:
> On 10 Nov 2019, at 4:09, Lorenzo Bianconi wrote:
>
> > Get rid of costly dma_sync_single_for_device in mvneta_rx_refill
> > since now the driver can let page_pool API to manage needed DMA
> > sync with a proper size.
> >
> > - XDP_DROP DMA sync managed by mvneta driver: ~420Kpps
> > - XDP_DROP DMA sync managed by page_pool API: ~595Kpps
> >
> > Tested-by: Matteo Croce <mcroce@...hat.com>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > ---
> > drivers/net/ethernet/marvell/mvneta.c | 25 +++++++++++++++----------
> > 1 file changed, 15 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index ed93eecb7485..591d580c68b4 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -1846,7 +1846,6 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  			    struct mvneta_rx_queue *rxq,
> >  			    gfp_t gfp_mask)
> >  {
> > -	enum dma_data_direction dma_dir;
> >  	dma_addr_t phys_addr;
> >  	struct page *page;
> >
> > @@ -1856,9 +1855,6 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  		return -ENOMEM;
> >
> >  	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
> > -	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > -	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
> > -				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
> >  	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
> >
> >  	return 0;
> > @@ -2097,8 +2093,10 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> >  		err = xdp_do_redirect(pp->dev, xdp, prog);
> >  		if (err) {
> >  			ret = MVNETA_XDP_DROPPED;
> > -			page_pool_recycle_direct(rxq->page_pool,
> > -						 virt_to_head_page(xdp->data));
> > +			__page_pool_put_page(rxq->page_pool,
> > +					     virt_to_head_page(xdp->data),
> > +					     xdp->data_end - xdp->data_hard_start,
> > +					     true);
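
For context: the "proper size" sync in the commit message works because the
pool is now created with PP_FLAG_DMA_SYNC_DEV plus an offset/max_len pair, so
page_pool itself syncs only the region the device can touch whenever a page is
returned to it. The setup looks roughly like this (a sketch only; size and
xdp_prog stand in for the ring size and the attached program, the exact fields
are in the pool-setup change earlier in this series):

	struct page_pool_params pp_params = {
		.order = 0,
		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.pool_size = size,
		.nid = NUMA_NO_NODE,
		.dev = pp->dev->dev.parent,
		/* bidirectional if an XDP program may write the buffers */
		.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
		.offset = pp->rx_offset_correction,
		/* upper bound on what page_pool will sync for the device */
		.max_len = MVNETA_MAX_RX_BUF_SIZE,
	};

	rxq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rxq->page_pool))
		return PTR_ERR(rxq->page_pool);
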
>
> I just have a clarifying question. Here, the RX buffer was received and
> then dma_sync'd to the CPU. Now, it is going to be recycled for RX again;
> does it actually need to be sync'd back to the device?
>
> I'm asking since several of the other network drivers (Mellanox, for
> example) don't resync the buffer back to the device when recycling it
> for reuse.
I think that if no one apart from the NIC touches the memory, you don't have
any pending cache writes you have to account for.
So since the buffer is completely under the driver's control, as long as you
can guarantee no one is going to write to it, you can hand it back to the
device without syncing. (The BPF use case, where the bpf program changes the
packet, is an example that breaks this rule.)
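
To make that concrete, here is the recycle call from the hunk above again,
annotated; the comments are mine, only the call itself is from the patch:

	/* An XDP program may have rewritten anything from the start of the
	 * buffer (headroom included, since bpf_xdp_adjust_head() can pull
	 * data into it) up to xdp->data_end, so exactly those bytes get
	 * synced back before the device reuses the page for RX.
	 */
	__page_pool_put_page(rxq->page_pool,
			     virt_to_head_page(xdp->data),
			     /* dma_sync_size: only what the CPU may have
			      * dirtied, not the full MVNETA_MAX_RX_BUF_SIZE
			      */
			     xdp->data_end - xdp->data_hard_start,
			     /* allow_direct: we are in NAPI context */
			     true);
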
Thanks
/Ilias
> --
> Jonathan