Message-ID: <CA+sq2CfoY1aRC2BernvqaMGgTgCByM+yq19-Vak0KJqxEU-5Eg@mail.gmail.com>
Date: Fri, 8 May 2020 13:10:00 +0530
From: Sunil Kovvuri <sunil.kovvuri@...il.com>
To: Kevin Hao <haokexin@...il.com>
Cc: Linux Netdev List <netdev@...r.kernel.org>,
Sunil Goutham <sgoutham@...vell.com>,
Geetha sowjanya <gakula@...vell.com>,
Subbaraya Sundeep <sbhatta@...vell.com>,
hariprasad <hkelam@...vell.com>,
"David S. Miller" <davem@...emloft.net>
Subject: Re: [PATCH] octeontx2-pf: Use the napi_alloc_frag() to alloc the pool buffers
On Fri, May 8, 2020 at 9:43 AM Kevin Hao <haokexin@...il.com> wrote:
>
> In the current code, the octeontx2 driver uses its own method to
> allocate the pool buffers, but there are some issues in this
> implementation.
> 1. We have to run otx2_get_page() for each allocation cycle, which is
>    error prone. As far as I can see, there is no invocation of
>    otx2_get_page() in otx2_pool_refill_task(), so the allocated pages
>    are left with a wrong refcount and may be freed incorrectly.
> 2. It wastes memory. For example, if we receive only one packet in a
>    NAPI RX cycle and then allocate a 2K buffer with otx2_alloc_rbuf()
>    to refill the pool, the remaining area of the allocated page is
>    wasted. With a 64K page size, 62K of the page is wasted.
>
> IMHO it is really unnecessary to implement our own method for buffer
> allocation; we can reuse napi_alloc_frag() to simplify the code.
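
For reference, a minimal sketch of the allocation pattern the patch
adopts (the helper name is hypothetical and the negative-errno return
mirrors the patch's own convention; this is not driver code):
napi_alloc_frag() carves a fragment of exactly the requested size out of
a shared per-CPU page and manages the backing page's refcount
internally, so the caller only has to map the fragment for DMA.

#include <linux/dma-mapping.h>	/* dma_map_single(), dma_mapping_error() */
#include <linux/skbuff.h>	/* napi_alloc_frag(), skb_free_frag() */

/* Hypothetical helper: allocate one rbsize-byte rx buffer from the
 * per-CPU page-frag cache and map it for device writes. Must be called
 * from NAPI (softirq) context. No otx2_get_page()-style refcount
 * bookkeeping is needed, and no page remainder is wasted between refills.
 */
static dma_addr_t example_alloc_rbuf(struct device *dev, unsigned int rbsize)
{
	dma_addr_t iova;
	void *buf;

	buf = napi_alloc_frag(rbsize);
	if (unlikely(!buf))
		return (dma_addr_t)-ENOMEM;

	iova = dma_map_single(dev, buf, rbsize, DMA_FROM_DEVICE);
	if (unlikely(dma_mapping_error(dev, iova))) {
		skb_free_frag(buf);	/* drop the frag's page reference */
		return (dma_addr_t)-ENOMEM;
	}

	return iova;
}
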
>
> Signed-off-by: Kevin Hao <haokexin@...il.com>
> ---
> .../marvell/octeontx2/nic/otx2_common.c | 51 ++++++++-----------
> .../marvell/octeontx2/nic/otx2_common.h | 15 +-----
> .../marvell/octeontx2/nic/otx2_txrx.c | 3 +-
> .../marvell/octeontx2/nic/otx2_txrx.h | 4 --
> 4 files changed, 22 insertions(+), 51 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> index f1d2dea90a8c..15fa1ad57f88 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
> @@ -379,40 +379,32 @@ void otx2_config_irq_coalescing(struct otx2_nic *pfvf, int qidx)
> (pfvf->hw.cq_ecount_wait - 1));
> }
>
> -dma_addr_t otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool,
> - gfp_t gfp)
> +dma_addr_t _otx2_alloc_rbuf(struct otx2_nic *pfvf, struct otx2_pool *pool)
> {
> dma_addr_t iova;
> + u8 *buf;
>
> - /* Check if request can be accommodated in previous allocated page */
> - if (pool->page && ((pool->page_offset + pool->rbsize) <=
> - (PAGE_SIZE << pool->rbpage_order))) {
> - pool->pageref++;
> - goto ret;
> - }
> -
> - otx2_get_page(pool);
> -
> - /* Allocate a new page */
> - pool->page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN,
> - pool->rbpage_order);
> - if (unlikely(!pool->page))
> + buf = napi_alloc_frag(pool->rbsize);
> + if (unlikely(!buf))
> return -ENOMEM;
>
> - pool->page_offset = 0;
> -ret:
> - iova = (u64)otx2_dma_map_page(pfvf, pool->page, pool->page_offset,
> - pool->rbsize, DMA_FROM_DEVICE);
> - if (!iova) {
> - if (!pool->page_offset)
> - __free_pages(pool->page, pool->rbpage_order);
> - pool->page = NULL;
> + iova = dma_map_single(pfvf->dev, buf, pool->rbsize, DMA_FROM_DEVICE);
> + if (unlikely(dma_mapping_error(pfvf->dev, iova)))
> return -ENOMEM;
Use DMA_ATTR_SKIP_CPU_SYNC while mapping the buffer.
Thanks,
Sunil.
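
For context, a sketch of how the suggested DMA_ATTR_SKIP_CPU_SYNC could
be applied to the hunk above (illustrative only, not code posted in the
thread; it assumes the receive path later syncs just the bytes the CPU
actually reads, e.g. with dma_sync_single_for_cpu()):

	/* Map without syncing CPU caches up front; DMA_ATTR_SKIP_CPU_SYNC
	 * defers that cost, on the assumption that the receive path will
	 * dma_sync_single_for_cpu() only the region it actually reads.
	 */
	iova = dma_map_single_attrs(pfvf->dev, buf, pool->rbsize,
				    DMA_FROM_DEVICE, DMA_ATTR_SKIP_CPU_SYNC);
	if (unlikely(dma_mapping_error(pfvf->dev, iova)))
		return -ENOMEM;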