Message-ID: <9eba198d-c209-4057-85b6-05d0444d1d37@davidwei.uk>
Date: Tue, 1 Apr 2025 08:20:11 -0700
From: David Wei <dw@...idwei.uk>
To: Taehee Yoo <ap420073@...il.com>, davem@...emloft.net, kuba@...nel.org,
pabeni@...hat.com, edumazet@...gle.com, andrew+netdev@...n.ch,
horms@...nel.org, michael.chan@...adcom.com, pavan.chebbi@...adcom.com,
ilias.apalodimas@...aro.org, netdev@...r.kernel.org
Cc: kuniyu@...zon.com, sdf@...ichev.me, aleksander.lobakin@...el.com
Subject: Re: [RFC net-next 1/2] eth: bnxt: refactor buffer descriptor

On 2025-03-31 04:47, Taehee Yoo wrote:
> There are two kinds of buffer descriptors in bnxt: struct
> bnxt_sw_rx_bd and struct bnxt_sw_rx_agg_bd (plus struct bnxt_tpa_info).
> bnxt_sw_rx_bd is the descriptor for the main RX ring buffer, and
> bnxt_sw_rx_agg_bd is the descriptor for the aggregation ring buffer.
> They serve the same purpose, but their layouts differ slightly.
>
> struct bnxt_sw_rx_bd {
> void *data;
> u8 *data_ptr;
> dma_addr_t mapping;
> };
>
> struct bnxt_sw_rx_agg_bd {
> struct page *page;
> unsigned int offset;
> dma_addr_t mapping;
> };
>
> bnxt_sw_rx_bd->data holds either a page pointer or page_address(page) +
> offset: in page mode (i.e. when XDP is attached), data is a page
> pointer; otherwise it is a virtual address.
> Before Jakub's recent head_pool work, bnxt_sw_rx_bd->data was
> allocated with kmalloc(), but after that work it is allocated from a
> page_pool. So there is no longer any reason to keep handling a
> virtual address.
> The goal of this patch is to make bnxt_sw_rx_bd identical to
> bnxt_sw_rx_agg_bd. With this change we can directly use page_pool
> APIs such as page_pool_dma_sync_for_{cpu,device}(), and it also makes
> the later conversion from struct page to netmem much smoother.
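
For what it's worth, the unified descriptor this implies (a sketch based
on the stated goal above, not copied from the patch itself) would look
just like bnxt_sw_rx_agg_bd:

struct bnxt_sw_rx_bd {
	struct page	*page;		/* page_pool-backed buffer page */
	unsigned int	offset;		/* data offset within the page */
	dma_addr_t	mapping;	/* DMA address of the buffer */
};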
>
> Tested-by: David Wei <dw@...idwei.uk>
> Signed-off-by: Taehee Yoo <ap420073@...il.com>
> ---
> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 313 +++++++++---------
> drivers/net/ethernet/broadcom/bnxt/bnxt.h | 31 +-
> .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c | 2 +-
> drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c | 23 +-
> drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h | 2 +-
> include/net/page_pool/types.h | 4 +-
> net/core/page_pool.c | 23 +-
> 7 files changed, 199 insertions(+), 199 deletions(-)
>
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 934ba9425857..198a42da3015 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
...
> @@ -1111,25 +1103,24 @@ static void bnxt_reuse_rx_agg_bufs(struct bnxt_cp_ring_info *cpr, u16 idx,
>
> static struct sk_buff *bnxt_rx_multi_page_skb(struct bnxt *bp,
> struct bnxt_rx_ring_info *rxr,
> - u16 cons, void *data, u8 *data_ptr,
> + u16 cons, struct page *page,
> + unsigned int offset,
> dma_addr_t dma_addr,
> unsigned int offset_and_len)
> {
> unsigned int len = offset_and_len & 0xffff;
> - struct page *page = data;
> u16 prod = rxr->rx_prod;
> struct sk_buff *skb;
> int err;
>
> err = bnxt_alloc_rx_data(bp, rxr, prod, GFP_ATOMIC);
> if (unlikely(err)) {
> - bnxt_reuse_rx_data(rxr, cons, data);
> + bnxt_reuse_rx_data(rxr, cons, page);
> return NULL;
> }
> - dma_addr -= bp->rx_dma_offset;
> - dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, BNXT_RX_PAGE_SIZE,
> - bp->rx_dir);
> - skb = napi_build_skb(data_ptr - bp->rx_offset, BNXT_RX_PAGE_SIZE);
> + page_pool_dma_sync_for_cpu(rxr->page_pool, page, 0, BNXT_RX_PAGE_SIZE);

bnxt_alloc_rx_data() can allocate from either head_pool or page_pool,
but this call always syncs against rxr->page_pool. As far as the sync
is concerned, the only thing that could differ between them is
pool->p.offset; can it ever actually be different between the two
pools?
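
For reference, page_pool_dma_sync_for_cpu() boils down to roughly the
following (paraphrased from include/net/page_pool/helpers.h; the exact
body may differ in this tree):

static inline void page_pool_dma_sync_for_cpu(const struct page_pool *pool,
					      const struct page *page,
					      u32 offset, u32 dma_sync_size)
{
	/* Both the base offset and the direction come from the pool
	 * the caller passes in, not from the pool the page was
	 * actually allocated from.
	 */
	dma_sync_single_range_for_cpu(pool->p.dev,
				      page_pool_get_dma_addr(page),
				      offset + pool->p.offset, dma_sync_size,
				      page_pool_get_dma_dir(pool));
}

So if head_pool and rxr->page_pool were ever created with a different
p.offset (or dma_dir), syncing a head_pool page through rxr->page_pool
would sync the wrong range.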