Message-ID: <521bf1f1-4a22-4afc-b101-ac960781b911@davidwei.uk>
Date: Mon, 11 Aug 2025 13:19:44 -0700
From: David Wei <dw@...idwei.uk>
To: Michael Chan <michael.chan@...adcom.com>
Cc: netdev@...r.kernel.org, Pavan Chebbi <pavan.chebbi@...adcom.com>,
Andrew Lunn <andrew+netdev@...n.ch>, "David S. Miller"
<davem@...emloft.net>, Eric Dumazet <edumazet@...gle.com>,
Jakub Kicinski <kuba@...nel.org>, Paolo Abeni <pabeni@...hat.com>
Subject: Re: [PATCH net-next] bnxt: fill data page pool with frags if
PAGE_SIZE > BNXT_RX_PAGE_SIZE
On 2025-08-11 11:08, Michael Chan wrote:
> On Mon, Aug 11, 2025 at 10:43 AM David Wei <dw@...idwei.uk> wrote:
>>
>> The data page pool always fills the HW rx ring with whole pages, even
>> though each ring entry only needs BNXT_RX_PAGE_SIZE bytes. On arm64
>> with 64K pages, this wastes _at least_ 32K of memory per rx ring
>> entry.
>>
>> Fix by fragmenting the pages if PAGE_SIZE > BNXT_RX_PAGE_SIZE, which
>> makes the data page pool behave the same as the header page pool.
>>
>> Tested with iperf3 using a small (64-entry) rx ring to encourage
>> buffer circulation.
>
> This was a regression when adding devmem support. Prior to that,
> __bnxt_alloc_rx_page() would handle this properly. Should we add a
> Fixes tag?
Sounds good, how about this?
Fixes: cd1fafe7da1f ("eth: bnxt: add support rx side device memory TCP")
>
> The patch looks good to me. Thanks.
> Reviewed-by: Michael Chan <michael.chan@...adcom.com>
>
>>
>> Signed-off-by: David Wei <dw@...idwei.uk>
>> ---
>> drivers/net/ethernet/broadcom/bnxt/bnxt.c | 12 +++++++++---
>> 1 file changed, 9 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> index 5578ddcb465d..9d7631ce860f 100644
>> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
>> @@ -926,15 +926,21 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
>>
>> static netmem_ref __bnxt_alloc_rx_netmem(struct bnxt *bp, dma_addr_t *mapping,
>> struct bnxt_rx_ring_info *rxr,
>> + unsigned int *offset,
>> gfp_t gfp)
>> {
>> netmem_ref netmem;
>>
>> - netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
>> + if (PAGE_SIZE > BNXT_RX_PAGE_SIZE) {
>> + netmem = page_pool_alloc_frag_netmem(rxr->page_pool, offset, BNXT_RX_PAGE_SIZE, gfp);
>> + } else {
>> + netmem = page_pool_alloc_netmems(rxr->page_pool, gfp);
>> + *offset = 0;
>> + }
>> if (!netmem)
>> return 0;
>>
>> - *mapping = page_pool_get_dma_addr_netmem(netmem);
>> + *mapping = page_pool_get_dma_addr_netmem(netmem) + *offset;
>> return netmem;
>> }
>>
>> @@ -1029,7 +1035,7 @@ static int bnxt_alloc_rx_netmem(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
>> dma_addr_t mapping;
>> netmem_ref netmem;
>>
>> - netmem = __bnxt_alloc_rx_netmem(bp, &mapping, rxr, gfp);
>> + netmem = __bnxt_alloc_rx_netmem(bp, &mapping, rxr, &offset, gfp);
>> if (!netmem)
>> return -ENOMEM;
>>
>> --
>> 2.47.3
>>