Message-ID: <20250421222827.283737-17-kuba@kernel.org>
Date: Mon, 21 Apr 2025 15:28:21 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: davem@...emloft.net
Cc: netdev@...r.kernel.org,
edumazet@...gle.com,
pabeni@...hat.com,
andrew+netdev@...n.ch,
horms@...nel.org,
donald.hunter@...il.com,
sdf@...ichev.me,
almasrymina@...gle.com,
dw@...idwei.uk,
asml.silence@...il.com,
ap420073@...il.com,
jdamato@...tly.com,
dtatulea@...dia.com,
michael.chan@...adcom.com,
Jakub Kicinski <kuba@...nel.org>
Subject: [RFC net-next 16/22] eth: bnxt: adjust the fill level of agg queues with larger buffers
The driver tries to provision more agg buffers than header buffers
since multiple agg segments can reuse the same header. The calculation
/ heuristic tries to provide enough pages for 65k of data for each header
(or 4 frags per header if the result is too big). This calculation is
currently global to the adapter. If we increase the buffer sizes 8x
we don't want 8x the amount of memory sitting on the rings.
Luckily we don't have to fill the rings completely; adjust
the fill level dynamically in case a particular queue has buffers
larger than the global size.
Signed-off-by: Jakub Kicinski <kuba@...nel.org>
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 23 ++++++++++++++++++++---
1 file changed, 20 insertions(+), 3 deletions(-)
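
For reviewers, a standalone sketch of the scaling this patch applies
(illustrative only; the ring and page sizes below are made-up example
values, not driver defaults, and this is plain userspace C, not driver
code):

/* Sketch of the fill-level scaling: when a queue uses pages N times
 * larger than the global rx_page_size, fill only 1/Nth of the agg
 * ring slots so the bytes posted on the ring stay roughly constant.
 */
#include <stdio.h>

int main(void)
{
	unsigned int rx_agg_ring_size = 2048;	/* global agg ring size (example) */
	unsigned int rx_page_size = 4096;	/* global page size (example) */
	unsigned int queue_page_size = 32768;	/* this queue uses 8x larger pages */
	unsigned int fill_level = rx_agg_ring_size;

	/* same arithmetic as bnxt_rx_agg_ring_fill_level() below */
	if (queue_page_size > rx_page_size)
		fill_level /= queue_page_size / rx_page_size;

	printf("fill %u of %u slots (%u bytes posted)\n",
	       fill_level, rx_agg_ring_size, fill_level * queue_page_size);
	return 0;
}

With these example numbers a queue using 8x larger pages gets a fill
level of 256 instead of 2048, so roughly the same ~8 MiB worth of agg
buffers ends up posted on the ring either way.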
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 28f8a4e0d41b..43497b335329 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3791,6 +3791,21 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
 	}
 }
 
+static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
+				       struct bnxt_rx_ring_info *rxr)
+{
+	/* User may have chosen larger than default rx_page_size,
+	 * we keep the ring sizes uniform and also want uniform amount
+	 * of bytes consumed per ring, so cap how much of the rings we fill.
+	 */
+	int fill_level = bp->rx_agg_ring_size;
+
+	if (rxr->rx_page_size > bp->rx_page_size)
+		fill_level /= rxr->rx_page_size / bp->rx_page_size;
+
+	return fill_level;
+}
+
 static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 				   struct bnxt_rx_ring_info *rxr,
 				   int numa_node)
@@ -3798,7 +3813,7 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 	struct page_pool_params pp = { 0 };
 	struct page_pool *pool;
 
-	pp.pool_size = bp->rx_agg_ring_size;
+	pp.pool_size = bnxt_rx_agg_ring_fill_level(bp, rxr);
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size;
 	pp.order = get_order(rxr->rx_page_size);
@@ -4366,11 +4381,13 @@ static void bnxt_alloc_one_rx_ring_netmem(struct bnxt *bp,
 					   struct bnxt_rx_ring_info *rxr,
 					   int ring_nr)
 {
+	int fill_level, i;
 	u32 prod;
-	int i;
+
+	fill_level = bnxt_rx_agg_ring_fill_level(bp, rxr);
 
 	prod = rxr->rx_agg_prod;
-	for (i = 0; i < bp->rx_agg_ring_size; i++) {
+	for (i = 0; i < fill_level; i++) {
 		if (bnxt_alloc_rx_netmem(bp, rxr, prod, GFP_KERNEL)) {
 			netdev_warn(bp->dev, "init'ed rx ring %d with %d/%d pages only\n",
 				    ring_nr, i, bp->rx_ring_size);
--
2.49.0