Message-ID: <20260107000648.1861994-13-anthony.l.nguyen@intel.com>
Date: Tue, 6 Jan 2026 16:06:44 -0800
From: Tony Nguyen <anthony.l.nguyen@...el.com>
To: davem@...emloft.net,
kuba@...nel.org,
pabeni@...hat.com,
edumazet@...gle.com,
andrew+netdev@...n.ch,
netdev@...r.kernel.org
Cc: Joshua Hay <joshua.a.hay@...el.com>,
anthony.l.nguyen@...el.com,
Alexander Lobakin <aleksander.lobakin@...el.com>,
Madhu Chittim <madhu.chittim@...el.com>,
Jacob Keller <jacob.e.keller@...el.com>,
Aleksandr Loktionov <aleksandr.loktionov@...el.com>,
David Decotigny <ddecotig@...gle.com>
Subject: [PATCH net 12/13] idpf: cap maximum Rx buffer size
From: Joshua Hay <joshua.a.hay@...el.com>
The HW only supports a maximum Rx buffer size of 16K - 128 bytes. On
systems using large pages, the libeth logic can configure the buffer
size to be larger than this: the upper bound is PAGE_SIZE, while the
lower bound is the MTU rounded up to the nearest power of 2. For
example, an ARM system with a 64K page size and an MTU of 9000 will
round the Rx buffer size up to 16K, which exceeds the supported
maximum and causes the config Rx queues message to fail.
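As a rough sketch of that sizing (illustrative only, not the actual
libeth code; rx_buf_len_sketch() is a hypothetical helper):

  /* Buffer length is the MTU rounded up to a power of 2,
   * bounded above only by PAGE_SIZE.
   */
  static u32 rx_buf_len_sketch(u32 mtu)
  {
  	return min_t(u32, roundup_pow_of_two(mtu), PAGE_SIZE);
  }

  /* With a 64K PAGE_SIZE and mtu == 9000:
   * roundup_pow_of_two(9000) == 16384, which is larger than
   * 16384 - 128, so the HW rejects the configuration.
   */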
Initialize the bufq/fill queue buf_len field to the maximum supported
size. This triggers the libeth logic to cap the Rx buffer size by
reducing the upper bound.
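A hedged sketch of the intended effect (assumed names, not the actual
libeth implementation):

  /* A non-zero caller-supplied buf_len lowers the upper bound
   * before the power-of-2 rounding is applied.
   */
  static u32 rx_buf_len_capped_sketch(u32 mtu, u32 buf_len)
  {
  	u32 upper = min_t(u32, PAGE_SIZE, buf_len);

  	return min_t(u32, roundup_pow_of_two(mtu), upper);
  }

  /* With buf_len == IDPF_RX_MAX_BUF_SZ (16384 - 128), the same
   * 64K-page, MTU 9000 case now yields 16256, which the HW
   * accepts.
   */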
Fixes: 74d1412ac8f37 ("idpf: use libeth Rx buffer management for payload buffer")
Signed-off-by: Joshua Hay <joshua.a.hay@...el.com>
Acked-by: Alexander Lobakin <aleksander.lobakin@...el.com>
Reviewed-by: Madhu Chittim <madhu.chittim@...el.com>
Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@...el.com>
Reviewed-by: David Decotigny <ddecotig@...gle.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@...el.com>
---
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 8 +++++---
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 1 +
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
index f51d52297e1e..7f3933ca9edc 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -695,9 +695,10 @@ static int idpf_rx_buf_alloc_singleq(struct idpf_rx_queue *rxq)
static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
{
struct libeth_fq fq = {
- .count = rxq->desc_count,
- .type = LIBETH_FQE_MTU,
- .nid = idpf_q_vector_to_mem(rxq->q_vector),
+ .count = rxq->desc_count,
+ .type = LIBETH_FQE_MTU,
+ .buf_len = IDPF_RX_MAX_BUF_SZ,
+ .nid = idpf_q_vector_to_mem(rxq->q_vector),
};
int ret;
@@ -754,6 +755,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
.truesize = bufq->truesize,
.count = bufq->desc_count,
.type = type,
+ .buf_len = IDPF_RX_MAX_BUF_SZ,
.hsplit = idpf_queue_has(HSPLIT_EN, bufq),
.xdp = idpf_xdp_enabled(bufq->q_vector->vport),
.nid = idpf_q_vector_to_mem(bufq->q_vector),
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
index 0472698ca192..423cc9486dce 100644
--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -101,6 +101,7 @@ do { \
idx = 0; \
} while (0)
+#define IDPF_RX_MAX_BUF_SZ (16384 - 128)
#define IDPF_RX_BUF_STRIDE 32
#define IDPF_RX_BUF_POST_STRIDE 16
#define IDPF_LOW_WATERMARK 64
--
2.47.1