Message-ID: <f839323a-4946-422b-a72a-c2efd71b2f42@intel.com>
Date: Mon, 3 Nov 2025 15:01:15 -0800
From: Jacob Keller <jacob.e.keller@...el.com>
To: Joshua Hay <joshua.a.hay@...el.com>, <intel-wired-lan@...ts.osuosl.org>
CC: <netdev@...r.kernel.org>, Alexander Lobakin
<aleksander.lobakin@...el.com>, Madhu Chittim <madhu.chittim@...el.com>
Subject: Re: [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer
size
On 11/3/2025 1:20 PM, Joshua Hay wrote:
> The HW only supports a maximum Rx buffer size of 16K-128. On systems
> using large pages, the libeth logic can configure the buffer size to be
> larger than this. The upper bound is PAGE_SIZE while the lower bound is
> MTU rounded up to the nearest power of 2. For example, ARM systems with
> a 64K page size and an MTU of 9000 will set the Rx buffer size to 16K,
> which will cause the config Rx queues message to fail.
>
> Initialize the bufq/fill queue buf_len field to the maximum supported
> size. This will trigger the libeth logic to cap the maximum Rx buffer
> size by reducing the upper bound.
>
> Fixes: 74d1412ac8f37 ("idpf: use libeth Rx buffer management for payload buffer")
> Signed-off-by: Joshua Hay <joshua.a.hay@...el.com>
> Acked-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@...el.com>
> ---
Reviewed-by: Jacob Keller <jacob.e.keller@...el.com>
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 8 +++++---
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 1 +
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 828f7c444d30..dcdd4fef1c7a 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -695,9 +695,10 @@ static int idpf_rx_buf_alloc_singleq(struct idpf_rx_queue *rxq)
> static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
> {
> struct libeth_fq fq = {
> - .count = rxq->desc_count,
> - .type = LIBETH_FQE_MTU,
> - .nid = idpf_q_vector_to_mem(rxq->q_vector),
> + .count = rxq->desc_count,
> + .type = LIBETH_FQE_MTU,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> + .nid = idpf_q_vector_to_mem(rxq->q_vector),
> };
> int ret;
>
> @@ -754,6 +755,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
> .truesize = bufq->truesize,
> .count = bufq->desc_count,
> .type = type,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> .hsplit = idpf_queue_has(HSPLIT_EN, bufq),
> .xdp = idpf_xdp_enabled(bufq->q_vector->vport),
> .nid = idpf_q_vector_to_mem(bufq->q_vector),
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> index 75b977094741..a1255099656f 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> @@ -101,6 +101,7 @@ do { \
> idx = 0; \
> } while (0)
>
> +#define IDPF_RX_MAX_BUF_SZ (16384 - 128)
> #define IDPF_RX_BUF_STRIDE 32
> #define IDPF_RX_BUF_POST_STRIDE 16
> #define IDPF_LOW_WATERMARK 64