Message-ID: <IA3PR11MB8986D09A5D53F732819D68B5E5C5A@IA3PR11MB8986.namprd11.prod.outlook.com>
Date: Wed, 5 Nov 2025 07:26:15 +0000
From: "Loktionov, Aleksandr" <aleksandr.loktionov@...el.com>
To: "Hay, Joshua A" <joshua.a.hay@...el.com>,
"intel-wired-lan@...ts.osuosl.org" <intel-wired-lan@...ts.osuosl.org>
CC: "netdev@...r.kernel.org" <netdev@...r.kernel.org>, "Hay, Joshua A"
<joshua.a.hay@...el.com>, "Lobakin, Aleksander"
<aleksander.lobakin@...el.com>, "Chittim, Madhu" <madhu.chittim@...el.com>
Subject: RE: [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size
> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@...osl.org> On Behalf
> Of Joshua Hay
> Sent: Monday, November 3, 2025 10:21 PM
> To: intel-wired-lan@...ts.osuosl.org
> Cc: netdev@...r.kernel.org; Hay, Joshua A <joshua.a.hay@...el.com>;
> Lobakin, Aleksander <aleksander.lobakin@...el.com>; Chittim, Madhu
> <madhu.chittim@...el.com>
> Subject: [Intel-wired-lan] [PATCH iwl-net] idpf: cap maximum Rx buffer size
>
> The HW only supports a maximum Rx buffer size of 16K-128. On systems
> using large pages, the libeth logic can configure the buffer size to
> be larger than this. The upper bound is PAGE_SIZE while the lower
> bound is MTU rounded up to the nearest power of 2. For example, ARM
> systems with a 64K page size and an mtu of 9000 will set the Rx buffer
> size to 16K, which will cause the config Rx queues message to fail.
>
> Initialize the bufq/fill queue buf_len field to the maximum supported
> size. This will trigger the libeth logic to cap the maximum Rx buffer
> size by reducing the upper bound.
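
Just to double-check the arithmetic for myself, below is a rough userspace
sketch of the sizing behaviour described above. The rounding/clamping and the
names (compute_rx_buf_len(), roundup_pow2(), EXAMPLE_PAGE_SIZE) are my own
paraphrase for illustration only, not the actual libeth code:

#include <stdio.h>

/* Illustrative values only; PAGE_SIZE is 64K on the ARM configs mentioned. */
#define EXAMPLE_PAGE_SIZE	(64 * 1024)
#define EXAMPLE_RX_MAX_BUF_SZ	(16384 - 128)	/* 16K - 128, the HW limit */

/* Round v up to the next power of two (paraphrasing the commit message). */
static unsigned int roundup_pow2(unsigned int v)
{
	unsigned int r = 1;

	while (r < v)
		r <<= 1;
	return r;
}

/*
 * Hypothetical helper mirroring the behaviour described above:
 * lower bound = MTU rounded up to a power of two,
 * upper bound = PAGE_SIZE, optionally capped by a caller-supplied maximum.
 */
static unsigned int compute_rx_buf_len(unsigned int mtu, unsigned int cap)
{
	unsigned int upper = EXAMPLE_PAGE_SIZE;
	unsigned int lower = roundup_pow2(mtu);

	if (cap && cap < upper)
		upper = cap;

	return lower < upper ? lower : upper;
}

int main(void)
{
	/* Uncapped: MTU 9000 rounds up to 16384, above the 16K - 128 limit. */
	printf("uncapped: %u\n", compute_rx_buf_len(9000, 0));
	/* Capped via buf_len: result stays at 16256, within the HW limit. */
	printf("capped:   %u\n", compute_rx_buf_len(9000, EXAMPLE_RX_MAX_BUF_SZ));
	return 0;
}

With MTU 9000 the uncapped result is 16384, which exceeds 16K - 128, while
the capped one lands at 16256, which matches what the patch intends.
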
>
> Fixes: 74d1412ac8f37 ("idpf: use libeth Rx buffer management for payload buffer")
> Signed-off-by: Joshua Hay <joshua.a.hay@...el.com>
> Acked-by: Alexander Lobakin <aleksander.lobakin@...el.com>
> Reviewed-by: Madhu Chittim <madhu.chittim@...el.com>
> ---
> drivers/net/ethernet/intel/idpf/idpf_txrx.c | 8 +++++---
> drivers/net/ethernet/intel/idpf/idpf_txrx.h | 1 +
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> index 828f7c444d30..dcdd4fef1c7a 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> @@ -695,9 +695,10 @@ static int idpf_rx_buf_alloc_singleq(struct idpf_rx_queue *rxq)
>  static int idpf_rx_bufs_init_singleq(struct idpf_rx_queue *rxq)
>  {
> struct libeth_fq fq = {
> - .count = rxq->desc_count,
> - .type = LIBETH_FQE_MTU,
> - .nid = idpf_q_vector_to_mem(rxq->q_vector),
> + .count = rxq->desc_count,
> + .type = LIBETH_FQE_MTU,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> + .nid = idpf_q_vector_to_mem(rxq->q_vector),
> };
> int ret;
>
> @@ -754,6 +755,7 @@ static int idpf_rx_bufs_init(struct idpf_buf_queue *bufq,
> .truesize = bufq->truesize,
> .count = bufq->desc_count,
> .type = type,
> + .buf_len = IDPF_RX_MAX_BUF_SZ,
> .hsplit = idpf_queue_has(HSPLIT_EN, bufq),
> .xdp = idpf_xdp_enabled(bufq->q_vector->vport),
> .nid = idpf_q_vector_to_mem(bufq->q_vector),
> diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> index 75b977094741..a1255099656f 100644
> --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
> @@ -101,6 +101,7 @@ do {								\
>  	idx = 0;						\
>  } while (0)
>
> +#define IDPF_RX_MAX_BUF_SZ (16384 - 128)
> #define IDPF_RX_BUF_STRIDE 32
> #define IDPF_RX_BUF_POST_STRIDE 16
> #define IDPF_LOW_WATERMARK 64
> --
> 2.39.2
Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@...el.com>