Message-ID: <aAfANdg0sfIXdVMx@mini-arch>
Date: Tue, 22 Apr 2025 09:13:41 -0700
From: Stanislav Fomichev <stfomichev@...il.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
	pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org,
	donald.hunter@...il.com, sdf@...ichev.me, almasrymina@...gle.com,
	dw@...idwei.uk, asml.silence@...il.com, ap420073@...il.com,
	jdamato@...tly.com, dtatulea@...dia.com, michael.chan@...adcom.com
Subject: Re: [RFC net-next 16/22] eth: bnxt: adjust the fill level of agg
 queues with larger buffers

On 04/21, Jakub Kicinski wrote:
> The driver tries to provision more agg buffers than header buffers
> since multiple agg segments can reuse the same header. The calculation
> / heuristic tries to provide enough pages for 65k of data for each header
> (or 4 frags per header if the result is too big). This calculation is
> currently global to the adapter. If we increase the buffer sizes 8x,
> we don't want 8x the amount of memory sitting on the rings.
> Luckily we don't have to fill the rings completely; adjust
> the fill level dynamically in case a particular queue has buffers
> larger than the global size.
> 
> Signed-off-by: Jakub Kicinski <kuba@...nel.org>
> ---
>  drivers/net/ethernet/broadcom/bnxt/bnxt.c | 23 ++++++++++++++++++++---
>  1 file changed, 20 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> index 28f8a4e0d41b..43497b335329 100644
> --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
> @@ -3791,6 +3791,21 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
>  	}
>  }
>  
> +static int bnxt_rx_agg_ring_fill_level(struct bnxt *bp,
> +				       struct bnxt_rx_ring_info *rxr)
> +{
> +	/* User may have chosen larger than default rx_page_size,
> +	 * we keep the ring sizes uniform and also want uniform amount
> +	 * of bytes consumed per ring, so cap how much of the rings we fill.
> +	 */
> +	int fill_level = bp->rx_agg_ring_size;
> +
> +	if (rxr->rx_page_size > bp->rx_page_size)
> +		fill_level /= rxr->rx_page_size / bp->rx_page_size;
> +
> +	return fill_level;
> +}
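
If I read the math right, the capping above keeps the byte budget per
ring constant rather than the entry count. A standalone illustration
(the numbers are made up, only the arithmetic mirrors the hunk):

	#include <stdio.h>

	/* Same arithmetic as the hunk above; the numbers are made up. */
	static int fill_level(int ring_size, int q_page, int global_page)
	{
		int level = ring_size;

		if (q_page > global_page)
			level /= q_page / global_page;

		return level;
	}

	int main(void)
	{
		/* 8x larger pages -> 1/8th of the entries, same byte budget */
		printf("%d\n", fill_level(2048, 32768, 4096));	/* prints 256 */
		return 0;
	}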

I don't know the driver well enough (so take it as an optional nit),
but it seems like there should be a per-rxr rx_agg_ring_size
pre-configured somewhere. Deriving the fill_level from the per-queue
rx_page_size feels a bit hacky; something like the sketch below is
what I had in mind.
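
(Completely untested, and rx_agg_fill_level is a made-up field on
bnxt_rx_ring_info; the point is just to precompute the capped level
once when the per-queue page size is set, instead of re-deriving it
on every fill.)

	/* Hypothetical: precompute the capped fill level per ring. */
	static void bnxt_set_rx_agg_fill_level(struct bnxt *bp,
					       struct bnxt_rx_ring_info *rxr)
	{
		rxr->rx_agg_fill_level = bp->rx_agg_ring_size;

		if (rxr->rx_page_size > bp->rx_page_size)
			rxr->rx_agg_fill_level /=
				rxr->rx_page_size / bp->rx_page_size;
	}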

(Or rather: reading this snippet makes it hard for me to understand
what the size of the agg ring would be, given all the code in
bnxt_set_ring_params.)
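
For my own understanding, the heuristic from the commit message would
boil down to something like this (a simplified paraphrase, not the
actual bnxt_set_ring_params code):

	/* "Enough pages for 65k of data for each header (or 4 frags per
	 * header if the result is too big)" -> the agg factor is
	 * min(4, 65536 / page_size) and the agg ring scales by it.
	 */
	static unsigned int agg_ring_size(unsigned int rx_ring_size,
					  unsigned int rx_page_size)
	{
		unsigned int agg_factor = 65536 / rx_page_size;

		if (agg_factor > 4)
			agg_factor = 4;

		return rx_ring_size * agg_factor;
	}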
