Message-ID: <vuv4k5wzq7463di2zgsfxikgordsmygzgns7ay2pt7lpkcnupl@jme7vozdrjaq>
Date: Thu, 12 Jun 2025 15:52:12 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com, 
	pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org, donald.hunter@...il.com, 
	sdf@...ichev.me, almasrymina@...gle.com, dw@...idwei.uk, asml.silence@...il.com, 
	ap420073@...il.com, jdamato@...tly.com, michael.chan@...adcom.com
Subject: Re: [RFC net-next 19/22] eth: bnxt: use queue op config validate

On Thu, Jun 12, 2025 at 07:10:28AM -0700, Jakub Kicinski wrote:
> On Thu, 12 Jun 2025 11:56:26 +0000 Dragos Tatulea wrote:
> > For the hypothetical situation when the user configures a larger buffer
> > than the ring size * MTU. Should the check happen in validate or should
> > the max buffer size be dynamic depending on ring size and MTU?
> 
> Hm, why does the ring size come into the calculation?
>
There is a relationship between ring size, MTU and how much memory a queue
would need for a full ring, right? Even if the relationship is driver dependent.

> I don't think it's a practical configuration, so it should be perfectly
> fine for the driver to reject it. But in principle if user wants to
> configure a 128 entry ring with 1MB buffers.. I guess they must have 
> a lot of DRAM to waste, but other than that I don't see a reason to
> stop them within the core?
>
OK, so the config can be rejected. How about the driver changing the allowed
max based on the current ring size and MTU? This would allow larger
buffers on larger rings and MTUs.

There is another interesting case where the user specifies some large
buffer size which amounts to roughly the max memory for the current ring
and MTU configuration. We'd end up with a page_pool with a size of one
which is not very useful...

> Documenting sounds good, just wanna make sure I understand the potential
> ambiguity.
Is it clearer now? I was just thinking about how to add support for this
in mlx5 and stumbled into this grey area.

Thanks,
Dragos
