Message-ID: <20250612153037.59335f8f@kernel.org>
Date: Thu, 12 Jun 2025 15:30:37 -0700
From: Jakub Kicinski <kuba@...nel.org>
To: Dragos Tatulea <dtatulea@...dia.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
 pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org,
 donald.hunter@...il.com, sdf@...ichev.me, almasrymina@...gle.com,
 dw@...idwei.uk, asml.silence@...il.com, ap420073@...il.com,
 jdamato@...tly.com, michael.chan@...adcom.com
Subject: Re: [RFC net-next 19/22] eth: bnxt: use queue op config validate

On Thu, 12 Jun 2025 15:52:12 +0000 Dragos Tatulea wrote:
> On Thu, Jun 12, 2025 at 07:10:28AM -0700, Jakub Kicinski wrote:
> > On Thu, 12 Jun 2025 11:56:26 +0000 Dragos Tatulea wrote:  
> > > For the hypothetical situation when the user configures a larger buffer
> > > than the ring size * MTU. Should the check happen in validate or should
> > > the max buffer size be dynamic depending on ring size and MTU?  
> > 
> > Hm, why does the ring size come into the calculation?
> 
> There is a relationship between ring size, MTU and how much memory a queue
> would need for a full ring, right? Even if relationship is driver dependent.

I see, yes, I think I did something along those lines in patch 16 here.
But the range of values for bnxt is pretty limited, so there are a lot
fewer corner cases to deal with.

Not sure about the calculation depending on MTU, tho. We're talking
about HW-GRO enabled traffic, so packets should be tightly packed into
the buffer, right? So the MTU of the chunks really doesn't matter from
the buffer sizing perspective. If they are not packed, using larger
buffers is pointless.
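
(To put rough numbers on it, purely as an illustration: a 64kB buffer
filled by HW-GRO holds ~64kB of coalesced payload whether the wire MTU
is 1500 or 9000, so the MTU doesn't change how much memory a full ring
pins. Without aggregation the same buffer carries a single ~1500 byte
frame, i.e. only ~2% of it gets used.)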

> > I don't think it's a practical configuration, so it should be perfectly
> > fine for the driver to reject it. But in principle if user wants to
> > configure a 128 entry ring with 1MB buffers.. I guess they must have 
> > a lot of DRAM to waste, but other than that I don't see a reason to
> > stop them within the core?
> >  
> Ok, so config can be rejected. How about the driver changing the allowed
> max based on the current ring size and MTU? This would allow larger
> buffers on larger rings and MTUs.

Yes.
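
Roughly something along these lines in the driver's validate callback,
say (all names below -- my_queue_cfg, my_validate_rx_buf_len() -- are
made up for illustration, not the actual queue-op API from this series):

#include <linux/errno.h>

struct my_queue_cfg {
	unsigned int rx_buf_len;	/* requested rx buffer size, bytes */
};

static int my_validate_rx_buf_len(unsigned int ring_size, unsigned int mtu,
				  const struct my_queue_cfg *cfg)
{
	/* Hypothetical rule: a single buffer is never larger than what
	 * a completely full ring of MTU-sized frames could carry, so
	 * the accepted max scales up with bigger rings and bigger MTUs.
	 */
	unsigned long long max_len = (unsigned long long)ring_size * mtu;

	if (cfg->rx_buf_len > max_len)
		return -EINVAL;
	return 0;
}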

> There is another interesting case where the user specifies some large
> buffer size which amounts to roughly the max memory for the current ring
> and MTU configuration. We'd end up with a page_pool with a size of one
> which is not very useful...

Right, we can probably save ourselves from the corner cases by capping
the allowed configuration at the max TSO size, so 512kB? Does that help?
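
To sketch what that cap plus the page_pool concern could look like
(again hypothetical names; 512kB is just the figure from above, the
exact kernel constant the driver would use may differ):

#include <linux/errno.h>

#define MY_MAX_RX_BUF_LEN	(512 * 1024)	/* "max TSO size" cap */

static int my_check_rx_buf_len(unsigned int ring_size, unsigned int mtu,
			       unsigned int req_buf_len)
{
	unsigned int buf_len = req_buf_len;
	unsigned long long ring_mem, pool_size;

	if (!buf_len)
		return -EINVAL;
	if (buf_len > MY_MAX_RX_BUF_LEN)
		buf_len = MY_MAX_RX_BUF_LEN;

	/* rough memory budget of a full ring at the current MTU */
	ring_mem = (unsigned long long)ring_size * mtu;
	/* how many buffers of this size the pool would end up with */
	pool_size = ring_mem / buf_len;

	if (pool_size < 2)	/* the "page_pool of size one" case */
		return -EINVAL;
	return 0;
}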

> > Documenting sounds good, just wanna make sure I understand the potential
> > ambiguity.  
> Is it clearer now? I was just thinking about how to add support for this
> in mlx5 and stumbled into this grey area.

Yes, thanks!
