Message-ID: <fyuhlxqjgrypi2gu24kj3x3noulgibd3kdgmek6qmbssfjailj@uf7w3fyb7sqz>
Date: Tue, 17 Jun 2025 12:36:03 +0000
From: Dragos Tatulea <dtatulea@...dia.com>
To: Jakub Kicinski <kuba@...nel.org>
Cc: davem@...emloft.net, netdev@...r.kernel.org, edumazet@...gle.com,
pabeni@...hat.com, andrew+netdev@...n.ch, horms@...nel.org, donald.hunter@...il.com,
sdf@...ichev.me, almasrymina@...gle.com, dw@...idwei.uk, asml.silence@...il.com,
ap420073@...il.com, jdamato@...tly.com, michael.chan@...adcom.com
Subject: Re: [RFC net-next 19/22] eth: bnxt: use queue op config validate
On Fri, Jun 13, 2025 at 04:16:36PM -0700, Jakub Kicinski wrote:
> On Fri, 13 Jun 2025 19:02:53 +0000 Dragos Tatulea wrote:
> > > > There is a relationship between ring size, MTU and how much memory a queue
> > > > would need for a full ring, right? Even if relationship is driver dependent.
> > >
> > > I see, yes, I think I did something along those lines in patch 16 here.
> > > But the range of values for bnxt is pretty limited so a lot fewer
> > > corner cases to deal with.
> >
> > Indeed.
> >
> > > Not sure about the calculation depending on MTU, tho. We're talking
> > > about HW-GRO enabled traffic, they should be tightly packed into the
> > > buffer, right? So MTU of chunks really doesn't matter from the buffer
> > > sizing perspective. If they are not packed, using larger buffers is
> > > pointless.
> > >
> > But it matters from the perspective of total memory allocatable by the
> > queue (aka page pool size), right? A 1K ring with 1500 MTU would
> > need less total memory than a 1K ring with 9000 MTU to cover the
> > full queue.
>
> True but that's only relevant to the "normal" buffers?
> IIUC for bnxt and fbnic the ring size for rx-jumbo-pending
> (which is where payloads go) is always in 4k buffer units.
> Whether the MTU is 1k or 9k we'd GRO the packets together
> into the 4k buffers. So I don't see why the MTU matters
> for the amount of memory held on the aggregation ring.
>
I see what you mean. mlx5 sizes the memory requirements according to
ring size and MTU. That's where my misunderstanding came from.
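
To make the contrast concrete, here is a rough sketch of the two sizing
models as I understand them (helper names are made up, not actual driver
code):

#include <linux/kernel.h>	/* DIV_ROUND_UP */
#include <linux/mm.h>		/* PAGE_SIZE */

/* mlx5-style: page pool sized from ring size and MTU, so a 9000 MTU
 * ring reserves proportionally more memory than a 1500 MTU one.
 */
static unsigned int pp_size_mtu_based(unsigned int ring_size,
				      unsigned int mtu)
{
	unsigned int frags_per_pkt = DIV_ROUND_UP(mtu, PAGE_SIZE);

	return ring_size * frags_per_pkt;
}

/* bnxt/fbnic-style (per your description): the aggregation ring holds
 * fixed 4K buffers and HW-GRO packs payloads into them, so MTU drops
 * out of the calculation entirely.
 */
static unsigned int pp_size_fixed_units(unsigned int agg_ring_size)
{
	return agg_ring_size;	/* one 4K buffer per slot */
}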
> > Side note: We already have the disconnect between how much the driver
> > *thinks* it needs (based on ring size, MTU and other stuff) and how much
> > memory is given by a memory provider from the application side.
>
> True, tho, I think ideally the drivers would accept starting
> with a ring that's not completely filled. I think that's better
> user experience.
Agreed.
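On the driver side I'd picture something along these lines, a minimal
best-effort sketch (struct rx_queue, mp_alloc_buf() and post_rx_desc()
are hypothetical stand-ins, not real driver or core API):

#include <linux/errno.h>

struct rx_queue;					/* hypothetical */
void *mp_alloc_buf(struct rx_queue *q);			/* hypothetical */
void post_rx_desc(struct rx_queue *q, void *buf);	/* hypothetical */

/* Best-effort fill at queue start: post whatever buffers the memory
 * provider can give us and only fail if we got nothing at all.
 */
static int rxq_start_fill(struct rx_queue *q, unsigned int ring_size)
{
	unsigned int filled = 0;

	while (filled < ring_size) {
		void *buf = mp_alloc_buf(q);	/* provider may run dry */

		if (!buf)
			break;
		post_rx_desc(q, buf);
		filled++;
	}

	/* A partial fill is OK; the provider tops us up later. */
	return filled ? 0 : -ENOMEM;
}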
Thanks,
Dragos