Message-ID: <20230216132835.GA14032@breakpoint.cc>
Date: Thu, 16 Feb 2023 14:28:35 +0100
From: Florian Westphal <fw@...len.de>
To: Jakub Kicinski <kuba@...nel.org>
Cc: Florian Westphal <fw@...len.de>, davem@...emloft.net,
netdev@...r.kernel.org, edumazet@...gle.com, pabeni@...hat.com,
willemb@...gle.com
Subject: Re: [RFC] net: skbuff: let struct skb_ext live inside the head
Jakub Kicinski <kuba@...nel.org> wrote:
> On Wed, 15 Feb 2023 10:43:32 +0100 Florian Westphal wrote:
> > I think the cleaner solution would be to move the new extension ids
> > into sk_buff itself (at the end, uninitialized data unless used).
> >
> > Those extensions would always reside there and not in the slab object.
>
> Do you mean the entire extension? 8B of metadata + (possibly) 32B
> of the key?
32B is too much if it's for something esoteric, but see below.
> > Obviously that only makes sense for extensions where we assume
> > that typical workload will require them, which might be a hard call to
> > make.
>
> I'm guessing that's the reason why Google is okay with putting the key
> in the skb - they know they will use it most of the time. But an
> average RHEL user may appreciate the skb growth for an esoteric protocol
> to a much smaller extent :(
Absolutely, I agree that it's a non-starter to place this in sk_buff
itself. The TX side is less of a problem here because of superpackets.
For RX I think your simpler napi-recycle patch is a good start.
I feel it's better to wait before doing anything further in this
direction (e.g. an array of cached extensions or whatever) until we have
a better test case/more realistic workload(s).
If we need to look at further allocation avoidance, one thing that
could be evaluated is placing an extension struct into
sk_buff_fclones (unioned with the fclone skb).
The fclone skb is marked busy; releasing the extension clears it again.
Just something to keep in mind for later. The only downside I see is that
we can't release the extension area anymore before the skb gets queued.
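
Roughly what I have in mind, as a sketch only (not a real patch; struct
skb_ext currently ends in a flexible array, so it would need a bounded
variant or raw storage to really be unioned like this):

struct sk_buff_fclones {
	struct sk_buff		skb1;

	union {
		/* normal fast clone use */
		struct sk_buff	skb2;
		/* hypothetical: reused as extension storage while the
		 * clone slot is marked busy for this purpose
		 */
		struct skb_ext	ext;
	};

	refcount_t		fclone_ref;
};

skb_ext_put() (or whatever ends up releasing the in-place extension)
would then just clear the busy mark instead of freeing a slab object.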