Message-ID: <20250212102936.23617f03@kernel.org>
Date: Wed, 12 Feb 2025 10:29:36 -0800
From: Jakub Kicinski <kuba@...nel.org>
To: Alexander Lobakin <aleksander.lobakin@...el.com>
Cc: Eric Dumazet <edumazet@...gle.com>, Andrew Lunn <andrew+netdev@...n.ch>,
"David S. Miller" <davem@...emloft.net>, Paolo Abeni <pabeni@...hat.com>,
Lorenzo Bianconi <lorenzo@...nel.org>, Daniel Xu <dxu@...uu.xyz>, "Alexei
Starovoitov" <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
"Andrii Nakryiko" <andrii@...nel.org>, John Fastabend
<john.fastabend@...il.com>, Toke Høiland-Jørgensen
<toke@...nel.org>, "Jesper Dangaard Brouer" <hawk@...nel.org>, Martin KaFai
Lau <martin.lau@...ux.dev>, <netdev@...r.kernel.org>,
<bpf@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH net-next v4 0/8] bpf: cpumap: enable GRO for XDP_PASS
frames
On Wed, 12 Feb 2025 16:55:52 +0100 Alexander Lobakin wrote:
> > You mean to cache napi_id in gro_node?
> >
> > Then we get +8 bytes to sizeof(napi_struct) for little reason...
Right but I think the expectation would be that we don't ever touch
that on the fast path, right? The "real" napi_id would basically
go down below:
/* control-path-only fields follow */
8B of cold data doesn't matter at all. But I haven't checked if
we need the napi->napi_id access anywhere hot, do we?
> > Dunno, if you really prefer, I can do it that way.
>
> Alternative to avoid +8 bytes:
>
> struct napi_struct {
> ...
>
> union {
> struct gro_node gro;
> struct {
> u8 pad[offsetof(struct gro_node, napi_id)];
> u32 napi_id;
> };
> };
>
> This is effectively the same as what struct_group() does, just uglier.
> But it allows gro_node to be declared separately.