Message-ID: <CANn89i+tRF0QerD44j=QRx34_n39jNJu+SkDP+owUw2=+4q=8w@mail.gmail.com>
Date: Fri, 5 Dec 2025 07:36:27 -0800
From: Eric Dumazet <edumazet@...gle.com>
To: Paolo Abeni <pabeni@...hat.com>
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Simon Horman <horms@...nel.org>, Neal Cardwell <ncardwell@...gle.com>,
Kuniyuki Iwashima <kuniyu@...gle.com>, David Ahern <dsahern@...nel.org>
Subject: Re: [RFC PATCH 1/2] net: gro: avoid relaying on skb->transport_header
at receive time
On Fri, Dec 5, 2025 at 7:23 AM Paolo Abeni <pabeni@...hat.com> wrote:
>
> On 12/5/25 3:37 PM, Eric Dumazet wrote:
> > On Fri, Dec 5, 2025 at 6:04 AM Paolo Abeni <pabeni@...hat.com> wrote:
> >>
> >> Currently {tcp,udp}_gro_receive relay on the gro network stage setting
> >
> > rely :)
> >
> >> the correct transport header offset for all the skbs held by the GRO
> >> engine.
> >>
> >> Such an assumption is not necessary, as the code can instead leverage
> >> the offset already available for the currently processed skb. Add a
> >> couple of helpers for readability's sake.
> >>
> >> As skb->transport_header lies on a different cacheline than skb->data,
> >> this should save a cacheline access for each packet aggregation.
> >> Additionally this will make the next patch possible.
> >>
> >> Note that the compiler (gcc 15.2.1) does inline the tcp_gro_lookup()
> >> call in tcp_gro_receive(), so the additional argument is only relevant
> >> for the fraglist case.
> >>
> >> Signed-off-by: Paolo Abeni <pabeni@...hat.com>
> >> ---
> >>  include/net/gro.h        | 26 ++++++++++++++++++++++++++
> >>  include/net/tcp.h        |  3 ++-
> >>  net/ipv4/tcp_offload.c   | 15 ++++++++-------
> >>  net/ipv4/udp_offload.c   |  4 ++--
> >>  net/ipv6/tcpv6_offload.c |  2 +-
> >>  5 files changed, 39 insertions(+), 11 deletions(-)
> >>
> >> diff --git a/include/net/gro.h b/include/net/gro.h
> >> index b65f631c521d..fdb9285ab117 100644
> >> --- a/include/net/gro.h
> >> +++ b/include/net/gro.h
> >> @@ -420,6 +420,18 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
> >>                                  struct udphdr *uh, struct sock *sk);
> >>  int udp_gro_complete(struct sk_buff *skb, int nhoff, udp_lookup_t lookup);
> >>
> >> +/* Return the skb hdr corresponding to the specified skb2 hdr.
> >> + * skb2 is held in the gro engine, i.e. its headers are in the linear part.
> >> + */
> >> +static inline const void *
> >> +skb_gro_header_from(const struct sk_buff *skb, const struct sk_buff *skb2,
> >> +                    const void *hdr2)
> >> +{
> >> +       size_t offset = (unsigned char *)hdr2 - skb2->data;
> >> +
> >> +       return skb->data + offset;
> >> +}
> >
> > I would rather switch gro to pass an @offset instead of a header pointer ?
> >
> > Rebuilding one header pointer from offset is fast : skb->data + offset
> > ( offset : network header, transport header, ...)
>
> I considered such an option and opted for the above for a very small
> reason: it produces a little more compact (C) code in the caller.
>
> I'll switch to offset in next revisions.
I am fine with that.
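Something like this perhaps (completely untested sketch; skb_gro_header_at()
is only an illustrative name, not an existing helper):

static inline const void *skb_gro_header_at(const struct sk_buff *skb,
                                            size_t offset)
{
        /* Illustrative only: @offset is relative to skb->data; the caller
         * would compute it once from the header pointer it already has for
         * the skb currently being processed, e.g.
         * (unsigned char *)th - skb->data, and pass it down instead of a
         * pointer into another skb.
         */
        return skb->data + offset;
}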
> > As a matter of fact, some GRO state variables could be onstack, instead
> > of being stored in NAPI_GRO_CB()
> Do you mean the network offsets? In any case, I hope we can keep such
> work separate from this one?
Sure, just a general observation.
BTW the offending memset() can be optimized a bit to not let the
compiler call an external function.
I do not know how to upstream this properly ;)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index a00808f7be6a..7df63dc79cf3 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -424,7 +424,13 @@ struct sk_buff *slab_build_skb(void *data)
        if (unlikely(!skb))
                return NULL;

-       memset(skb, 0, offsetof(struct sk_buff, tail));
+       /* Implement memset(skb, 0, offsetof(struct sk_buff, tail))
+        * so that compiler inlines it ;)
+        */
+       memset(skb, 0, 128);
+       barrier();
+       memset((void *)skb + 128, 0, offsetof(struct sk_buff, tail) - 128);
+
        data = __slab_build_skb(data, &size);
        __finalize_skb_around(skb, data, size);
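
For context, a stand-alone illustration of the same trick; my understanding
is that gcc lowers a single large constant-size memset() to a call to the
out-of-line memset, while two smaller chunks separated by a compiler barrier
(so they cannot be merged back together) are emitted as inline stores. The
struct and the 128-byte split point below are made up for the example, and
barrier() only stands in for the kernel macro of the same name:

#include <string.h>

struct big {
        char bytes[192];        /* stand-in for the sk_buff head area */
};

/* Pure compiler barrier, same role as the kernel's barrier(). */
#define barrier()       asm volatile("" ::: "memory")

static void clear_big(struct big *b)
{
        /* Split the clear into two constant-size chunks; the barrier keeps
         * the compiler from fusing them back into one big memset() call.
         */
        memset(b, 0, 128);
        barrier();
        memset((char *)b + 128, 0, sizeof(*b) - 128);
}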