Message-ID: <bb866d37-6e89-460f-a411-e9f26b0fa4e4@redhat.com>
Date: Fri, 5 Dec 2025 16:22:55 +0100
From: Paolo Abeni <pabeni@...hat.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: netdev@...r.kernel.org, "David S. Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Simon Horman <horms@...nel.org>,
Neal Cardwell <ncardwell@...gle.com>, Kuniyuki Iwashima <kuniyu@...gle.com>,
David Ahern <dsahern@...nel.org>
Subject: Re: [RFC PATCH 1/2] net: gro: avoid relaying on skb->transport_header
at receive time
On 12/5/25 3:37 PM, Eric Dumazet wrote:
> On Fri, Dec 5, 2025 at 6:04 AM Paolo Abeni <pabeni@...hat.com> wrote:
>>
>> Currently {tcp,udp}_gro_receive relay on the gro network stage setting
>
> rely :)
>
>> the correct transport header offset for all the skbs held by the GRO
>> engine.
>>
>> Such an assumption is not necessary, as the code can instead leverage the
>> offset already available for the currently processed skb. Add a couple
>> of helpers for readability's sake.
>>
>> As skb->transport_header lays on a different cacheline wrt skb->data,
>> this should save a cacheline access for each packet aggregation.
>> Additionally this will make the next patch possible.
>>
>> Note that the compiler (gcc 15.2.1) does inline the tcp_gro_lookup()
>> call in tcp_gro_receive(), so the additional argument is only relevant
>> for the fraglist case.
>>
>> Signed-off-by: Paolo Abeni <pabeni@...hat.com>
>> ---
>> include/net/gro.h | 26 ++++++++++++++++++++++++++
>> include/net/tcp.h | 3 ++-
>> net/ipv4/tcp_offload.c | 15 ++++++++-------
>> net/ipv4/udp_offload.c | 4 ++--
>> net/ipv6/tcpv6_offload.c | 2 +-
>> 5 files changed, 39 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/net/gro.h b/include/net/gro.h
>> index b65f631c521d..fdb9285ab117 100644
>> --- a/include/net/gro.h
>> +++ b/include/net/gro.h
>> @@ -420,6 +420,18 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
>> struct udphdr *uh, struct sock *sk);
>> int udp_gro_complete(struct sk_buff *skb, int nhoff, udp_lookup_t lookup);
>>
>> +/* Return the skb hdr corresponding to the specified skb2 hdr.
>> + * skb2 is held in the gro engine, i.e. its headers are in the linear part.
>> + */
>> +static inline const void *
>> +skb_gro_header_from(const struct sk_buff *skb, const struct sk_buff *skb2,
>> + const void *hdr2)
>> +{
>> + size_t offset = (unsigned char *)hdr2 - skb2->data;
>> +
>> + return skb->data + offset;
>> +}
>
> I would rather switch gro to pass an @offset instead of a header pointer ?
>
> Rebuilding one header pointer from offset is fast : skb->data + offset
> ( offset : network header, transport header, ...)
I considered that option and opted for the above for one small reason: it
produces slightly more compact (C) code in the caller.

I'll switch to an offset in the next revision.
> As a matter of fact, some GRO state variables could be onstack, instead
> of being stored in NAPI_GRO_CB()
Do you mean the network offsets? In any case, I hope we can keep that
work separate from this one?
> This would avoid some stalls because skb->cb[] has been cleared with
> memset() with long words, while GRO is using smaller fields.

Whoops, I never considered store-forwarding induced stalls. Something for
me to ponder.
Many thanks!
Paolo