Date: Fri, 1 Mar 2024 16:02:26 +0100
From: Richard Gobert <richardbgobert@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
 dsahern@...nel.org, shuah@...nel.org, liujian56@...wei.com,
 horms@...nel.org, aleksander.lobakin@...el.com, linyunsheng@...wei.com,
 therbert@...gle.com, netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
 linux-kselftest@...r.kernel.org
Subject: Re: [PATCH net-next 1/3] net: gro: set {inner_,}network_header in
 receive phase



Eric Dumazet wrote:
> On Thu, Feb 29, 2024 at 2:22 PM Richard Gobert <richardbgobert@...il.com> wrote:
>>
>>
>>
>> Eric Dumazet wrote:
>>>
>>> My intuition is that this patch has a high cost for normal GRO processing.
>>> SW-GRO is already a bottleneck on ARM cores in smart NICs.
>>>
>>> I would suggest instead using parameters to give both the nhoff and thoff
>>> values; this would avoid many conditionals in the fast path.
>>>
>>> ->
>>>
>>> INDIRECT_CALLABLE_SCOPE int udp6_gro_complete(struct sk_buff *skb,
>>>                                               int nhoff, int thoff)
>>> {
>>>  const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)(skb->data + nhoff);
>>>  struct udphdr *uh = (struct udphdr *)(skb->data + thoff);
>>> ...
>>> }
>>>
>>> INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb,
>>>                                               int nhoff, int thoff)
>>> {
>>>        const struct ipv6hdr *iph = (const struct ipv6hdr *)(skb->data + nhoff);
>>>        struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);
>>>
>>> Why store in skb fields things that could really be propagated more
>>> efficiently as function parameters?
>>
>> Hi Eric,
>> Thanks for the review!
>>
>> I agree, the conditionals could be a problem and are actually not needed.
>> The third commit in this patch series introduces an optimisation for
>> ipv6/ipv4 using the correct {inner_}network_header. We can remove the
>> conditionals; I thought about multiple ways to do so. First, remove the
>> conditional in skb_gro_network_offset:
>>
>>     static inline int skb_gro_network_offset(const struct sk_buff *skb)
>>     {
>>         const u32 mask = NAPI_GRO_CB(skb)->encap_mark - 1;
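>>         /* encap_mark is a single bit: mask is all-ones in the
>>          * non-encapsulated case and zero otherwise, so the OR below
>>          * selects the outer or inner offset without branching.
>>          */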
>>         return (skb_network_offset(skb) & mask) | (skb_inner_network_offset(skb) & ~mask);
>>     }
> 
> I was trying to say that we do not need all these helpers, storing
> state in NAPI_GRO_CB(skb),
> dirtying cache lines...
> 
> Ideally, the skb network/transport/... headers could be set at the
> last stage, in gro_complete(big_gro_skb),
> instead of doing this for each segment.
> 
> All the gro_receive() handlers could be much faster by taking additional
> parameters (nhoff, thoff).
> 
> skb_gro_offset() could be replaced by the current offset (nhoff or
> other name), passed as a parameter.
> 
> Here is a WIP for the gro_complete() step; it looks large, but it is
> only adding a 2nd 'offset' parameter.
> 
> The prior offset (typically the network offset) is called p_off.
> The old argument nhoff (renamed thoff, if that makes sense) points to
> the current offset.
> 

You're right; it seemed like a broad change to me, but it is mainly
cosmetic. I'll finish your version and submit it to fix the bug.
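
Just to confirm I read the WIP correctly: for the IPv4 TCP case, the
complete handler would end up looking roughly like the sketch below (my
own reading of your tcp6 example above, not code taken from your patch;
the is_atomic handling is omitted):

    INDIRECT_CALLABLE_SCOPE int tcp4_gro_complete(struct sk_buff *skb,
                                                  int p_off, int thoff)
    {
            /* Both headers are reached via explicit offsets, so nothing
             * needs to be stored in or read back from skb header fields.
             */
            const struct iphdr *iph = (const struct iphdr *)(skb->data + p_off);
            struct tcphdr *th = (struct tcphdr *)(skb->data + thoff);

            th->check = ~tcp_v4_check(skb->len - thoff, iph->saddr,
                                      iph->daddr, 0);
            skb_shinfo(skb)->gso_type |= SKB_GSO_TCPV4;

            return tcp_gro_complete(skb);
    }

If that matches your intent, I'll apply the same two-offset pattern to
the remaining gro_complete() callbacks.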

I still believe that setting inner_network_header is a valuable change.
Even though skb_gro_network_offset is used, setting inner_network_header
in the encapsulation protocol functions (such as ipip_gro_receive) allows
us to remove the conditionals from the {ipv6,inet}_gro_receive gro_list
loop and to drop flush_id from napi_gro_cb, as described in the 3rd
commit.
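Concretely, something along these lines (an illustrative sketch based on
the current ipip_gro_receive, not the exact code from the 3rd commit):

    static struct sk_buff *ipip_gro_receive(struct list_head *head,
                                            struct sk_buff *skb)
    {
            /* Only one level of encapsulation is supported. */
            if (NAPI_GRO_CB(skb)->encap_mark) {
                    NAPI_GRO_CB(skb)->flush = 1;
                    return NULL;
            }

            NAPI_GRO_CB(skb)->encap_mark = 1;

            /* Record where the inner IP header starts, so the gro_list
             * loop can use skb_inner_network_offset() directly instead
             * of branching on encap_mark.
             */
            skb_set_inner_network_header(skb, skb_gro_offset(skb));

            return inet_gro_receive(head, skb);
    }
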
What are your thoughts about it as a separate patch?
