Message-ID: <8de1d5d4-ec9e-4684-827b-92db59a3a173@gmail.com>
Date: Wed, 7 Feb 2024 15:49:16 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: Eric Dumazet <edumazet@...gle.com>
Cc: netdev@...r.kernel.org, davem@...emloft.net, dsahern@...nel.org,
 pabeni@...hat.com, kuba@...nel.org
Subject: Re: [PATCH net-next] net: cache for same cpu skb_attempt_defer_free

On 2/7/24 15:26, Eric Dumazet wrote:
> On Wed, Feb 7, 2024 at 3:42 PM Pavel Begunkov <asml.silence@...il.com> wrote:
>>
>> Optimise skb_attempt_defer_free() for the case where it executes on
>> the CPU the skb was allocated on. Instead of __kfree_skb() ->
>> kmem_cache_free(), we can disable softirqs and put the buffer into the
>> CPU's local caches.
>>
>> Testing it with a CPU-bound TCP ping-pong benchmark (netbench) showed
>> a 1% throughput improvement (392.2 -> 396.4 Krps). Cross-checking with
>> profiles, the total CPU share of skb_attempt_defer_free() dropped by
>> 0.6%. Note: I'd expect the win to double with rx-only benchmarks, since
>> the optimisation is for the receive path, but this test spends >55% of
>> its CPU time doing writes.
>>
>> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
>> ---
>>   net/core/skbuff.c | 16 +++++++++++++++-
>>   1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index edbbef563d4d..5ac3c353c8a4 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -6877,6 +6877,20 @@ void __skb_ext_put(struct skb_ext *ext)
>>   EXPORT_SYMBOL(__skb_ext_put);
>>   #endif /* CONFIG_SKB_EXTENSIONS */
>>
>> +static void kfree_skb_napi_cache(struct sk_buff *skb)
>> +{
>> +       /* fclone'd skbs and hardirq context can't use the napi cache */
>> +       if (skb->fclone != SKB_FCLONE_UNAVAILABLE || in_hardirq()) {
> 
> skb_attempt_defer_free() can not run from hard irq, please do not add
> code suggesting otherwise...

I'll make the change, thanks

>> +               __kfree_skb(skb);
>> +               return;
>> +       }
>> +
>> +       local_bh_disable();
>> +       skb_release_all(skb, SKB_DROP_REASON_NOT_SPECIFIED, false);
>> +       napi_skb_cache_put(skb);
>> +       local_bh_enable();
>> +}
>> +
> 
> I had a patch adding local per-cpu caches of ~8 skbs to batch
> sd->defer_lock acquisitions; it seems I forgot to finish it.

I played with some naive batching approaches there before but couldn't
get anything out of it. From my observations, skb_attempt_defer_free()
was rarely getting SKBs targeting the same CPU, but there are probably
irq affinity configurations where it'd make more sense; a rough sketch
of that kind of batching follows below.
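
For illustration, a minimal sketch of what such sender-side batching
could look like (not Eric's actual patch; softnet_data and its defer_*
fields are the existing ones, everything named defer_stash* is made up,
and the defer_csd IPI kick the real code does is left out):

/* Illustration only. A small per-CPU stash of skbs headed for one
 * remote CPU, so that flushing takes sd->defer_lock once per batch
 * instead of once per skb.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define DEFER_STASH_MAX		8

struct defer_stash {
	struct sk_buff	*head;		/* chained via skb->next */
	int		count;
	int		target_cpu;
};
static DEFINE_PER_CPU(struct defer_stash, defer_stash);

static void defer_stash_flush(struct defer_stash *stash)
{
	struct softnet_data *sd = &per_cpu(softnet_data, stash->target_cpu);
	struct sk_buff *tail = stash->head;

	if (!tail)
		return;
	while (tail->next)		/* find the end of the local chain */
		tail = tail->next;

	spin_lock_bh(&sd->defer_lock);	/* one acquisition for the batch */
	tail->next = sd->defer_list;
	sd->defer_list = stash->head;
	sd->defer_count += stash->count;
	spin_unlock_bh(&sd->defer_lock);
	/* the real code would also kick the remote CPU via sd->defer_csd */

	stash->head = NULL;
	stash->count = 0;
}

/* Called instead of taking sd->defer_lock per skb; the caller must be
 * pinned to the CPU, e.g. under local_bh_disable().
 */
static void defer_stash_add(struct sk_buff *skb, int cpu)
{
	struct defer_stash *stash = this_cpu_ptr(&defer_stash);

	if (stash->count && stash->target_cpu != cpu)
		defer_stash_flush(stash);	/* drain on target change */

	stash->target_cpu = cpu;
	skb->next = stash->head;
	stash->head = skb;
	if (++stash->count >= DEFER_STASH_MAX)
		defer_stash_flush(stash);
}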

Just to note that this patch targets cases with perfect affinity, so
it's orthogonal or complementary to defer batching.

-- 
Pavel Begunkov
