Date: Fri, 22 Mar 2019 18:37:39 +0200
From: Tariq Toukan <ttoukan.linux@...il.com>
To: Eric Dumazet <edumazet@...gle.com>, "David S . Miller" <davem@...emloft.net>
Cc: netdev <netdev@...r.kernel.org>, Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH v3 net-next 0/3] tcp: add rx/tx cache to reduce lock contention

On 3/22/2019 5:56 PM, Eric Dumazet wrote:
> On hosts with many cpus we can observe a very serious contention
> on spinlocks used in the mm slab layer.
>
> The following can happen quite often :
>
> 1) TX path
>    sendmsg() allocates one (fclone) skb on CPU A, sends a clone.
>    ACK is received on CPU B, and consumes the skb that was in the retransmit
>    queue.
>
> 2) RX path
>    network driver allocates skb on CPU C
>    recvmsg() happens on CPU D, freeing the skb after it has been delivered
>    to user space.
>
> In both cases, we are hitting the asymmetric alloc/free pattern
> for which slab has to drain alien caches. At 8 Mpps, this represents
> 16 M alloc/free operations per second and has a huge penalty.
>
> In an interesting experiment, I tried to use a single kmem_cache for all the skbs
> (in skb_init() : skbuff_fclone_cache = skbuff_head_cache =
>   kmem_cache_create("skbuff_fclone_cache", sizeof(struct sk_buff_fclones),);
> and most of the contention disappeared, since cpus could better use
> their local slab per-cpu cache.
>
> But we can actually do better, in the following patches.
>
> TX : at ACK time, no longer free the skb but put it back in a tcp socket cache,
> so that the next sendmsg() can reuse it immediately.
>
> RX : at recvmsg() time, do not free the skb but put it in a tcp socket cache
> so that it can be freed by the cpu feeding the incoming packets in BH.
>
> This increased the performance of a small RPC benchmark by about 10 % on a host
> with 112 hyperthreads.
>

Hi Eric,

Does this have any effect on non tcp traffic?
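[Editor's sketch] Below is a minimal, standalone C model of the single-slot per-socket cache idea described in the cover letter above. The struct and helper names (struct skb, struct sock, sock_free_skb, sock_alloc_skb) are simplified stand-ins chosen for illustration; they are not the identifiers used by the actual kernel patches, and the real patches operate on struct sk_buff under the usual locking and BH rules rather than on malloc/free.

/* Standalone model of a one-slot per-socket buffer cache: instead of
 * returning a buffer to the allocator on free (which, when alloc and
 * free happen on different CPUs, forces slab to drain alien caches),
 * the buffer is parked on the socket and reused by the next allocation.
 * All names here are illustrative, not the kernel's. */
#include <stdio.h>
#include <stdlib.h>

struct skb {                       /* stand-in for struct sk_buff */
    char payload[2048];
};

struct sock {                      /* stand-in for struct sock */
    struct skb *skb_cache;         /* one-slot cache of a recently freed skb */
};

/* Free path: park the buffer on the socket when the slot is empty,
 * otherwise fall back to a real free. */
static void sock_free_skb(struct sock *sk, struct skb *skb)
{
    if (!sk->skb_cache) {
        sk->skb_cache = skb;       /* cached for the next allocation */
        return;
    }
    free(skb);                     /* slot occupied: give it back to the allocator */
}

/* Alloc path: reuse the cached buffer when one is available,
 * otherwise allocate a fresh one. */
static struct skb *sock_alloc_skb(struct sock *sk)
{
    struct skb *skb = sk->skb_cache;

    if (skb) {
        sk->skb_cache = NULL;      /* consume the cached buffer */
        return skb;
    }
    return malloc(sizeof(struct skb));
}

int main(void)
{
    struct sock sk = { .skb_cache = NULL };

    struct skb *a = sock_alloc_skb(&sk);   /* comes from the allocator */
    sock_free_skb(&sk, a);                 /* parked in the socket cache */
    struct skb *b = sock_alloc_skb(&sk);   /* reused: b points to the same buffer */

    printf("reused: %s\n", a == b ? "yes" : "no");
    free(b);
    return 0;
}

In the kernel setting, the point of this pattern is that the CPU which normally allocates buffers for a flow (the sendmsg() caller on TX, the BH softirq on RX) is also the one that ends up reusing or freeing them, so the slab allocator's per-cpu caches are hit instead of the contended alien-cache path.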