Message-Id: <20190323.215806.1903411852588970839.davem@davemloft.net>
Date: Sat, 23 Mar 2019 21:58:06 -0400 (EDT)
From: David Miller <davem@...emloft.net>
To: edumazet@...gle.com
Cc: netdev@...r.kernel.org, eric.dumazet@...il.com
Subject: Re: [PATCH v3 net-next 0/3] tcp: add rx/tx cache to reduce lock contention
From: Eric Dumazet <edumazet@...gle.com>
Date: Fri, 22 Mar 2019 08:56:37 -0700
> On hosts with many cpus we can observe very serious contention
> on the spinlocks used in the mm slab layer.
>
> The following can happen quite often:
>
> 1) TX path
> sendmsg() allocates one (fclone) skb on CPU A, sends a clone.
> ACK is received on CPU B, and consumes the skb that was in the retransmit
> queue.
>
> 2) RX path
> network driver allocates skb on CPU C
> recvmsg() happens on CPU D, freeing the skb after it has been delivered
> to user space.
>
> In both cases, we are hitting the asymmetric alloc/free pattern
> for which slab has to drain alien caches. At 8 Mpps, this represents
> 16 M alloc/free operations per second (8 M allocs plus 8 M frees)
> and has a huge penalty.
>
> In an interesting experiment, I tried to use a single kmem_cache for all
> skbs, aliasing both caches in skb_init() (sketch; flags as in the stock
> skb_init()):
>
> 	skbuff_fclone_cache = skbuff_head_cache =
> 		kmem_cache_create("skbuff_fclone_cache",
> 				  sizeof(struct sk_buff_fclones), 0,
> 				  SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);
>
> Most of the contention disappeared, since cpus could better use
> their local slab per-cpu cache.
>
> But we can actually do better, as the following patches show.
>
> TX: at ACK time, no longer free the skb but put it back in a tcp socket cache,
> so that the next sendmsg() can reuse it immediately.
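>
> A minimal sketch of the TX side idea (the field name sk_tx_skb_cache and
> the hook points are illustrative, not the exact patch; cloned skbs and
> memory accounting need more care):
>
> 	/* ACK processing: stash the skb in the socket instead of freeing it. */
> 	static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb)
> 	{
> 		sk_mem_uncharge(sk, skb->truesize);
> 		if (!sk->sk_tx_skb_cache && !skb_cloned(skb)) {
> 			sk->sk_tx_skb_cache = skb;	/* next sendmsg() reuses it */
> 			return;
> 		}
> 		__kfree_skb(skb);
> 	}
>
> 	/* sendmsg() path: try the socket cache before going to the slab. */
> 	static inline struct sk_buff *sk_tx_skb_cache_get(struct sock *sk)
> 	{
> 		struct sk_buff *skb = sk->sk_tx_skb_cache;
>
> 		if (!skb)
> 			return NULL;	/* caller falls back to alloc_skb_fclone() */
> 		sk->sk_tx_skb_cache = NULL;
> 		pskb_trim(skb, 0);	/* reset payload, keep the allocation */
> 		return skb;
> 	}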
>
> RX: at recvmsg() time, do not free the skb but put it in a tcp socket cache
> so that it can be freed by the CPU feeding the incoming packets in BH.
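>
> And the RX side, in the same spirit (sk_rx_skb_cache is again an
> illustrative name; the real code must also guard against concurrent
> users of the socket):
>
> 	/* recvmsg() path: defer the free instead of calling __kfree_skb(). */
> 	static inline void sk_eat_skb(struct sock *sk, struct sk_buff *skb)
> 	{
> 		__skb_unlink(skb, &sk->sk_receive_queue);
> 		if (!sk->sk_rx_skb_cache) {
> 			sk->sk_rx_skb_cache = skb;
> 			return;
> 		}
> 		__kfree_skb(skb);
> 	}
>
> 	/* BH path feeding the socket (e.g. tcp_v4_rcv()): free the stashed
> 	 * skb on the cpu that allocated it, so that alloc and free hit the
> 	 * same slab per-cpu cache.
> 	 */
> 	static inline void sk_rx_skb_cache_flush(struct sock *sk)
> 	{
> 		struct sk_buff *skb = xchg(&sk->sk_rx_skb_cache, NULL);
>
> 		if (skb)
> 			__kfree_skb(skb);
> 	}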
>
> This increased the performance of a small RPC benchmark by about 10% on a host
> with 112 hyperthreads.
...
Sensational.
Series applied, thanks!