Message-ID: <20190322072802-mutt-send-email-mst@kernel.org>
Date:   Fri, 22 Mar 2019 07:28:33 -0400
From:   "Michael S. Tsirkin" <mst@...hat.com>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     "David S . Miller" <davem@...emloft.net>,
        netdev <netdev@...r.kernel.org>,
        Soheil Hassas Yeganeh <soheil@...gle.com>,
        Willem de Bruijn <willemb@...gle.com>,
        Florian Westphal <fw@...len.de>,
        Tom Herbert <tom@...bertland.com>,
        Eric Dumazet <eric.dumazet@...il.com>
Subject: Re: [PATCH v2 net-next 0/3] tcp: add rx/tx cache to reduce lock
 contention

On Thu, Mar 21, 2019 at 05:14:41PM -0700, Eric Dumazet wrote:
> On hosts with many cpus we can observe a very serious contention
> on spinlocks used in mm slab layer.
> 
> The following can happen quite often :
> 
> 1) TX path
>   sendmsg() allocates one (fclone) skb on CPU A, sends a clone.
>   ACK is received on CPU B, and consumes the skb that was in the retransmit
>   queue.
> 
> 2) RX path
>   network driver allocates skb on CPU C
>   recvmsg() happens on CPU D, freeing the skb after it has been delivered
>   to user space.
> 
> In both cases, we are hitting the asymmetric alloc/free pattern
> for which slab has to drain alien caches. At 8 Mpps, this
> represents 16 million alloc/free operations per second and has a huge penalty.
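
Roughly, the pattern being described is (illustrative only, not code
from the patch):

	/* CPU A: sendmsg() or the RX driver allocates */
	skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);

	/* ... the skb is handed off to another cpu ... */

	/* CPU B: ACK processing or recvmsg() frees */
	kmem_cache_free(skbuff_head_cache, skb);
	/* the object lands in B's remote/alien cache, which slab must
	 * periodically drain back under shared spinlocks */
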
> 
> In an interesting experiment, I tried to use a single kmem_cache for all the skbs
> (in skb_init() : skbuff_fclone_cache = skbuff_head_cache =
>                   kmem_cache_create("skbuff_fclone_cache", sizeof(struct sk_buff_fclones),
>                                     0, SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);)
> and most of the contention disappeared, since cpus could better use
> their local slab per-cpu caches.
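
If I read that right, the experiment amounts to something like this in
net/core/skbuff.c (my reconstruction, not the actual diff):

	void __init skb_init(void)
	{
		/* one cache, sized for the larger fclone variant, used for
		 * every skb: a cpu's per-cpu freelist, refilled by frees of
		 * remote skbs, is then consumed by its own allocations */
		skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
							sizeof(struct sk_buff_fclones),
							0,
							SLAB_HWCACHE_ALIGN|SLAB_PANIC,
							NULL);
		skbuff_head_cache = skbuff_fclone_cache;
	}
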
> 
> But we can actually do better, in the following patches.
> 
> TX : at ACK time, no longer free the skb but put it back in a tcp socket cache,
>      so that the next sendmsg() can reuse it immediately.
> 
> RX : at recvmsg() time, do not free the skb but put it in a tcp socket cache,
>      so that it can be freed by the cpu feeding the incoming packets in BH.
> 
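If I follow, the per-socket caches look something like this (field names
guessed from the patch titles; a sketch, not the exact code):

	struct sock {
		...
		struct sk_buff	*sk_tx_skb_cache; /* kept at ACK time for the next sendmsg() */
		struct sk_buff	*sk_rx_skb_cache; /* kept at recvmsg() time, freed later in BH */
		...
	};

	/* TX side, where the ACK would otherwise free the retransmit skb: */
	if (!sk->sk_tx_skb_cache)
		sk->sk_tx_skb_cache = skb; /* reused by the next sendmsg() on this socket */
	else
		__kfree_skb(skb);
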
> This increased the performance of a small-RPC benchmark by about 10% on a
> host with 112 hyperthreads.
> 
> v2 : - Solved a race condition : sk_stream_alloc_skb() now makes sure the
>        prior clone has been freed.
>      - Really test rps_needed in sk_eat_skb() as claimed.
>      - Fixed rps_needed use in drivers/net/tun.c

Just a thought: would it make sense to flush the cache
in enter_memory_pressure?
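
Something along these lines, e.g. in tcp_enter_memory_pressure()
(hypothetical, reusing the cache fields sketched above):

	/* under memory pressure, return any cached skbs to the allocator */
	if (sk->sk_tx_skb_cache) {
		__kfree_skb(sk->sk_tx_skb_cache);
		sk->sk_tx_skb_cache = NULL;
	}
	if (sk->sk_rx_skb_cache) {
		__kfree_skb(sk->sk_rx_skb_cache);
		sk->sk_rx_skb_cache = NULL;
	}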


> Eric Dumazet (3):
>   net: convert rps_needed and rfs_needed to new static branch api
>   tcp: add one skb cache for tx
>   tcp: add one skb cache for rx
> 
>  drivers/net/tun.c          |  2 +-
>  include/linux/netdevice.h  |  4 +--
>  include/net/sock.h         | 13 ++++++++-
>  net/core/dev.c             | 10 +++----
>  net/core/net-sysfs.c       |  4 +--
>  net/core/sysctl_net_core.c |  8 +++---
>  net/ipv4/af_inet.c         |  4 +++
>  net/ipv4/tcp.c             | 54 +++++++++++++++++++-------------------
>  net/ipv4/tcp_ipv4.c        | 11 ++++++--
>  net/ipv6/tcp_ipv6.c        | 12 ++++++---
>  10 files changed, 75 insertions(+), 47 deletions(-)
> 
> -- 
> 2.21.0.225.g810b269d1ac-goog
