Message-Id: <20190322001444.182463-1-edumazet@google.com>
Date:   Thu, 21 Mar 2019 17:14:41 -0700
From:   Eric Dumazet <edumazet@...gle.com>
To:     "David S . Miller" <davem@...emloft.net>
Cc:     netdev <netdev@...r.kernel.org>,
        Eric Dumazet <edumazet@...gle.com>,
        Soheil Hassas Yeganeh <soheil@...gle.com>,
        Willem de Bruijn <willemb@...gle.com>,
        Florian Westphal <fw@...len.de>,
        Tom Herbert <tom@...bertland.com>,
        Eric Dumazet <eric.dumazet@...il.com>
Subject: [PATCH v2 net-next 0/3] tcp: add rx/tx cache to reduce lock contention

On hosts with many cpus, we can observe very serious contention
on the spinlocks used in the mm slab layer.

The following can happen quite often:

1) TX path
   sendmsg() allocates one (fclone) skb on CPU A and sends a clone.
   The ACK is received on CPU B and consumes the skb that was in the
   retransmit queue.

2) RX path
   The network driver allocates an skb on CPU C.
   recvmsg() happens on CPU D, freeing the skb after it has been
   delivered to user space.

In both cases, we are hitting the asymmetric alloc/free pattern
for which slab has to drain alien caches. At 8 Mpps, this represents
16 million alloc/free operations per second and carries a huge penalty.
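
As a userspace analogue of this pattern (plain pthreads and malloc,
purely illustrative, not kernel code): one thread allocates a batch of
buffers and another thread frees them, so the allocator has to hand
every buffer back across threads, and typically across cpus:

    #include <pthread.h>
    #include <stdlib.h>

    #define BATCH 1024

    static void *slots[BATCH];

    static void *producer(void *arg)    /* runs on one cpu */
    {
        (void)arg;
        for (int i = 0; i < BATCH; i++)
            slots[i] = malloc(256);     /* allocate here... */
        return NULL;
    }

    static void *consumer(void *arg)    /* typically another cpu */
    {
        (void)arg;
        for (int i = 0; i < BATCH; i++)
            free(slots[i]);             /* ...free over there */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, producer, NULL);
        pthread_join(a, NULL);  /* fill completely before freeing */
        pthread_create(&b, NULL, consumer, NULL);
        pthread_join(b, NULL);
        return 0;
    }

(The scheduler decides actual cpu placement; pinning the threads with
pthread_setaffinity_np() would make the cross-cpu free deterministic.)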

In an interesting experiment, I tried to use a single kmem_cache for
all skbs, i.e. in skb_init():

    skbuff_fclone_cache = skbuff_head_cache =
            kmem_cache_create("skbuff_fclone_cache",
                              sizeof(struct sk_buff_fclones),
                              0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, NULL);

and most of the contention disappeared, since cpus could better use
their local slab per-cpu cache.

But we can actually do better, as the following patches show.

TX: at ACK time, no longer free the skb, but put it back in a tcp socket
    cache so that the next sendmsg() can reuse it immediately.

RX: at recvmsg() time, do not free the skb, but put it in a tcp socket
    cache so that it can be freed by the cpu feeding the incoming packets
    in BH (see the sketch below).
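
A minimal sketch of the single-slot reuse idea, as a self-contained
userspace analogue (one_slot_cache and the helper names are illustrative;
the actual patches stash one sk_buff per socket and per direction):

    #include <stdlib.h>

    struct one_slot_cache {
        void *slot;
    };

    static void *cache_alloc(struct one_slot_cache *c, size_t size)
    {
        void *p = c->slot;

        if (p) {                 /* fast path: reuse the cached buffer */
            c->slot = NULL;      /* (a real cache must also check size) */
            return p;
        }
        return malloc(size);     /* slow path: hit the real allocator */
    }

    static void cache_free(struct one_slot_cache *c, void *p)
    {
        if (!c->slot)
            c->slot = p;         /* keep at most one buffer for reuse */
        else
            free(p);
    }

    int main(void)
    {
        struct one_slot_cache c = { NULL };
        void *p = cache_alloc(&c, 256);

        cache_free(&c, p);              /* stashed, not freed */
        free(cache_alloc(&c, 256));     /* reused: no malloc() call */
        return 0;
    }

Keeping at most one buffer bounds the memory held by the cache, and the
common request/response pattern hits the fast path almost every time.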

This increased the performance of a small RPC benchmark by about 10% on
a host with 112 hyperthreads.

v2: - Solved a race condition: sk_stream_alloc_skb() now makes sure the
      prior clone has been freed.
    - Really test rps_needed in sk_eat_skb(), as claimed.
    - Fixed rps_needed use in drivers/net/tun.c.

Eric Dumazet (3):
  net: convert rps_needed and rfs_needed to new static branch api
  tcp: add one skb cache for tx
  tcp: add one skb cache for rx
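
For reference, the first patch follows the usual conversion pattern from
the old static key API to the newer static branch API (a schematic
kernel-style sketch, not the actual diff; handle_rps() stands in for the
real RPS processing):

    /* before: old static key API */
    struct static_key rps_needed __read_mostly;

    if (static_key_false(&rps_needed))
        handle_rps();

    /* after: new static branch API */
    DEFINE_STATIC_KEY_FALSE(rps_needed);

    if (static_branch_unlikely(&rps_needed))
        handle_rps();

Both compile down to a branch that is patched at runtime and is
effectively free while RPS is disabled; the newer API just makes the
default value and the expected branch direction explicit.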

 drivers/net/tun.c          |  2 +-
 include/linux/netdevice.h  |  4 +--
 include/net/sock.h         | 13 ++++++++-
 net/core/dev.c             | 10 +++----
 net/core/net-sysfs.c       |  4 +--
 net/core/sysctl_net_core.c |  8 +++---
 net/ipv4/af_inet.c         |  4 +++
 net/ipv4/tcp.c             | 54 +++++++++++++++++++-------------------
 net/ipv4/tcp_ipv4.c        | 11 ++++++--
 net/ipv6/tcp_ipv6.c        | 12 ++++++---
 10 files changed, 75 insertions(+), 47 deletions(-)

-- 
2.21.0.225.g810b269d1ac-goog
