Message-Id: <20201112190205.633640-1-arjunroy.kdev@gmail.com>
Date:   Thu, 12 Nov 2020 11:01:57 -0800
From:   Arjun Roy <arjunroy.kdev@...il.com>
To:     davem@...emloft.net, netdev@...r.kernel.org
Cc:     arjunroy@...gle.com, edumazet@...gle.com, soheil@...gle.com
Subject: [net-next 0/8] Perf. optimizations for TCP Recv. Zerocopy

From: Arjun Roy <arjunroy@...gle.com>

This patchset contains several optimizations for TCP Recv. Zerocopy.

Summarized:
1. A read payload may not be exactly page aligned; there can be
"straggler" bytes that we cannot cleanly map into the caller's address
space. For this case, we allow the caller to provide a "hybrid copy
buffer" as an argument, turning getsockopt(TCP_ZEROCOPY_RECEIVE) into a
"hybrid" operation that lets the caller avoid a subsequent recvmsg()
call to read the stragglers.
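
To illustrate, a userspace caller might drive this hybrid receive
roughly as sketched below. The local struct layout and the copybuf_*
field names are assumptions taken from this cover letter, not the
authoritative UAPI; see the tcp_zerocopy_receive definition in
include/uapi/linux/tcp.h once the series is applied. Typically the
caller mmap()s the target region once up front and reuses it across
calls.

/* Sketch of the hybrid receive path: map full pages of payload, and
 * have the kernel copy any unaligned straggler bytes into a small
 * user-supplied buffer so that no follow-up recvmsg() is needed.
 * The struct below mirrors the layout assumed from this cover letter.
 */
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_ZEROCOPY_RECEIVE
#define TCP_ZEROCOPY_RECEIVE 35
#endif

struct tcp_zerocopy_receive_sketch {
	uint64_t address;         /* in: base of a PROT_READ mmap()ed region */
	uint32_t length;          /* in: bytes to map; out: bytes mapped     */
	uint32_t recv_skip_hint;  /* out: bytes left for recvmsg()           */
	uint32_t inq;             /* out: bytes remaining in receive queue   */
	int32_t  err;             /* out: pending socket error, if any       */
	uint64_t copybuf_address; /* in: buffer for straggler/small reads    */
	int32_t  copybuf_len;     /* in: buffer size; out: bytes copied      */
};

/* Returns total bytes consumed (mapped plus copied), or -1 on error. */
static ssize_t zerocopy_hybrid_read(int fd, void *map_region, size_t map_len,
				    void *copybuf, size_t copybuf_len)
{
	struct tcp_zerocopy_receive_sketch zc;
	socklen_t zc_len = sizeof(zc);

	memset(&zc, 0, sizeof(zc));
	zc.address = (uint64_t)(uintptr_t)map_region;
	zc.length = (uint32_t)map_len;
	zc.copybuf_address = (uint64_t)(uintptr_t)copybuf;
	zc.copybuf_len = (int32_t)copybuf_len;

	if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len))
		return -1;

	/* zc.length bytes are now mapped at map_region; any stragglers
	 * were copied into copybuf (count returned in zc.copybuf_len). */
	return (ssize_t)zc.length +
	       (zc.copybuf_len > 0 ? (ssize_t)zc.copybuf_len : 0);
}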

2. Similarly, for "small" read payloads that are below the size of a
page, or small enough that remapping pages is not a performance win, we
allow the user to short-circuit the remapping operations entirely and
simply copy into the provided buffer.

Some of the patches in the middle of this set are refactors to support
this "short-circuiting" optimization.

3. We allow the user to provide a hint that a page zap operation (and
the accompanying TLB shootdown) is not necessary for the region the
kernel will attempt to map pages into. This lets us avoid that
expensive operation while holding the socket lock, which is a
significant performance advantage.
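
Only the caller knows whether the target region is, for instance, a
fresh mapping with nothing in it that would need zapping, so the hint
is expressed by the caller. The sketch below shows how such a hint
might be passed, extending the struct from the earlier sketch; the
flags field and the flag name are placeholders assumed from this cover
letter, not the series' UAPI.

#include <stdbool.h>
#include <stdint.h>

/* Placeholder flag name for illustration; the real flag (if any) is
 * whatever the series defines in include/uapi/linux/tcp.h. */
#define ZC_SKETCH_FLAG_SKIP_ZAP (1u << 0)

struct tcp_zerocopy_receive_sketch_v2 {
	uint64_t address;
	uint32_t length;
	uint32_t recv_skip_hint;
	uint32_t inq;
	int32_t  err;
	uint64_t copybuf_address;
	int32_t  copybuf_len;
	uint32_t flags;           /* in: e.g. skip-zap hint */
};

/* A caller that knows the mapping region currently has no pages that
 * would need zapping (e.g. a fresh, untouched anonymous mmap()) can
 * hint that the kernel may skip the pre-map zap and TLB shootdown. */
static void set_zap_hint(struct tcp_zerocopy_receive_sketch_v2 *zc,
			 bool region_is_fresh)
{
	zc->flags = region_is_fresh ? ZC_SKETCH_FLAG_SKIP_ZAP : 0;
}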

With all of these changes combined, "medium" sized receive traffic
(several tens to a few hundred KB) sees significant efficiency gains
when using TCP receive zerocopy instead of regular recvmsg(). For
example, with RPC-style traffic using 32KB messages, zerocopy provides
a roughly 15% efficiency improvement over recvmsg(). Without these
changes, zerocopy is roughly 60-70% less efficient than recvmsg() for
such messages.

Arjun Roy (8):
  tcp: Copy straggler unaligned data for TCP Rx. zerocopy.
  tcp: Introduce tcp_recvmsg_locked().
  tcp: Refactor skb frag fast-forward op for recv zerocopy.
  tcp: Refactor frag-is-remappable test for recv zerocopy.
  tcp: Fast return if inq < PAGE_SIZE for recv zerocopy.
  tcp: Introduce short-circuit small reads for recv zerocopy.
  tcp: Set zerocopy hint when data is copied.
  tcp: Defer vm zap unless actually needed for recv zerocopy.

 include/uapi/linux/tcp.h |   4 +
 net/ipv4/tcp.c           | 437 +++++++++++++++++++++++++++++----------
 2 files changed, 334 insertions(+), 107 deletions(-)

-- 
2.29.2.222.g5d2a92d10f8-goog
