Message-Id: <20201202220945.911116-1-arjunroy.kdev@gmail.com>
Date:   Wed,  2 Dec 2020 14:09:37 -0800
From:   Arjun Roy <arjunroy.kdev@...il.com>
To:     davem@...emloft.net, netdev@...r.kernel.org
Cc:     arjunroy@...gle.com, edumazet@...gle.com, soheil@...gle.com
Subject: [net-next v2 0/8] Perf. optimizations for TCP Recv. Zerocopy

From: Arjun Roy <arjunroy@...gle.com>

This patchset contains several optimizations for TCP Recv. Zerocopy.

Note this is v2 of the patchset, fixing two 32-bit compilation errors
and a stylistic error.

Summarized:
1. A read payload may not be exactly page-aligned; there may be
"straggler" bytes that we cannot cleanly map into the caller's
address space. For this case, we allow the caller to provide a
"hybrid copy buffer" as an argument, turning
getsockopt(TCP_ZEROCOPY_RECEIVE) into a "hybrid" operation that
allows the caller to avoid a subsequent recvmsg() call to read the
stragglers (see the illustrative sketch after this list).

2. Similarly, for "small" read payloads that are either smaller than
a page, or small enough that remapping pages is not a performance
win, we allow the user to short-circuit the remapping operations
entirely and simply copy into the provided buffer.

Some of the patches in the middle of this set are refactors to support
this "short-circuiting" optimization.

3. We allow the user to provide a hint that a page zap operation (and
the accompanying TLB shootdown) may not be necessary for the region
into which the kernel will attempt to map pages. This lets us avoid
that expensive operation while holding the socket lock, which
provides a significant performance advantage.
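
As an illustration only (this sketch is not part of the patchset),
the snippet below shows roughly how a user-space caller might combine
the three knobs above. It assumes the uapi additions from this series
are the copybuf_address/copybuf_len and flags fields plus the
TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT flag, named as in the merged
include/uapi/linux/tcp.h, and that 'map' points at a region
previously mmap()ed against the socket (e.g. mmap(NULL, RCV_MAP_LEN,
PROT_READ, MAP_SHARED, fd, 0)):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/tcp.h>

#define RCV_MAP_LEN (1UL << 20)	/* size of the mmap()ed region, example value */

/* region_is_clean: caller asserts no stale mappings exist in the region. */
static ssize_t zc_receive(int fd, void *map, char *copybuf, int copybuf_len,
			  int region_is_clean)
{
	struct tcp_zerocopy_receive zc;
	socklen_t zc_len = sizeof(zc);

	memset(&zc, 0, sizeof(zc));
	zc.address = (__u64)(unsigned long)map;	/* where pages get mapped */
	zc.length = RCV_MAP_LEN;		/* in: room for mapped bytes */
	zc.copybuf_address = (__u64)(unsigned long)copybuf;
	zc.copybuf_len = copybuf_len;		/* in: room for copied bytes */
	if (region_is_clean)
		zc.flags = TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT;

	if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, &zc, &zc_len))
		return -1;

	/*
	 * On return, zc.length holds the number of bytes mapped at 'map'
	 * and zc.copybuf_len the number of bytes copied into 'copybuf'
	 * (the stragglers, or the whole payload for small reads).
	 */
	return (ssize_t)zc.length + zc.copybuf_len;
}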

With all of these changes combined, "medium"-sized receive traffic
(multiple tens to a few hundred KB) sees significant efficiency gains
when using TCP receive zerocopy instead of regular recvmsg(). For
example, with RPC-style traffic using 32KB messages, there is a
roughly 15% efficiency improvement when using zerocopy. Without these
changes, such messages see a roughly 60-70% efficiency reduction when
employing zerocopy.

Arjun Roy (8):
  net-zerocopy: Copy straggler unaligned data for TCP Rx. zerocopy.
  net-tcp: Introduce tcp_recvmsg_locked().
  net-zerocopy: Refactor skb frag fast-forward op.
  net-zerocopy: Refactor frag-is-remappable test.
  net-zerocopy: Fast return if inq < PAGE_SIZE
  net-zerocopy: Introduce short-circuit small reads.
  net-zerocopy: Set zerocopy hint when data is copied
  net-zerocopy: Defer vm zap unless actually needed.

 include/uapi/linux/tcp.h |   4 +
 net/ipv4/tcp.c           | 446 +++++++++++++++++++++++++++++----------
 2 files changed, 343 insertions(+), 107 deletions(-)

-- 
2.29.2.576.ga3fc446d84-goog
