Open Source and information security mailing list archives
Date:   Sat, 16 Nov 2019 13:13:09 -0800 (PST)
From:   David Miller <davem@...emloft.net>
To:     edumazet@...gle.com
Cc:     netdev@...r.kernel.org, eric.dumazet@...il.com, soheil@...gle.com,
        arjunroy@...gle.com
Subject: Re: [PATCH net-next] selftests: net: avoid ptl lock contention in
 tcp_mmap

From: Eric Dumazet <edumazet@...gle.com>
Date: Fri, 15 Nov 2019 17:55:54 -0800

> tcp_mmap is used as a reference program for TCP rx zerocopy,
> so it is important to point out some potential issues.
> 
> If multiple threads are concurrently using getsockopt(...
> TCP_ZEROCOPY_RECEIVE), there is a chance the low-level mm
> functions contend on a shared ptl lock if the vmas are arbitrarily placed.
> 
> Instead of letting the mm layer place the chunks back to back,
> this patch enforces an alignment so that each thread uses
> a different ptl lock.
> 
> Performance measured on a 100 Gbit NIC, with 8 tcp_mmap clients
> launched at the same time:
> 
> $ for f in {1..8}; do ./tcp_mmap -H 2002:a05:6608:290:: & done
> 
> In the following run, we reproduce the old behavior by requesting no alignment:
> 
> $ tcp_mmap -sz -C $((128*1024)) -a 4096
 ...
> With the new behavior (automatic alignment based on Hugepagesize),
> we can see the system overhead being dramatically reduced.
> 
> $ tcp_mmap -sz -C $((128*1024))
 ...
> Signed-off-by: Eric Dumazet <edumazet@...gle.com>

Applied, thanks Eric.
