Date: Tue, 9 Jan 2024 18:21:26 -0800
From: Josh Triplett <josh@...htriplett.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Kees Cook <kees@...nel.org>, Kees Cook <keescook@...omium.org>,
	linux-kernel@...r.kernel.org, Alexey Dobriyan <adobriyan@...il.com>
Subject: Re: [GIT PULL] execve updates for v6.8-rc1

I'm not going to spend a lot of time and energy attempting to argue that
spawnbench is a representative benchmark, or that scheduling overhead is
a relevant part of program launch.

Instead, here are some numbers from Linus's suggested benchmark
(modified to use execvpe, and to count down rather than up so it doesn't
need two arguments; modified version and benchmark driver script
attached; compiled with `musl-gcc -Wall -O3 -s -static`):
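The test program is essentially a self-exec loop: it re-execs itself via
execvpe() with a decremented counter in argv[1] and stops at zero. A
rough sketch of that shape (not the exact attached source; the names and
default count here are placeholders):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

extern char **environ;

int main(int argc, char **argv)
{
	/* Count down from argv[1]; stopping at 0 means each exec only
	 * needs a single argument. */
	int n = (argc > 1) ? atoi(argv[1]) : 0;
	char buf[16];
	char *args[] = { argv[0], buf, NULL };

	if (n <= 0)
		return 0;

	snprintf(buf, sizeof(buf), "%d", n - 1);

	/* execvpe() only walks $PATH when the file name has no slash;
	 * presumably the driver invokes the binary by bare name with
	 * its directory on PATH so every iteration hits the lookup. */
	execvpe(argv[0], args, environ);
	perror("execvpe");
	return 1;
}

(The attached `bench` script presumably sets up PATH and the 64 extra
environment variables and runs the loop under time(1), which is where
the numbers below come from.)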


With no patch (for-next/execve before the top two patches,
21ca59b365c091d583f36ac753eaa8baf947be6f):

=== With only PATH ===
0.32user 4.08system 0:04.55elapsed 96%CPU (0avgtext+0avgdata 1280maxresident)k
0inputs+0outputs (0major+1294599minor)pagefaults 0swaps

=== With 64 extra environment variables ===
0.29user 5.33system 0:05.76elapsed 97%CPU (0avgtext+0avgdata 1280maxresident)k
0inputs+0outputs (0major+1312477minor)pagefaults 0swaps


With my fastpath patch (for-next/execve,
0a8a952a75f2c5c140939c1616423e240677666c):

=== With only PATH ===
0.27user 2.40system 0:02.73elapsed 98%CPU (0avgtext+0avgdata 1152maxresident)k
0inputs+0outputs (0major+695002minor)pagefaults 0swaps

=== With 64 extra environment variables ===
0.29user 2.59system 0:02.94elapsed 98%CPU (0avgtext+0avgdata 1152maxresident)k
0inputs+0outputs (0major+712606minor)pagefaults 0swaps


With Linus's fastpath patch ("no patch" with Linus's applied, and the
followup -ENOMEM fix applied):

=== With only PATH ===
0.28user 2.44system 0:02.80elapsed 97%CPU (0avgtext+0avgdata 1152maxresident)k
0inputs+0outputs (0major+694706minor)pagefaults 0swaps

=== With 64 extra environment variables ===
0.29user 2.68system 0:03.06elapsed 97%CPU (0avgtext+0avgdata 1152maxresident)k
0inputs+0outputs (0major+712431minor)pagefaults 0swaps


I can reliably reproduce the differences between these three kernels;
the differences are well outside the noise. Both fastpaths are *much*
faster than the baseline (elapsed time drops from 4.55s/5.76s to under
3.1s in every case), but the double-lookup version is still
consistently faster than the single-lookup version (2.73s vs. 2.80s
with only PATH, 2.94s vs. 3.06s with the 64 extra environment
variables).

I'm sure it's *possible* to create a benchmark in which the
single-lookup version is faster. But in this benchmark of *just*
execvpe, it's still the case that double-lookup is faster, for *some*
reason.

I agree that it *shouldn't* be, and yet.

View attachment "t-linus-modified.c" of type "text/x-csrc" (312 bytes)

View attachment "bench" of type "text/plain" (336 bytes)
