Message-ID: <20240613001215.648829-1-mjguzik@gmail.com>
Date: Thu, 13 Jun 2024 02:12:13 +0200
From: Mateusz Guzik <mjguzik@...il.com>
To: torvalds@...ux-foundation.org
Cc: brauner@...nel.org,
	viro@...iv.linux.org.uk,
	jack@...e.cz,
	linux-kernel@...r.kernel.org,
	linux-fsdevel@...r.kernel.org,
	Mateusz Guzik <mjguzik@...il.com>
Subject: [PATCH 0/2] stop lockref from degrading to locked-only ops

... and speed up parallel lookups of the same terminal inode

Hi Linus. It is rather unclear who to mail concerning lockref. The last
time I had issues with it (the cpu_relax removal) you were quite
responsive and the resulting commit is the last one made there, so I
figured I would rope you in.

lockref has a corner case where it can degrade to always taking the
spinlock (triggerable while benchmarking). In the past I sometimes ran
into it and ignored the problem, but it started showing up a lot with my
dentry patch (outlined below). I think it is time to address it.
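
For context, here is roughly what the fast path does and where it gives
up. This is a simplified sketch modeled on lib/lockref.c, not the
literal kernel code: spin_value_unlocked() stands in for the real
arch_spin_value_unlocked() plumbing and the retry bound is made up.

struct lockref_sketch {
	union {
		u64 lock_count;	/* lock and count updated as one 64-bit word */
		struct {
			spinlock_t lock;
			int count;
		};
	};
};

static void lockref_get_sketch(struct lockref_sketch *lockref)
{
	struct lockref_sketch old, new;
	int retry = 100;

	old.lock_count = READ_ONCE(lockref->lock_count);
	while (spin_value_unlocked(old.lock)) {
		new = old;
		new.count++;
		if (try_cmpxchg64(&lockref->lock_count, &old.lock_count,
				  new.lock_count))
			return;		/* lockless fast path */
		if (!--retry)
			break;
	}
	/*
	 * The lock was observed held (or we ran out of retries): take it.
	 * While we hold it, every other CPU running the loop above sees
	 * the lock taken and falls back here as well, which is the
	 * cascade described further down.
	 */
	spin_lock(&lockref->lock);
	lockref->count++;
	spin_unlock(&lockref->lock);
}

The degradation is sticky, presumably because once enough CPUs are
queued on the spinlock, new arrivals essentially never observe it
unlocked.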

The dentry patch moves d_lockref out of an area used by RCU lookup.
This provides a significant speed-up when doing parallel stat on the
same file (will-it-scale, see [1] for the bench).

Results (see patch 2):
> before: 5038006
> after:  7842883 (+55%)

> One minor remark: the 'after' result is unstable, fluctuating in the
> range of ~7.8 million to ~9 million across different runs.
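
To illustrate why the layout change helps: RCU-walk only reads the
dentry, but every ref get/put on the terminal dentry writes d_lockref,
so if that word shares a cacheline with the fields the lookup reads,
the stores keep invalidating the readers' cachelines. The sketch below
is illustrative only -- it is not the real struct dentry, and the
actual patch reorders existing fields rather than adding explicit
alignment:

struct dentry_sketch {
	/* read-mostly fields consulted by RCU lookup */
	unsigned int		d_flags;
	seqcount_spinlock_t	d_seq;
	struct hlist_bl_node	d_hash;
	struct dentry		*d_parent;
	struct qstr		d_name;
	struct inode		*d_inode;

	/* written on every ref get/put; keep it off the read-mostly lines */
	struct lockref		d_lockref ____cacheline_aligned;
};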

The speed-up also means the vulnerable code executes more often per
second, making it more likely that a lockref operation will observe a
transient lock acquisition by something unrelated and fall back to
locking as well, starting the cascade.

If I leave the benchmark running, it eventually degrades to locked-only
operation; the stats look like this:
> min:417297 max:426249 total:8401766     <-- expected performance
> min:219024 max:221764 total:4398413     <-- some locking started
> min:62855 max:64949 total:1273712       <-- everyone degraded
> min:62472 max:64873 total:1268733
> min:62179 max:63843 total:1256375

Sometimes this takes literally a few seconds, other times a few minutes.

I don't know who transiently takes the d_lock and I don't think it is
particularly relevant. I did find that I can trivially trigger the
problem merely by issuing "ls /tmp" a few times. It does depend on the
filesystem: the problem shows up on tmpfs, but not on ext4 at least.

The bottom line is that if the d_lockref reordering lands and this
issue is not addressed, the lkp folks (or whoever else is benchmarking)
may trigger the bug and report a bogus regression.

Even if the d_lockref patch gets rejected, I would argue the problem
should be sorted out; it is eventually going to bite someone.

I wrote the easiest variant of the fix I could think of, but I'm not
married to any specific way of solving it.
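
The rough shape of the idea in patch 1, as a sketch rather than the
actual patch (using the same hypothetical lockref_sketch and
spin_value_unlocked() as above; how long to wait and when to give up is
policy the real code has to pick):

static void lockref_get_speculative_sketch(struct lockref_sketch *lockref)
{
	struct lockref_sketch old, new;
	int retry = 100;

	old.lock_count = READ_ONCE(lockref->lock_count);
	for (;;) {
		/*
		 * Instead of immediately falling back to spin_lock() when
		 * the lock is observed held, wait for the transient holder
		 * to release it and retry the lockless update.
		 */
		while (!spin_value_unlocked(old.lock)) {
			if (!--retry)
				goto fallback;
			cpu_relax();
			old.lock_count = READ_ONCE(lockref->lock_count);
		}
		new = old;
		new.count++;
		if (try_cmpxchg64(&lockref->lock_count, &old.lock_count,
				  new.lock_count))
			return;
		if (!--retry)
			goto fallback;
	}
fallback:
	spin_lock(&lockref->lock);
	lockref->count++;
	spin_unlock(&lockref->lock);
}

That way a transient holder no longer forces everyone behind it onto
the spinlock, so the cascade does not get started as long as the waits
stay short.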

If the vfs change is accepted, it needs to land after the lockref issue
is resolved, hence I'm mailing both in the same patchset.

[1] https://github.com/antonblanchard/will-it-scale/pull/35

Mateusz Guzik (2):
  lockref: speculatively spin waiting for the lock to be released
  vfs: move d_lockref out of the area used by RCU lookup

 include/linux/dcache.h |  7 +++-
 lib/lockref.c          | 85 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+), 1 deletion(-)

-- 
2.43.0

