Message-ID: <CAADnVQKYoE85HFAOE5OBFpKbXej=h12m4DVvHuPViJSjAncK4A@mail.gmail.com>
Date: Mon, 15 Dec 2025 13:40:06 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Ankur Arora <ankur.a.arora@...cle.com>
Cc: LKML <linux-kernel@...r.kernel.org>, linux-arch <linux-arch@...r.kernel.org>, 
	linux-arm-kernel <linux-arm-kernel@...ts.infradead.org>, 
	Linux Power Management <linux-pm@...r.kernel.org>, bpf <bpf@...r.kernel.org>, 
	Arnd Bergmann <arnd@...db.de>, Catalin Marinas <catalin.marinas@....com>, Will Deacon <will@...nel.org>, 
	Peter Zijlstra <peterz@...radead.org>, Andrew Morton <akpm@...ux-foundation.org>, 
	Mark Rutland <mark.rutland@....com>, harisokn@...zon.com, 
	Christoph Lameter <cl@...two.org>, Alexei Starovoitov <ast@...nel.org>, "Rafael J. Wysocki" <rafael@...nel.org>, 
	Daniel Lezcano <daniel.lezcano@...aro.org>, Kumar Kartikeya Dwivedi <memxor@...il.com>, zhenglifeng1@...wei.com, 
	xueshuai@...ux.alibaba.com, joao.m.martins@...cle.com, 
	Boris Ostrovsky <boris.ostrovsky@...cle.com>, konrad.wilk@...cle.com
Subject: Re: [PATCH v8 10/12] bpf/rqspinlock: Use smp_cond_load_acquire_timeout()

On Sun, Dec 14, 2025 at 8:51 PM Ankur Arora <ankur.a.arora@...cle.com> wrote:
>
>  /**
>   * resilient_queued_spin_lock_slowpath - acquire the queued spinlock
>   * @lock: Pointer to queued spinlock structure
> @@ -415,7 +415,9 @@ int __lockfunc resilient_queued_spin_lock_slowpath(rqspinlock_t *lock, u32 val)
>          */
>         if (val & _Q_LOCKED_MASK) {
>                 RES_RESET_TIMEOUT(ts, RES_DEF_TIMEOUT);
> -               res_smp_cond_load_acquire(&lock->locked, !VAL || RES_CHECK_TIMEOUT(ts, timeout_err, _Q_LOCKED_MASK) < 0);
> +               smp_cond_load_acquire_timeout(&lock->locked, !VAL,
> +                                             (timeout_err = clock_deadlock(lock, _Q_LOCKED_MASK, &ts)),
> +                                             ts.duration);

I'm pretty sure we already discussed this and pointed out that
this approach is not acceptable.
We cannot call ktime_get_mono_fast_ns() first.
That's why RES_CHECK_TIMEOUT() exists and it does
if (!(ts).spin++)
before doing the first check_timeout() that will do ktime_get_mono_fast_ns().
The above is a performance-critical lock acquisition path where a
pending waiter spins on the lock word waiting for the owner to
release the lock.
Adding an unconditional ktime_get_mono_fast_ns() will destroy
performance for quick critical sections.
