Message-Id: <4c87bbf8-00a3-4666-b844-916edd678305@app.fastmail.com>
Date: Tue, 28 Oct 2025 10:42:49 +0100
From: "Arnd Bergmann" <arnd@...db.de>
To: "Ankur Arora" <ankur.a.arora@...cle.com>, linux-kernel@...r.kernel.org,
 Linux-Arch <linux-arch@...r.kernel.org>,
 linux-arm-kernel@...ts.infradead.org, linux-pm@...r.kernel.org,
 bpf@...r.kernel.org
Cc: "Catalin Marinas" <catalin.marinas@....com>,
 "Will Deacon" <will@...nel.org>, "Peter Zijlstra" <peterz@...radead.org>,
 "Andrew Morton" <akpm@...ux-foundation.org>,
 "Mark Rutland" <mark.rutland@....com>,
 "Haris Okanovic" <harisokn@...zon.com>,
 "Christoph Lameter (Ampere)" <cl@...two.org>,
 "Alexei Starovoitov" <ast@...nel.org>,
 "Rafael J . Wysocki" <rafael@...nel.org>,
 "Daniel Lezcano" <daniel.lezcano@...aro.org>,
 "Kumar Kartikeya Dwivedi" <memxor@...il.com>, zhenglifeng1@...wei.com,
 xueshuai@...ux.alibaba.com, "Joao Martins" <joao.m.martins@...cle.com>,
 "Boris Ostrovsky" <boris.ostrovsky@...cle.com>,
 "Konrad Rzeszutek Wilk" <konrad.wilk@...cle.com>
Subject: Re: [RESEND PATCH v7 1/7] asm-generic: barrier: Add
 smp_cond_load_relaxed_timeout()

On Tue, Oct 28, 2025, at 06:31, Ankur Arora wrote:

> + */
> +#ifndef smp_cond_load_relaxed_timeout
> +#define smp_cond_load_relaxed_timeout(ptr, cond_expr, time_check_expr)	\
> +({									\
> +	typeof(ptr) __PTR = (ptr);					\
> +	__unqual_scalar_typeof(*ptr) VAL;				\
> +	u32 __n = 0, __spin = SMP_TIMEOUT_POLL_COUNT;			\
> +									\
> +	for (;;) {							\
> +		VAL = READ_ONCE(*__PTR);				\
> +		if (cond_expr)						\
> +			break;						\
> +		cpu_poll_relax(__PTR, VAL);				\
> +		if (++__n < __spin)					\
> +			continue;					\
> +		if (time_check_expr) {					\
> +			VAL = READ_ONCE(*__PTR);			\
> +			break;						\
> +		}							\
> +		__n = 0;						\
> +	}								\
> +	(typeof(*ptr))VAL;						\
> +})
> +#endif

I'm trying to think of ideas for how this could be done on arm64
with FEAT_WFxT in a way that doesn't hurt other architectures.

The best idea I've come up with is to change that inner loop
to combine the cpu_poll_relax() with the timecheck and then
define the 'time_check_expr' so it has to return an approximate
(ceiling) number of nanoseconds of remaining time or zero if
expired.

The FEAT_WFXT version would then look something like

static inline void __cmpwait_u64_timeout(volatile u64 *ptr, unsigned long val, u64 ns)
{
	unsigned long tmp;

	asm volatile(
	"	sev\n"
	"	wfe\n"
	"	ldxr	%0, %1\n"
	"	eor	%0, %0, %2\n"
	"	cbnz	%0, 1f\n"
	"	wfet	%3\n"
	"1:"
	: "=&r" (tmp), "+Q" (*ptr)
	: "r" (val), "r" (ns));
}
#define cpu_poll_relax_timeout_wfet(__PTR, VAL, TIMECHECK)	\
({								\
	u64 __t = (TIMECHECK);					\
	if (__t)						\
		__cmpwait_u64_timeout(__PTR, VAL, __t);		\
})

while the 'wfe' version would continue to do the timecheck after the
wait.

I have two lesser concerns with the generic definition here:

- having both a timeout and a spin counter in the same loop
  feels redundant and error-prone, as the behavior in practice
  would likely depend a lot on the platform. What is the reason
  for keeping the counter if we already have a fixed timeout
  condition?

- I generally dislike type-agnostic macros like this one:
  they add a lot of extra complexity that I feel could be
  completely avoided if we made explicitly 32-bit and 64-bit
  wide versions of these macros. We probably won't be able
  to resolve this as part of your series, but ideally I'd like
  to have explicitly-typed versions of cmpxchg(), smp_load_acquire()
  and all the related ones, the same way we do for atomic_*()
  and atomic64_*().

       Arnd
