Message-ID: <87plbwk5y2.fsf@oracle.com>
Date: Thu, 11 Sep 2025 14:57:57 -0700
From: Ankur Arora <ankur.a.arora@...cle.com>
To: Catalin Marinas <catalin.marinas@....com>
Cc: Ankur Arora <ankur.a.arora@...cle.com>, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
bpf@...r.kernel.org, arnd@...db.de, will@...nel.org,
peterz@...radead.org, akpm@...ux-foundation.org, mark.rutland@....com,
harisokn@...zon.com, cl@...two.org, ast@...nel.org, memxor@...il.com,
zhenglifeng1@...wei.com, xueshuai@...ux.alibaba.com,
joao.m.martins@...cle.com, boris.ostrovsky@...cle.com,
konrad.wilk@...cle.com
Subject: Re: [PATCH v5 5/5] rqspinlock: Use smp_cond_load_acquire_timeout()
Catalin Marinas <catalin.marinas@....com> writes:
> On Wed, Sep 10, 2025 at 08:46:55PM -0700, Ankur Arora wrote:
>> Switch out the conditional load interfaces used by rqspinlock
>> to smp_cond_load_acquire_timeout().
>> This interface handles the timeout check explicitly and does any
>> necessary amortization, so use check_timeout() directly.
>
> It's worth mentioning that the default smp_cond_load_acquire_timeout()
> implementation (without hardware support) only spins 200 times instead
> of 16K times in the rqspinlock code. That's probably fine but it would
> be good to have confirmation from Kumar or Alexei.
As Kumar mentions, I'll redefine the count locally in rqspinlock.c to 16k.
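For reference, the generic fallback amortizes the timeout check along these
lines: the variable is re-read on every poll, but the comparatively expensive
timeout expression is only evaluated once every 'spin' polls. The sketch below
is illustrative only (names and shape are placeholders, not the kernel macro),
just to show where the 200 vs. 16k count matters:

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * Illustrative sketch, not the kernel macro: poll *ptr until it reaches
   * 'want', but only evaluate the (expensive) timeout check once every
   * 'spin' polls.
   */
  static uint32_t cond_load_timeout(volatile uint32_t *ptr, uint32_t want,
                                    bool (*timed_out)(void), unsigned int spin)
  {
          unsigned int n = 0;
          uint32_t val;

          for (;;) {
                  val = *ptr;             /* re-read the variable */
                  if (val == want)        /* condition satisfied */
                          break;
                  if (++n < spin)         /* amortize the timeout check */
                          continue;
                  if (timed_out())        /* periodic timeout check */
                          break;
                  n = 0;
          }
          return val;
  }

With spin == 200 (the generic default) check_timeout() ends up being evaluated
far more often than with the 16k amortization rqspinlock uses today, hence the
local override.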
>> diff --git a/kernel/bpf/rqspinlock.c b/kernel/bpf/rqspinlock.c
>> index 5ab354d55d82..4d2c12d131ae 100644
>> --- a/kernel/bpf/rqspinlock.c
>> +++ b/kernel/bpf/rqspinlock.c
> [...]
>> @@ -313,11 +307,8 @@ EXPORT_SYMBOL_GPL(resilient_tas_spin_lock);
>> */
>> static DEFINE_PER_CPU_ALIGNED(struct qnode, rqnodes[_Q_MAX_NODES]);
>>
>> -#ifndef res_smp_cond_load_acquire
>> -#define res_smp_cond_load_acquire(v, c) smp_cond_load_acquire(v, c)
>> -#endif
>> -
>> -#define res_atomic_cond_read_acquire(v, c) res_smp_cond_load_acquire(&(v)->counter, (c))
>> +#define res_atomic_cond_read_acquire_timeout(v, c, t) \
>> + smp_cond_load_acquire_timeout(&(v)->counter, (c), (t))
>
> BTW, we have atomic_cond_read_acquire() which accesses the 'counter' of
> an atomic_t. You might as well add an atomic_cond_read_acquire_timeout()
> in atomic.h than open-code the atomic_t internals here.
Good point. That also keeps it close to the locking/qspinlock.c
use of atomic_cond_read_acquire().

Will add atomic_cond_read_acquire_timeout() (and the other variants) in
include/linux/atomic.h.
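Roughly along these lines, mirroring how atomic_cond_read_acquire() forwards
to smp_cond_load_acquire() (untested sketch; the final placement and the set
of variants may differ):

  /*
   * Sketch for include/linux/atomic.h, next to atomic_cond_read_acquire();
   * variant names follow the discussion above and may change.
   */
  #define atomic_cond_read_acquire_timeout(v, c, t) \
          smp_cond_load_acquire_timeout(&(v)->counter, (c), (t))
  #define atomic_cond_read_relaxed_timeout(v, c, t) \
          smp_cond_load_relaxed_timeout(&(v)->counter, (c), (t))

That lets rqspinlock.c call atomic_cond_read_acquire_timeout() on the lock
word directly instead of open-coding the ->counter access.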
> Otherwise the patch looks fine to me, much simpler than the previous
> attempt.
>
> Reviewed-by: Catalin Marinas <catalin.marinas@....com>
Thanks!
--
ankur