Message-ID: <1c9cb6b4-f7ba-4480-b94c-dc2fa0f11949@gmail.com>
Date: Wed, 25 Dec 2024 12:15:15 +0900
From: Akira Yokosawa <akiyks@...il.com>
To: maobibo@...ngson.cn
Cc: boqun.feng@...il.com, linux-kernel@...r.kernel.org, longman@...hat.com,
mingo@...hat.com, will@...nel.org, Akira Yokosawa <akiyks@...il.com>
Subject: Re: [PATCH] locking/pvqspinlock: Use try_cmpxchg() in pv_unhash
Hi,
[With my LKMM reviewer hat on]
On Mon, 23 Dec 2024 15:47:04 +0800, Bibo Mao wrote:
> We ported pv spinlock support to an old Linux kernel on the LoongArch
> platform, and some stress tests triggered an error.  The error report,
> abbreviated, looks like this:
> kernel BUG at kernel/locking/qspinlock_paravirt.h:261!
> Oops - BUG[#1]:
> CPU: 1 PID: 6613 Comm: pidof Not tainted 4.19.190+ #43
> Hardware name: Loongson KVM, BIOS 0.0.0 02/06/2015
> ra: 9000000000509cfc do_task_stat+0x29c/0xaf0
> ERA: 9000000000291308 __pv_queued_spin_unlock_slowpath+0xf8/0x100
> CRMD: 000000b0 (PLV0 -IE -DA +PG DACF=CC DACM=CC -WE)
> PRMD: 00000000 (PPLV0 -PIE -PWE)
> ...
> Call Trace:
> [<9000000000291308>] __pv_queued_spin_unlock_slowpath+0xf8/0x100
> [<9000000000509cf8>] do_task_stat+0x298/0xaf0
> [<9000000000502570>] proc_single_show+0x60/0xe0
>
> The problem is that memory accesses can be reordered on the LoongArch
> platform, and there is contention between pv_unhash() and pv_hash():
>
> CPU0 pv_unhash:                                  CPU1 pv_hash:
>
> for_each_hash_entry(he, offset, hash) {          for_each_hash_entry(he, offset, hash) {
>         if (READ_ONCE(he->lock) == lock) {               struct qspinlock *old = NULL;
>                 node = READ_ONCE(he->node);
>                 WRITE_ONCE(he->lock, NULL);
>
> On the LoongArch platform, which is weakly ordered, the execution order
> may be switched like this:
>
>> WRITE_ONCE(he->lock, NULL);
>
>         if (try_cmpxchg(&he->lock, &old, lock)) {
>                 WRITE_ONCE(he->node, node);
>                 return &he->lock;
>
> CPU1's pv_hash() is executing and observes that he->lock is now NULL,
> so its try_cmpxchg() succeeds and it writes he->node with the new
> lock's node.
>
>> node = READ_ONCE(he->node);
>
> The delayed READ_ONCE(he->node) on CPU0 then returns the new lock's
> node rather than its own.
>
> Here, the READ_ONCE()/WRITE_ONCE() pair is replaced with try_cmpxchg().
>
> Signed-off-by: Bibo Mao <maobibo@...ngson.cn>
> ---
>  kernel/locking/qspinlock_paravirt.h | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
> index dc1cb90e3644..cc4eabce092d 100644
> --- a/kernel/locking/qspinlock_paravirt.h
> +++ b/kernel/locking/qspinlock_paravirt.h
> @@ -240,9 +240,10 @@ static struct pv_node *pv_unhash(struct qspinlock *lock)
>  	struct pv_node *node;
>  
>  	for_each_hash_entry(he, offset, hash) {
> -		if (READ_ONCE(he->lock) == lock) {
> +		struct qspinlock *old = lock;
> +
> +		if (try_cmpxchg(&he->lock, &old, NULL)) {
>  			node = READ_ONCE(he->node);
> -			WRITE_ONCE(he->lock, NULL);
>  			return node;
>  		}
>  	}
But this change might delay the load of he->node until *after* he->lock
returns to NULL.

Let's get back to the current code (with labels added to the ONCE accesses):
        for_each_hash_entry(he, offset, hash) {
                if (READ_ONCE(he->lock) == lock) {      /* A */
                        node = READ_ONCE(he->node);     /* B */
                        WRITE_ONCE(he->lock, NULL);     /* C */
                        return node;
                }
        }
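
To make the reported reordering concrete, here is a litmus-test sketch
for herd7 with the LKMM (my own modeling assumptions: "lock" and "node"
are plain ints standing for he->lock and he->node of a single hash
entry, and P1 approximates a pv_hash() that grabs the slot once it
observes he->lock == NULL):

C pv-unhash-plain

{
        lock = 1;
        node = 1;
}

P0(int *lock, int *node)
{
        int r0;
        int r1;

        r0 = READ_ONCE(*lock);          /* A */
        r1 = READ_ONCE(*node);          /* B */
        WRITE_ONCE(*lock, 0);           /* C */
}

P1(int *lock, int *node)
{
        int r0;

        r0 = cmpxchg(lock, 0, 2);       /* pv_hash() claims the freed slot */
        if (r0 == 0)
                WRITE_ONCE(*node, 2);   /* ... and publishes the new node */
}

exists (0:r0=1 /\ 1:r0=0 /\ 0:r1=2)

If I'm reading the model right, herd7 says "Sometimes" here: nothing
orders B against C, so B can read the new owner's node, which matches
the reported crash.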
It looks to me like you want to guarantee the ordering A -> B -> C.
Your change effectively provides the ordering [A C] -> B, where [A C] is
done in a single atomic RMW op.
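
Plugging your patch into the same sketch (same modeling assumptions,
with A and C fused into one fully ordered RMW) still allows the bad
outcome:

C pv-unhash-try-cmpxchg

{
        lock = 1;
        node = 1;
}

P0(int *lock, int *node)
{
        int r0;
        int r1;

        r0 = cmpxchg(lock, 1, 0);       /* [A C] */
        r1 = READ_ONCE(*node);          /* B */
}

P1(int *lock, int *node)
{
        int r0;

        r0 = cmpxchg(lock, 0, 2);       /* pv_hash() claims the freed slot */
        if (r0 == 0)
                WRITE_ONCE(*node, 2);   /* ... and publishes the new node */
}

exists (0:r0=1 /\ 1:r0=0 /\ 0:r1=2)

This should still be "Sometimes": once he->lock has become NULL, the
slot is up for grabs, and the delayed B can legitimately read the new
owner's node.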
For the ordering A -> B -> C, I'd change the code to:
        for_each_hash_entry(he, offset, hash) {
                if (smp_load_acquire(&he->lock) == lock) {      /* A */
                        node = READ_ONCE(he->node);             /* B */
                        smp_store_release(&he->lock, NULL);     /* C */
                        return node;
                }
        }
Note: A -> B is load-to-load ordering, which is not provided by a
control dependency.  Upgrading A to an ACQUIRE is a lightweight option.
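
For completeness, the acquire/release variant of the sketch should turn
the result into "Never": B is now ordered before the release C, and P1
can publish its node only after observing C.

C pv-unhash-acq-rel

{
        lock = 1;
        node = 1;
}

P0(int *lock, int *node)
{
        int r0;
        int r1;

        r0 = smp_load_acquire(lock);    /* A */
        r1 = READ_ONCE(*node);          /* B */
        smp_store_release(lock, 0);     /* C */
}

P1(int *lock, int *node)
{
        int r0;

        r0 = cmpxchg(lock, 0, 2);       /* pv_hash() claims the freed slot */
        if (r0 == 0)
                WRITE_ONCE(*node, 2);   /* ... and publishes the new node */
}

exists (0:r0=1 /\ 1:r0=0 /\ 0:r1=2)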
I'm expecting Will and/or Boqun to chime in after the holiday break.
Thanks, Akira
>
> base-commit: 48f506ad0b683d3e7e794efa60c5785c4fdc86fa
> --
> 2.39.3
>