Message-ID: <YsK4Z9w0tFtgkni8@hirez.programming.kicks-ass.net>
Date: Mon, 4 Jul 2022 11:52:39 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: guoren@...nel.org
Cc: palmer@...osinc.com, arnd@...db.de, mingo@...hat.com,
will@...nel.org, longman@...hat.com, boqun.feng@...il.com,
linux-riscv@...ts.infradead.org, linux-arch@...r.kernel.org,
linux-kernel@...r.kernel.org, Guo Ren <guoren@...ux.alibaba.com>
Subject: Re: [PATCH V7 1/5] asm-generic: ticket-lock: Remove unnecessary
atomic_read
On Tue, Jun 28, 2022 at 04:17:03AM -0400, guoren@...nel.org wrote:
> From: Guo Ren <guoren@...ux.alibaba.com>
>
> Remove the unnecessary atomic_read in arch_spin_value_unlocked(lock),
> because the value is already contained in lock. This prevents
> arch_spin_value_unlocked() from contending on the spin_lock data again.
>
> Signed-off-by: Guo Ren <guoren@...ux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@...nel.org>
> Cc: Peter Zijlstra (Intel) <peterz@...radead.org>
> Cc: Arnd Bergmann <arnd@...db.de>
> Cc: Palmer Dabbelt <palmer@...osinc.com>
> ---
> include/asm-generic/spinlock.h | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
> index fdfebcb050f4..f1e4fa100f5a 100644
> --- a/include/asm-generic/spinlock.h
> +++ b/include/asm-generic/spinlock.h
> @@ -84,7 +84,9 @@ static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
>  
>  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>  {
> -	return !arch_spin_is_locked(&lock);
> +	u32 val = lock.counter;
> +
> +	return ((val >> 16) == (val & 0xffff));
>  }
Wouldn't the right thing be to flip arch_spin_is_locked() and
arch_spin_value_unlocked() ?
diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index fdfebcb050f4..63ab4da262f2 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -68,23 +68,25 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 	smp_store_release(ptr, (u16)val + 1);
 }
 
-static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
+static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
 	u32 val = atomic_read(lock);
 
-	return ((val >> 16) != (val & 0xffff));
+	return (s16)((val >> 16) - (val & 0xffff)) > 1;
 }
 
-static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
-	u32 val = atomic_read(lock);
+	u32 val = lock.counter;
 
-	return (s16)((val >> 16) - (val & 0xffff)) > 1;
+	return ((val >> 16) == (val & 0xffff));
 }
 
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	return !arch_spin_is_locked(&lock);
+	arch_spinlock_t val = READ_ONCE(*lock);
+
+	return !arch_spin_value_unlocked(val);
 }
 
 #include <asm/qrwlock.h>