Message-ID: <8cdfa77a-87f3-71eb-4dd7-0ac474632327@redhat.com>
Date: Tue, 5 Jul 2022 16:15:18 -0400
From: Waiman Long <longman@...hat.com>
To: Nicholas Piggin <npiggin@...il.com>,
Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 11/13] locking/qspinlock: Use queued_spin_trylock in
pv_hybrid_queued_unfair_trylock
On 7/4/22 10:38, Nicholas Piggin wrote:
> Use queued_spin_trylock() rather than open-coding it. The open-coded
> version was only necessitated by the old function-renaming code
> generation, which made queued_spin_trylock unavailable here.
>
> Signed-off-by: Nicholas Piggin <npiggin@...il.com>
> ---
> kernel/locking/qspinlock.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index cef0ca7d94e1..9db168753124 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -357,7 +357,7 @@ static inline bool pv_hybrid_queued_unfair_trylock(struct qspinlock *lock)
> int val = atomic_read(&lock->val);
>
> if (!(val & _Q_LOCKED_PENDING_MASK) &&
> - (cmpxchg_acquire(&lock->locked, 0, _Q_LOCKED_VAL) == 0)) {
> + queued_spin_trylock(lock)) {
> lockevent_inc(pv_lock_stealing);
> return true;
> }
I am not sure whether the compiler will eliminate the duplicated
atomic_read() in queued_spin_trylock(). So unless it can generate the
same code as the open-coded version, I would prefer to leave this alone.
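
For reference, the generic queued_spin_trylock() does its own load of
lock->val before attempting the cmpxchg. This is a sketch of the
definition in include/asm-generic/qspinlock.h (arch overrides may
differ), annotated to show where the duplication comes from:

	static __always_inline int queued_spin_trylock(struct qspinlock *lock)
	{
		/*
		 * Second load: pv_hybrid_queued_unfair_trylock() has
		 * already read lock->val just above the call site.
		 */
		int val = atomic_read(&lock->val);

		if (unlikely(val))
			return 0;

		return likely(atomic_try_cmpxchg_acquire(&lock->val, &val,
							 _Q_LOCKED_VAL));
	}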
Cheers,
Longman