Message-ID: <20231128210843.876493-14-sashal@kernel.org>
Date: Tue, 28 Nov 2023 16:08:35 -0500
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Guo Ren <guoren@...nel.org>, Ingo Molnar <mingo@...nel.org>,
Waiman Long <longman@...hat.com>,
Sasha Levin <sashal@...nel.org>, linux-arch@...r.kernel.org
Subject: [PATCH AUTOSEL 5.15 14/15] asm-generic: qspinlock: fix queued_spin_value_unlocked() implementation

From: Linus Torvalds <torvalds@...ux-foundation.org>

[ Upstream commit 125b0bb95dd6bec81b806b997a4ccb026eeecf8f ]

We really don't want to do atomic_read() or anything like that, since we
already have the value, not the lock. The whole point of this is that
we've loaded the lock from memory, and we want to check whether the
value we loaded was a locked one or not.
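
To illustrate (the function names below are hypothetical, not from the
tree): because the lock is passed by value, atomic_read() has to take
its address, while a plain member read does not:

	/*
	 * Sketch only: 'lock' is a by-value snapshot that was already
	 * loaded from memory, so no atomicity is needed here.
	 */
	static inline int unlocked_via_atomic_read(struct qspinlock lock)
	{
		/*
		 * atomic_read() wants an address; taking &lock.val makes
		 * the compiler spill the snapshot to a stack slot.
		 */
		return !atomic_read(&lock.val);
	}

	static inline int unlocked_via_plain_read(struct qspinlock lock)
	{
		/* Plain member read: the snapshot can stay in a register. */
		return !lock.val.counter;
	}
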
The main use of this is the lockref code, which loads both the lock and
the reference count in one atomic operation, and then works on that
combined value. With the atomic_read(), the compiler would pointlessly
spill the value to the stack, in order to then be able to read it back
"atomically".
This is the qspinlock version of commit c6f4a9002252 ("asm-generic:
ticket-lock: Optimize arch_spin_value_unlocked()") which fixed this same
bug for ticket locks.
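
For reference, the ticket-lock version of the fix took the same shape
(recalled from c6f4a9002252, so treat the details as approximate): the
32-bit value packs the "next" ticket in the high half and the "owner"
ticket in the low half, and the lock is unlocked when they match.

	static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
	{
		u32 val = lock.counter;

		return ((val >> 16) == (val & 0xffff));
	}
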
Cc: Guo Ren <guoren@...nel.org>
Cc: Ingo Molnar <mingo@...nel.org>
Cc: Waiman Long <longman@...hat.com>
Link: https://lore.kernel.org/all/CAHk-=whNRv0v6kQiV5QO6DJhjH4KEL36vWQ6Re8Csrnh4zbRkQ@mail.gmail.com/
Signed-off-by: Linus Torvalds <torvalds@...ux-foundation.org>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
include/asm-generic/qspinlock.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index d74b138255014..95cfcfb8a3b4d 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -41,7 +41,7 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
*/
static __always_inline int queued_spin_value_unlocked(struct qspinlock lock)
{
- return !atomic_read(&lock.val);
+ return !lock.val.counter;
}

/**
--
2.42.0