Message-ID: <171105077065.10875.8732979745826273189.tip-bot2@tip-bot2>
Date: Thu, 21 Mar 2024 19:52:50 -0000
From: "tip-bot2 for Waiman Long" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: Ingo Molnar <mingo@...nel.org>, Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: locking/core] locking/qspinlock: Always evaluate lockevent*
non-event parameter once
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 3774b28d8f3b9e8a946beb9550bee85e5454fc9f
Gitweb: https://git.kernel.org/tip/3774b28d8f3b9e8a946beb9550bee85e5454fc9f
Author: Waiman Long <longman@...hat.com>
AuthorDate: Mon, 18 Mar 2024 20:50:04 -04:00
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Thu, 21 Mar 2024 20:45:17 +01:00
locking/qspinlock: Always evaluate lockevent* non-event parameter once
The 'inc' parameter of lockevent_add() and the 'cond' parameter of
lockevent_cond_inc() are only evaluated when CONFIG_LOCK_EVENT_COUNTS
is on. That can cause problems if those parameters are expressions
with side effects, such as a "++". Fix this by evaluating those
non-event parameters once even when CONFIG_LOCK_EVENT_COUNTS is off.
This also eliminates the need for the __maybe_unused attribute on the
wait_early local variable in pv_wait_node().

Suggested-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Waiman Long <longman@...hat.com>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Reviewed-by: Boqun Feng <boqun.feng@...il.com>
Link: https://lore.kernel.org/r/20240319005004.1692705-1-longman@redhat.com
---
 kernel/locking/lock_events.h        | 4 ++--
 kernel/locking/qspinlock_paravirt.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)
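
To make the failure mode concrete, here is a small userspace sketch; the
stat_add_old()/stat_add_new() macros are made-up stand-ins for the
lockevent_add() stub before and after this change (with event counting
disabled), not kernel code:

#include <stdio.h>

/* Old-style stub: the argument disappears, and so do its side effects. */
#define stat_add_old(ev, c)

/* New-style stub: evaluate the argument exactly once, discard the value. */
#define stat_add_new(ev, c)	do { (void)(c); } while (0)

int main(void)
{
	int old_count = 0, new_count = 0;

	stat_add_old(demo_event, old_count++);	/* the "++" is silently dropped */
	stat_add_new(demo_event, new_count++);	/* the "++" still takes effect  */

	printf("old stub: %d, new stub: %d\n", old_count, new_count);
	return 0;
}

The old-style stub leaves old_count at 0, while the new-style stub leaves
new_count at 1, matching what happens when event counting is compiled in.
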
diff --git a/kernel/locking/lock_events.h b/kernel/locking/lock_events.h
index a6016b9..d2345e9 100644
--- a/kernel/locking/lock_events.h
+++ b/kernel/locking/lock_events.h
@@ -53,8 +53,8 @@ static inline void __lockevent_add(enum lock_events event, int inc)
 #else /* CONFIG_LOCK_EVENT_COUNTS */
 
 #define lockevent_inc(ev)
-#define lockevent_add(ev, c)
-#define lockevent_cond_inc(ev, c)
+#define lockevent_add(ev, c)		do { (void)(c); } while (0)
+#define lockevent_cond_inc(ev, c)	do { (void)(c); } while (0)
 
 #endif /* CONFIG_LOCK_EVENT_COUNTS */
 
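
For comparison, a rough but compilable sketch of how the two configurations
behave after this change; ev_counts[] and EV_demo are made-up stand-ins for
the kernel's per-CPU lockevents[] counters, and the LOCK_EVENT_COUNTS branch
only paraphrases what lock_events.h does when counting is enabled:

#include <stdio.h>

enum { EV_demo, EV_max };
static unsigned long ev_counts[EV_max];	/* stand-in for per-CPU counters */

#ifdef LOCK_EVENT_COUNTS		/* build with -DLOCK_EVENT_COUNTS */
#define lockevent_add(ev, c)		(ev_counts[ev] += (c))
#define lockevent_cond_inc(ev, c)	do { if (c) ev_counts[ev]++; } while (0)
#else					/* counting off: 'c' is still evaluated once */
#define lockevent_add(ev, c)		do { (void)(c); } while (0)
#define lockevent_cond_inc(ev, c)	do { (void)(c); } while (0)
#endif

int main(void)
{
	int batch = 0;

	lockevent_add(EV_demo, batch += 4);	/* side effect survives either way */
	lockevent_cond_inc(EV_demo, batch > 0);

	printf("batch=%d events=%lu\n", batch, ev_counts[EV_demo]);
	return 0;
}

Either way batch ends up at 4; only the event count differs (5 with
-DLOCK_EVENT_COUNTS, 0 without).
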
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index ae2b12f..169950f 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -294,7 +294,7 @@ static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
 {
 	struct pv_node *pn = (struct pv_node *)node;
 	struct pv_node *pp = (struct pv_node *)prev;
-	bool __maybe_unused wait_early;
+	bool wait_early;
 	int loop;
 
 	for (;;) {
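
Finally, a standalone sketch of why the __maybe_unused annotation can go
away; spin_for_prev() and everything around it are drastically simplified,
hypothetical stand-ins for pv_wait_node(), and only the wait_early /
lockevent_cond_inc() interaction mirrors the real code:

#include <stdbool.h>
#include <stdio.h>

/* The new CONFIG_LOCK_EVENT_COUNTS=n stub: reads 'c' exactly once. */
#define lockevent_cond_inc(ev, c)	do { (void)(c); } while (0)

static void spin_for_prev(const volatile int *prev_locked)
{
	bool wait_early = false;	/* no __maybe_unused needed any more */
	int loop;

	for (loop = 512; loop; loop--) {
		if (!*prev_locked)
			break;
		if (loop < 256) {	/* stand-in for pv_wait_early(pp, loop) */
			wait_early = true;
			break;
		}
	}

	/*
	 * The only reader of wait_early.  With the old empty stub this read
	 * vanished whenever event counting was off, leaving the variable set
	 * but never used, which is what __maybe_unused papered over.
	 */
	lockevent_cond_inc(pv_wait_early, wait_early);
}

int main(void)
{
	volatile int prev_locked = 0;

	spin_for_prev(&prev_locked);
	return 0;
}

With the old empty stub a -Wall build typically warns that wait_early is
set but not used, which is the warning the dropped annotation was there to
suppress.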