Message-ID: <YsR2IQsnqAOgDxXu@worktop.programming.kicks-ass.net>
Date: Tue, 5 Jul 2022 19:34:25 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Nicholas Piggin <npiggin@...il.com>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Waiman Long <longman@...hat.com>,
Boqun Feng <boqun.feng@...il.com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 12/13] locking/qspinlock: separate pv_wait_node from the
non-paravirt path
On Tue, Jul 05, 2022 at 12:38:19AM +1000, Nicholas Piggin wrote:
> pv_wait_node waits until node->locked is non-zero, no need for the
> pv case to wait again by also executing the !pv code path.
>
> Signed-off-by: Nicholas Piggin <npiggin@...il.com>
> ---
> kernel/locking/qspinlock.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 9db168753124..19e2f286be0a 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -862,10 +862,11 @@ static inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, bool parav
> /* Link @node into the waitqueue. */
> WRITE_ONCE(prev->next, node);
>
> + /* Wait for mcs node lock to be released */
> if (paravirt)
> pv_wait_node(node, prev);
> - /* Wait for mcs node lock to be released */
> - smp_cond_load_acquire(&node->locked, VAL);
> + else
> + smp_cond_load_acquire(&node->locked, VAL);
>
(from patch #6):
+static void pv_wait_node(struct qnode *node, struct qnode *prev)
+{
+ int loop;
+ bool wait_early;
+
...
+
+ /*
+ * By now our node->locked should be 1 and our caller will not actually
+ * spin-wait for it. We do however rely on our caller to do a
+ * load-acquire for us.
+ */
+}