Message-Id: <20230327202413.1955856-3-longman@redhat.com>
Date: Mon, 27 Mar 2023 16:24:07 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH v2 2/8] locking/rwsem: Enforce queueing when HANDOFF

From: Peter Zijlstra <peterz@...radead.org>

Ensure that HANDOFF disables all optimistic spinning and lock
stealing: once the handoff bit is set, the lock is reserved for the
first waiter in the queue and must not be taken out of order.

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
---
 kernel/locking/rwsem.c | 9 +++++++++
 1 file changed, 9 insertions(+)

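A note on the first hunk: below is a self-contained userspace sketch,
not kernel code, of the invariant it enforces. Once RWSEM_FLAG_HANDOFF
is set in the count word, the lock is reserved for the first waiter,
so an optimistic "steal" must back off even while the lock itself is
free. The bit definitions mirror kernel/locking/rwsem.c; the
try_write_steal() helper is made up for illustration.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RWSEM_WRITER_LOCKED	(1UL << 0)
#define RWSEM_FLAG_WAITERS	(1UL << 1)
#define RWSEM_FLAG_HANDOFF	(1UL << 2)

static bool try_write_steal(atomic_ulong *count)
{
	unsigned long c = atomic_load(count);

	for (;;) {
		/* Held, or reserved for the first waiter: do not steal. */
		if (c & (RWSEM_WRITER_LOCKED | RWSEM_FLAG_HANDOFF))
			return false;
		/* The lock looks free: try to grab it with one CAS. */
		if (atomic_compare_exchange_weak(count, &c,
						 c | RWSEM_WRITER_LOCKED))
			return true;
	}
}

int main(void)
{
	atomic_ulong count = RWSEM_FLAG_WAITERS | RWSEM_FLAG_HANDOFF;

	/* HANDOFF is set: the steal must fail even though no one holds it. */
	printf("steal %s\n", try_write_steal(&count) ? "succeeded" : "blocked");
	return 0;
}
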
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index e589f69793df..4b9e492abd59 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -468,7 +468,12 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
 			adjustment -= RWSEM_FLAG_HANDOFF;
 			lockevent_inc(rwsem_rlock_handoff);
 		}
+		/*
+		 * With HANDOFF set for reader, we must
+		 * terminate all spinning.
+		 */
 		waiter->handoff_set = true;
+		rwsem_set_nonspinnable(sem);
 	}
 
 	atomic_long_add(-adjustment, &sem->count);
@@ -755,6 +760,10 @@ rwsem_spin_on_owner(struct rw_semaphore *sem)
 
 	owner = rwsem_owner_flags(sem, &flags);
 	state = rwsem_owner_state(owner, flags);
+
+	if (owner == current)
+		return OWNER_NONSPINNABLE; /* Handoff granted */
+
 	if (state != OWNER_WRITER)
 		return state;
 
--
2.31.1
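
A note on the second hunk: with handoff, rwsem_mark_wake() can
transfer ownership directly to a task that is still spinning in
rwsem_spin_on_owner(), so the spinner must treat owner == current as
a terminal state instead of spinning on itself. Below is a minimal
self-contained model of that stop condition, not kernel code: the
task plumbing is made up, while the owner_state names mirror
kernel/locking/rwsem.c.

#include <stdio.h>

enum owner_state { OWNER_NULL, OWNER_WRITER, OWNER_READER, OWNER_NONSPINNABLE };

struct task { const char *name; };

static struct task *owner;		/* lock owner, set by the waker */
static struct task *current_task;	/* stand-in for the kernel's "current" */

static enum owner_state spin_on_owner(void)
{
	for (;;) {
		if (owner == current_task)
			return OWNER_NONSPINNABLE;	/* handoff granted to us */
		if (!owner)
			return OWNER_NULL;		/* lock is free: go take it */
		/* the real code would also recheck that the owner is running */
	}
}

int main(void)
{
	struct task me = { "waiter" };

	current_task = &me;
	owner = &me;	/* model the waker handing us the lock mid-spin */
	printf("state=%d (OWNER_NONSPINNABLE=%d)\n",
	       spin_on_owner(), OWNER_NONSPINNABLE);
	return 0;
}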