Message-Id: <20211013134154.1085649-3-yanfei.xu@windriver.com>
Date: Wed, 13 Oct 2021 21:41:53 +0800
From: Yanfei Xu <yanfei.xu@...driver.com>
To: peterz@...radead.org, mingo@...hat.com, will@...nel.org,
longman@...hat.com, boqun.feng@...il.com
Cc: linux-kernel@...r.kernel.org
Subject: [PATCH v2 2/3] locking/rwsem: disable preemption for spinning region

The spinning region in rwsem_spin_on_owner() should not be preempted;
however, rwsem_down_write_slowpath() invokes it without disabling
preemption. Fix it by adding a preempt_disable()/preempt_enable() pair
around the call.

Signed-off-by: Yanfei Xu <yanfei.xu@...driver.com>
---
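(Note for reviewers, not part of the patch: a minimal sketch of the pattern
being applied below. The spin on the current lock owner must not itself be
scheduled out, so the call is bracketed by preempt_disable()/preempt_enable(),
the same way rwsem_optimistic_spin() already disables preemption around its
own spinning. Variable names follow the second hunk.)

	enum owner_state owner_state;

	/* Spin on the current owner with preemption off. */
	preempt_disable();
	owner_state = rwsem_spin_on_owner(sem);
	preempt_enable();

	/* The owner went away while we were spinning: retry the trylock. */
	if (owner_state == OWNER_NULL)
		goto trylock_again;
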
kernel/locking/rwsem.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 7b5af452ace2..06925b43c3e7 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1024,6 +1024,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	enum writer_wait_state wstate;
 	struct rwsem_waiter waiter;
 	struct rw_semaphore *ret = sem;
+	enum owner_state owner_state;
 	DEFINE_WAKE_Q(wake_q);
 
 	/* do optimistic spinning and steal lock if possible */
@@ -1099,9 +1100,13 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 		 * In this case, we attempt to acquire the lock again
 		 * without sleeping.
 		 */
-		if (wstate == WRITER_HANDOFF &&
-		    rwsem_spin_on_owner(sem) == OWNER_NULL)
-			goto trylock_again;
+		if (wstate == WRITER_HANDOFF) {
+			preempt_disable();
+			owner_state = rwsem_spin_on_owner(sem);
+			preempt_enable();
+			if (owner_state == OWNER_NULL)
+				goto trylock_again;
+		}
 
 		/* Block until there are no active lockers. */
 		for (;;) {
--
2.27.0