Message-ID: <1662028090-26495-1-git-send-email-quic_mojha@quicinc.com>
Date: Thu, 1 Sep 2022 15:58:10 +0530
From: Mukesh Ojha <quic_mojha@...cinc.com>
To: <peterz@...radead.org>, <mingo@...hat.com>, <will@...nel.org>,
<longman@...hat.com>, <boqun.feng@...il.com>
CC: <linux-kernel@...r.kernel.org>,
Gokul krishna Krishnakumar <quic_gokukris@...cinc.com>,
Mukesh Ojha <quic_mojha@...cinc.com>
Subject: [PATCH] locking/rwsem: Disable preemption while trying for rwsem lock

From: Gokul krishna Krishnakumar <quic_gokukris@...cinc.com>

Make the region inside rwsem_write_trylock() non-preemptible.

We observe an RT task hogging the CPU while it tries to acquire an
rwsem lock that a kworker task has already acquired, but whose owner
field has not yet been set.

Here is the scenario (a simplified sketch of the race window follows
the list):

1. A CFS task (affined to a particular CPU) takes the rwsem lock.
2. The CFS task gets preempted by an RT task before setting the owner.
3. The RT task (SCHED_FIFO) tries to acquire the lock and keeps
   spinning, since the lock is held by the preempted CFS task, until
   RT throttling kicks in.
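
To make the window concrete, here is the pre-patch fast path from the
first hunk below, with a comment marking where the preemption bites
(a simplified sketch, not a verbatim copy of kernel/locking/rwsem.c):

  static inline bool rwsem_write_trylock(struct rw_semaphore *sem)
  {
          long tmp = RWSEM_UNLOCKED_VALUE;

          if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
                                              RWSEM_WRITER_LOCKED)) {
                  /*
                   * The count already says "writer locked", but
                   * sem->owner is still NULL.  An RT task that
                   * preempts us here spins on a lock with no visible
                   * owner until RT throttling kicks in.
                   */
                  rwsem_set_owner(sem);
                  return true;
          }

          return false;
  }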

This patch attempts to fix the above issue by disabling preemption
until the owner is set for the lock. While at it, also disable
preemption at the other places where the owner is set or cleared.
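
In short, the change brackets the acquire-and-set-owner sequence (and
the matching clear-owner-and-release sequence in __up_write()) with a
preempt_disable()/preempt_enable() pair. The resulting fast path,
again as a simplified sketch of the first hunk below:

  static inline bool rwsem_write_trylock(struct rw_semaphore *sem)
  {
          long tmp = RWSEM_UNLOCKED_VALUE;
          bool ret = false;

          preempt_disable();
          if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp,
                                              RWSEM_WRITER_LOCKED)) {
                  rwsem_set_owner(sem);   /* owner visible before ... */
                  ret = true;
          }
          preempt_enable();               /* ... we can be preempted */

          return ret;
  }
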
Signed-off-by: Gokul krishna Krishnakumar <quic_gokukris@...cinc.com>
Signed-off-by: Mukesh Ojha <quic_mojha@...cinc.com>
---
kernel/locking/rwsem.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 65f0262..3b4b32e 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -251,13 +251,16 @@ static inline bool rwsem_read_trylock(struct rw_semaphore *sem, long *cntp)
 static inline bool rwsem_write_trylock(struct rw_semaphore *sem)
 {
 	long tmp = RWSEM_UNLOCKED_VALUE;
+	bool ret = false;
 
+	preempt_disable();
 	if (atomic_long_try_cmpxchg_acquire(&sem->count, &tmp, RWSEM_WRITER_LOCKED)) {
 		rwsem_set_owner(sem);
-		return true;
+		ret = true;
 	}
 
-	return false;
+	preempt_enable();
+	return ret;
 }
 
 /*
@@ -686,16 +689,21 @@ enum owner_state {
 static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 {
 	long count = atomic_long_read(&sem->count);
+	bool ret = false;
 
+	preempt_disable();
 	while (!(count & (RWSEM_LOCK_MASK|RWSEM_FLAG_HANDOFF))) {
 		if (atomic_long_try_cmpxchg_acquire(&sem->count, &count,
 					count | RWSEM_WRITER_LOCKED)) {
 			rwsem_set_owner(sem);
 			lockevent_inc(rwsem_opt_lock);
-			return true;
+			ret = true;
+			break;
 		}
 	}
-	return false;
+
+	preempt_enable();
+	return ret;
 }
 
 static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
@@ -1352,8 +1360,10 @@ static inline void __up_write(struct rw_semaphore *sem)
 	DEBUG_RWSEMS_WARN_ON((rwsem_owner(sem) != current) &&
 			    !rwsem_test_oflags(sem, RWSEM_NONSPINNABLE), sem);
 
+	preempt_disable();
 	rwsem_clear_owner(sem);
 	tmp = atomic_long_fetch_add_release(-RWSEM_WRITER_LOCKED, &sem->count);
+	preempt_enable();
 	if (unlikely(tmp & RWSEM_FLAG_WAITERS))
 		rwsem_wake(sem);
 }
--
2.7.4