Message-Id: <20221118022016.462070-1-longman@redhat.com>
Date: Thu, 17 Nov 2022 21:20:10 -0500
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org, john.p.donnelly@...cle.com,
Hillf Danton <hdanton@...a.com>,
Mukesh Ojha <quic_mojha@...cinc.com>,
Ting11 Wang 王婷
<wangting11@...omi.com>, Waiman Long <longman@...hat.com>
Subject: [PATCH v6 0/6] locking/rwsem: Fix rwsem bugs & enable true lock handoff

v6:
- Fix an error in patch 2 reported by the kernel test robot.

v5:
- Drop patch 2 and replace it with two new patches that disable preemption in
  all the reader and writer functions, respectively. The other patches are
  adjusted accordingly.

v4:
- Update the descriptions of patches 1 & 2 to make clear the livelock
  conditions that are being fixed by these patches. There is no code change
  from v3.

It turns out that the current waiter optimistic spinning code does not work
that well when RT tasks are in the mix. This patch series includes two
different fixes to resolve those issues. The last three patches modify the
handoff code to implement true lock handoff similar to that of a mutex.
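For illustration only, below is a minimal user-space sketch of the direct
handoff idea: instead of clearing the lock word at unlock time and letting
every spinner race for it, a releaser that sees a pending handoff request
transfers ownership straight to the requesting waiter. All names here
(owner, handoff_to, lock(), unlock(), etc.) are made up for this sketch;
this is not the rwsem code touched by these patches.

/*
 * Simplified user-space sketch of direct lock handoff (not the kernel
 * rwsem code). Two threads contend for an exclusive lock; a waiter that
 * has spun for too long registers a handoff request, and the next unlock
 * passes ownership directly to it instead of freeing the lock.
 */
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int owner;      /* thread id holding the lock, 0 == free */
static atomic_int handoff_to; /* thread id asking for a handoff, 0 == none */

static void lock(int id)
{
	int spins = 0;

	for (;;) {
		int expected = 0;

		/* Handoff granted: the releaser made us the owner directly. */
		if (atomic_load(&owner) == id)
			return;

		if (atomic_compare_exchange_weak(&owner, &expected, id)) {
			/* Acquired normally; drop any stale handoff request. */
			int self = id;
			atomic_compare_exchange_strong(&handoff_to, &self, 0);
			return;
		}

		if (++spins > 1000) {
			/* Spun too long; ask the next releaser for a handoff. */
			expected = 0;
			atomic_compare_exchange_strong(&handoff_to, &expected, id);
		}
		sched_yield();
	}
}

static void unlock(void)
{
	int next = atomic_exchange(&handoff_to, 0);

	/*
	 * With a handoff pending, pass ownership straight to the waiter;
	 * otherwise release the lock and let spinners race as usual.
	 */
	atomic_store(&owner, next);
}

static long counter;

static void *worker(void *arg)
{
	int id = (int)(long)arg;

	for (int i = 0; i < 100000; i++) {
		lock(id);
		counter++;
		unlock();
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, worker, (void *)1L);
	pthread_create(&t2, NULL, worker, (void *)2L);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("counter = %ld\n", counter); /* expect 200000 */
	return 0;
}

In the actual series the handoff decision is of course made in the rwsem
slowpath and wakeup code; the sketch only shows why handing the lock to the
designated waiter avoids the starvation that a free-for-all release can
cause when the waiter keeps losing the race.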
Waiman Long (6):
locking/rwsem: Prevent non-first waiter from spinning in down_write()
slowpath
locking/rwsem: Disable preemption at all down_read*() and up_read()
code paths
locking/rwsem: Disable preemption at all down_write*() and up_write()
code paths
locking/rwsem: Change waiter->handoff_set to a handoff_state enum
locking/rwsem: Enable direct rwsem lock handoff
locking/rwsem: Update handoff lock events tracking
kernel/locking/lock_events_list.h | 6 +-
kernel/locking/rwsem.c | 240 ++++++++++++++++++++++--------
2 files changed, 182 insertions(+), 64 deletions(-)
--
2.31.1