Message-Id: <20221103182936.217120-1-longman@redhat.com>
Date: Thu, 3 Nov 2022 14:29:30 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>
Cc: linux-kernel@...r.kernel.org, john.p.donnelly@...cle.com,
Hillf Danton <hdanton@...a.com>,
Mukesh Ojha <quic_mojha@...cinc.com>,
Ting11 Wang 王婷
<wangting11@...omi.com>, Waiman Long <longman@...hat.com>
Subject: [PATCH v5 0/6] locking/rwsem: Fix rwsem bugs & enable true lock handoff
v5:
- Drop patch 2 and replace it with 2 new patches disabling preemption on
all reader functions and writer functions respectively. The other
patches are adjusted accordingly.
v4:
- Update patch descriptions in patches 1 & 2 to make clear the live
lock conditions that are being fixed by these patches. There is no code
change from v3.
v3:
- Make a minor cleanup to patch 1.
- Add 3 more patches to implement true lock handoff.
It turns out that the current waiter optimistic spinning code does not work
that well when RT tasks are in the mix. This patch series includes two
different fixes to resolve those issues. The last 3 patches modify the
handoff code to implement true lock handoff similar to that of mutex.
Waiman Long (6):
locking/rwsem: Prevent non-first waiter from spinning in down_write()
slowpath
locking/rwsem: Disable preemption at all down_read*() and up_read()
code paths
locking/rwsem: Disable preemption at all down_write*() and up_write()
code paths
locking/rwsem: Change waiter->handoff_set to a handoff_state enum
locking/rwsem: Enable direct rwsem lock handoff
locking/rwsem: Update handoff lock events tracking
kernel/locking/lock_events_list.h | 6 +-
kernel/locking/rwsem.c | 237 ++++++++++++++++++++++--------
2 files changed, 181 insertions(+), 62 deletions(-)
--
2.31.1