Message-ID: <Y9J2HkiyLDmGPWyn@gmail.com>
Date: Thu, 26 Jan 2023 13:46:22 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Waiman Long <longman@...hat.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
linux-kernel@...r.kernel.org, john.p.donnelly@...cle.com,
Hillf Danton <hdanton@...a.com>,
Mukesh Ojha <quic_mojha@...cinc.com>,
Ting11 Wang 王婷 <wangting11@...omi.com>
Subject: Re: [PATCH v7 0/4] locking/rwsem: Fix rwsem bugs & enable true lock
handoff
* Waiman Long <longman@...hat.com> wrote:
> v7:
> - Add a comment to down_read_non_owner() in patch 2.
> - Drop v6 patches 4 & 6 and simplify the direct rwsem lock handoff
> patch as suggested by PeterZ.
>
> v6:
> - Fix an error in patch 2 reported by kernel test robot.
>
> v5:
> - Drop patch 2 and replace it with 2 new patches disabling preemption on
> all reader functions and writer functions respectively. The other
> patches are adjusted accordingly.
>
> It turns out the current waiter optimistic spinning code does not work
> that well if we have RT tasks in the mix. This patch series includes two
> different fixes to resolve those issues. The last 3 patches modify the
> handoff code to implement true lock handoff similar to that of mutex.
>
> Waiman Long (4):
> locking/rwsem: Prevent non-first waiter from spinning in down_write()
> slowpath
> locking/rwsem: Disable preemption at all down_read*() and up_read()
> code paths
> locking/rwsem: Disable preemption at all down_write*() and up_write()
> code paths
> locking/rwsem: Enable direct rwsem lock handoff
>
> kernel/locking/rwsem.c | 161 +++++++++++++++++++++++++++++------------
> 1 file changed, 115 insertions(+), 46 deletions(-)
So as a first step I've applied the first 3 patches to the locking tree,
which are arguably fixes.
Thanks,
Ingo