Message-ID: <b64b39ab-58a0-8046-026a-8d635f3f762b@redhat.com>
Date: Wed, 11 May 2022 12:00:33 -0400
From: Waiman Long <longman@...hat.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Ingo Molnar <mingo@...hat.com>, Will Deacon <will@...nel.org>,
Boqun Feng <boqun.feng@...il.com>,
Arnd Bergmann <arnd@...db.de>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] locking/qrwlock: Reduce cacheline contention for rwlocks used in interrupt context
On 5/11/22 09:34, Peter Zijlstra wrote:
> On Wed, May 11, 2022 at 08:44:55AM -0400, Waiman Long wrote:
>
>>> I'm confused; prior to this change:
>>>
>>>     CPU0                    CPU1
>>>
>>>     write_lock_irq(&l)
>>>                             read_lock(&l)
>>>                             <INRQ>
>>>                               read_lock(&l)
>>>                               ...
>>>
>>> was not deadlock, but now it would AFAICT.
>> Oh you are right. I missed that scenario in my analysis. My bad.
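
For context, what makes the scenario above safe before this change is the
unfair in_interrupt() path in queued_read_lock_slowpath(), which never joins
the wait queue. A rough sketch of that path, paraphrased from
kernel/locking/qrwlock.c with tracing and some detail trimmed (illustrative,
not the exact tree contents):

    void queued_read_lock_slowpath(struct qrwlock *lock)
    {
            if (unlikely(in_interrupt())) {
                    /*
                     * Interrupt-context readers wait only for an active
                     * writer (_QW_LOCKED); they never queue on wait_lock.
                     * That is why the nested read_lock() on CPU1 above can
                     * make progress once CPU0 drops the write lock.
                     */
                    atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
                    return;
            }

            /*
             * Process-context readers serialize on wait_lock.  If the
             * interrupt reader instead took this path while its own CPU's
             * outer reader was already spinning here, it could never
             * succeed -- the deadlock in the scenario quoted above.
             */
            atomic_sub(_QR_BIAS, &lock->cnts);
            arch_spin_lock(&lock->wait_lock);
            atomic_add(_QR_BIAS, &lock->cnts);
            atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
            arch_spin_unlock(&lock->wait_lock);
    }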
> No worries; I suppose we can also still do something like:
>
> void queued_read_lock_slowpath(struct qrwlock *lock, int cnts)
> {
>         /*
>          * the big comment
>          */
>         if (unlikely(in_interrupt())) {
>                 /*
>                  * If not write-locked, insta-grant the reader
>                  */
>                 if (!(cnts & _QW_LOCKED))
>                         return;
>
>                 /*
>                  * otherwise, wait for the writer to go away.
>                  */
>                 atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
>                 return;
>         }
>
>         ...
> }
>
> Which saves one load in some cases... not sure it's worth it though.
Yes, it is a micro-optimization that can be done. The gain, if any,
should be minor though.
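
For the record, the load being saved: in the current slowpath an
interrupt-context reader always executes atomic_cond_read_acquire(), i.e. a
fresh read of lock->cnts, even when the cnts value the caller already holds
shows no active writer. A side-by-side sketch of the two branches (the
"current" half paraphrased from kernel/locking/qrwlock.c):

    /* Current in_interrupt() path: always re-reads lock->cnts. */
    if (unlikely(in_interrupt())) {
            atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
            return;
    }

    /*
     * Proposed variant: reuse the cnts value passed in by the caller,
     * touching the (potentially contended) cacheline again only when a
     * writer actually holds the lock.
     */
    if (unlikely(in_interrupt())) {
            if (!(cnts & _QW_LOCKED))
                    return;
            atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
            return;
    }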
Cheers,
Longman