Message-ID: <5588CB2C.108@hp.com>
Date: Mon, 22 Jun 2015 22:57:48 -0400
From: Waiman Long <waiman.long@...com>
To: Will Deacon <will.deacon@....com>
CC: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...hat.com>, Arnd Bergmann <arnd@...db.de>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Scott J Norton <scott.norton@...com>,
Douglas Hatch <doug.hatch@...com>
Subject: Re: [PATCH v5 3/3] locking/qrwlock: Don't contend with readers when
setting _QW_WAITING
On 06/22/2015 12:21 PM, Will Deacon wrote:
> Hi Waiman,
>
> On Fri, Jun 19, 2015 at 04:50:02PM +0100, Waiman Long wrote:
>> The current cmpxchg() loop in setting the _QW_WAITING flag for writers
>> in queue_write_lock_slowpath() will contend with incoming readers
>> causing possibly extra cmpxchg() operations that are wasteful. This
>> patch changes the code to do a byte cmpxchg() to eliminate contention
>> with new readers.
> [...]
>
>> diff --git a/arch/x86/include/asm/qrwlock.h b/arch/x86/include/asm/qrwlock.h
>> index a8810bf..5678b0a 100644
>> --- a/arch/x86/include/asm/qrwlock.h
>> +++ b/arch/x86/include/asm/qrwlock.h
>> @@ -7,8 +7,7 @@
>> #define queued_write_unlock queued_write_unlock
>> static inline void queued_write_unlock(struct qrwlock *lock)
>> {
>> - barrier();
>> - ACCESS_ONCE(*(u8 *)&lock->cnts) = 0;
>> + smp_store_release(&lock->wmode, 0);
>> }
>> #endif
> I reckon you could actually use this in the asm-generic header and remove
> the x86 arch version altogether. Most architectures support single-copy
> atomic byte access and those that don't (alpha?) can just not use qrwlock
> (or override write_unlock with atomic_sub).
>
> I already have a patch making this change, so I'm happy either way.
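(To restate the change under discussion for the archive: the waiting-flag loop in
queue_write_lock_slowpath() becomes, in rough outline, the sketch below -- an
illustration of the idea rather than the patch text, and it assumes the patch's
layout where "wmode" names the writer byte of lock->cnts, as in the x86 hunk
quoted above.)

	/*
	 * Sketch only: instead of a full-word cmpxchg() on lock->cnts,
	 * which fails every time an incoming reader bumps the reader
	 * count, spin with a cmpxchg() on just the writer byte.  New
	 * readers never write that byte, so they no longer force the
	 * waiting writer to retry.
	 */
	for (;;) {
		if (!READ_ONCE(lock->wmode) &&
		    (cmpxchg(&lock->wmode, 0, _QW_WAITING) == 0))
			break;

		cpu_relax_lowlatency();
	}
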
Yes, I am aware of that. Since you already have a patch making that change, I
am fine with going with yours.
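
For reference, moving the release store into the asm-generic header would
presumably leave something like the sketch below (again an illustration, not
your actual patch, and again assuming the "wmode" byte naming). Architectures
without single-copy atomic byte access could keep an atomic_sub() based
write unlock, or just not use qrwlock at all.

	/*
	 * Sketch: with single-copy atomic byte stores available, the
	 * generic queued_write_unlock() reduces to a release store of
	 * the writer byte and the x86 override above goes away.
	 */
	static inline void queued_write_unlock(struct qrwlock *lock)
	{
		smp_store_release(&lock->wmode, 0);
	}

	/*
	 * Fallback for an architecture without such byte stores: keep
	 * clearing the writer bits atomically, as the current generic
	 * code does:
	 *
	 *	smp_mb__before_atomic();
	 *	atomic_sub(_QW_LOCKED, &lock->cnts);
	 */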
Cheers,
Longman