Message-Id: <201606030718.u537FRan010002@mx0a-001b2d01.pphosted.com>
Date: Fri, 03 Jun 2016 15:17:51 +0800
From: xinhui <xinhui.pan@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>,
Arnd Bergmann <arnd@...db.de>
CC: linux-arch@...r.kernel.org, linux-kernel@...r.kernel.org,
waiman.long@...com
Subject: Re: [PATCH] locking/qrwlock: fix write unlock issue in big endian
On 2016-06-02 19:02, Peter Zijlstra wrote:
> On Thu, Jun 02, 2016 at 12:44:51PM +0200, Arnd Bergmann wrote:
>> On Thursday, June 2, 2016 6:09:08 PM CEST Pan Xinhui wrote:
>>> diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
>>> index 54a8e65..eadd7a3 100644
>>> --- a/include/asm-generic/qrwlock.h
>>> +++ b/include/asm-generic/qrwlock.h
>>> @@ -139,7 +139,7 @@ static inline void queued_read_unlock(struct qrwlock *lock)
>>> */
>>> static inline void queued_write_unlock(struct qrwlock *lock)
>>> {
>>> - smp_store_release((u8 *)&lock->cnts, 0);
>>> + (void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts);
>>> }
>>
>> Isn't this more expensive than the existing version?
>
> Yes, loads. And while this might be a suitable fix for asm-generic, it
> will introduce a fairly large regression on x86 (which is currently the
> only user of this).
>
Well, out of respect for the private fields of struct __qrwlock:
We can keep smp_store_release((u8 *)&lock->cnts, 0) on little-endian machines,
as it is quick and causes no performance issue for any other arch (although there is only one user right now).
BUT on big-endian machines we need to use (void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts),
because the lowest-addressed byte of lock->cnts is no longer the writer byte there,
and it would be bad to export struct __qrwlock just to clear its private field directly.
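To make the endianness point concrete, here is a small stand-alone demo (hypothetical
user-space code, not part of the patch) showing that a byte store through
(u8 *)&cnts only hits the writer byte on little-endian, while subtracting
_QW_LOCKED clears it on either endianness:

#include <stdint.h>
#include <stdio.h>

#define _QW_LOCKED	0xff	/* writer holds the lock: low 8 bits of cnts */

int main(void)
{
	uint32_t cnts = _QW_LOCKED;	/* writer-locked, no readers */

	*(uint8_t *)&cnts = 0;		/* what the (u8 *) store does */
	/* prints 0x0 on little-endian, but still 0xff on big-endian */
	printf("after byte store:  0x%x\n", (unsigned)cnts);

	cnts = _QW_LOCKED;
	cnts -= _QW_LOCKED;		/* what the atomic sub achieves */
	/* prints 0x0 regardless of endianness */
	printf("after subtraction: 0x%x\n", (unsigned)cnts);
	return 0;
}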
How about code like the one below?
static inline void queued_write_unlock(struct qrwlock *lock)
{
#ifdef __BIG_ENDIAN
	(void)atomic_sub_return_release(_QW_LOCKED, &lock->cnts);
#else
	smp_store_release((u8 *)&lock->cnts, 0);
#endif
}
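For completeness, the alternative I dismissed above would look roughly like this
(a hypothetical sketch only; it assumes exporting the currently-private
struct __qrwlock from kernel/locking/qrwlock.c and that its writer byte is
named wmode):

static inline void queued_write_unlock(struct qrwlock *lock)
{
	/* endian-safe, but it needs the private struct __qrwlock layout here */
	struct __qrwlock *l = (struct __qrwlock *)lock;

	smp_store_release(&l->wmode, 0);
}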
BUT I think the #ifdef version would make things a little more complex to understand. :(
So in the end, in my opinion, I still suggest my patch. :)
Any thoughts?
Thanks,
xinhui