Message-ID: <527D68DC.10902@hp.com>
Date: Fri, 08 Nov 2013 17:42:36 -0500
From: Waiman Long <waiman.long@...com>
To: paulmck@...ux.vnet.ibm.com
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
linux-arch@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michel Lespinasse <walken@...gle.com>,
Andi Kleen <andi@...stfloor.org>,
Rik van Riel <riel@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
George Spelvin <linux@...izon.com>,
Tim Chen <tim.c.chen@...ux.intel.com>, "" <aswin@...com>,
Scott J Norton <scott.norton@...com>
Subject: Re: [PATCH v5 4/4] qrwlock: Use the mcs_spinlock helper functions
for MCS queuing
On 11/08/2013 04:21 PM, Paul E. McKenney wrote:
> On Mon, Nov 04, 2013 at 12:17:20PM -0500, Waiman Long wrote:
>> There is a pending patch in the rwsem patch series that adds a generic
>> MCS locking helper functions to do MCS-style locking. This patch
>> will enable the queue rwlock to use that generic MCS lock/unlock
>> primitives for internal queuing. This patch should only be merged
>> after the merging of that generic MCS locking patch.
>>
>> Signed-off-by: Waiman Long <Waiman.Long@...com>
> This one might address at least some of the earlier memory-barrier
> issues, at least assuming that the MCS lock is properly memory-barriered.
>
> Then again, maybe not. Please see below.
>
> Thanx, Paul
>
>> /*
>> * At the head of the wait queue now, try to increment the reader
>> @@ -172,12 +103,36 @@ void queue_read_lock_slowpath(struct qrwlock *lock)
>> while (ACCESS_ONCE(lock->cnts.writer))
>> cpu_relax();
>> }
>> - rspin_until_writer_unlock(lock, 1);
>> - signal_next(lock, &node);
>> + /*
>> + * Increment reader count & wait until writer unlock
>> + */
>> + cnts.rw = xadd(&lock->cnts.rw, QRW_READER_BIAS);
>> + rspin_until_writer_unlock(lock, cnts);
>> + mcs_spin_unlock(&lock->waitq, &node);
> But mcs_spin_unlock() is only required to do a RELEASE barrier, which
> could still allow critical-section leakage.
Yes, that is a problem. I will try to add an ACQUIRE barrier when reading
the writer byte.
-Longman
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/