Message-ID: <1383959827.11046.420.camel@schen9-DESK>
Date: Fri, 08 Nov 2013 17:17:07 -0800
From: Tim Chen <tim.c.chen@...ux.intel.com>
To: paulmck@...ux.vnet.ibm.com
Cc: Waiman Long <Waiman.Long@...com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
linux-arch@...r.kernel.org, x86@...nel.org,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Michel Lespinasse <walken@...gle.com>,
Andi Kleen <andi@...stfloor.org>,
Rik van Riel <riel@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
George Spelvin <linux@...izon.com>,
"Aswin Chandramouleeswaran\"" <aswin@...com>,
Scott J Norton <scott.norton@...com>
Subject: Re: [PATCH v5 4/4] qrwlock: Use the mcs_spinlock helper functions
	for MCS queuing

On Fri, 2013-11-08 at 13:21 -0800, Paul E. McKenney wrote:
> On Mon, Nov 04, 2013 at 12:17:20PM -0500, Waiman Long wrote:
> > There is a pending patch in the rwsem patch series that adds a generic
> > MCS locking helper functions to do MCS-style locking. This patch
> > will enable the queue rwlock to use that generic MCS lock/unlock
> > primitives for internal queuing. This patch should only be merged
> > after the merging of that generic MCS locking patch.
> >
> > Signed-off-by: Waiman Long <Waiman.Long@...com>
>
> This one might address at least some of the earlier memory-barrier
> issues, assuming that the MCS lock itself is properly memory-barriered.

Paul, I would appreciate it if you could take a look at the latest
version of the MCS lock with load-acquire and store-release to see if
it is now properly memory-barriered.
Thanks.
Tim