Message-ID: <20150731155713.GQ27280@linux.vnet.ibm.com>
Date: Fri, 31 Jul 2015 08:57:13 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
jiangshanlai@...il.com, dipankar@...ibm.com,
akpm@...ux-foundation.org, mathieu.desnoyers@...icios.com,
josh@...htriplett.org, tglx@...utronix.de, rostedt@...dmis.org,
dhowells@...hat.com, edumazet@...gle.com, dvhart@...ux.intel.com,
fweisbec@...il.com, oleg@...hat.com, bobby.prani@...il.com,
dave@...olabs.net, waiman.long@...com
Subject: Re: [PATCH tip/core/rcu 19/19] rcu: Add fastpath bypassing funnel locking
On Thu, Jul 30, 2015 at 06:34:27PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 30, 2015 at 08:34:52AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 30, 2015 at 04:44:55PM +0200, Peter Zijlstra wrote:
>
> > > If the extra read before the cmpxchg() does not hurt, we should do the
> > > same for mutex and make the above redundant.
> >
> > I am pretty sure that different hardware wants it done differently. :-/
>
> I think that most archs won't notice since any RmW includes a load of
> that variable anyhow. The only case where it can matter is if the RmW is
> done outside of the normal cache hierarchy -- like on Power, where the
> ll/sc bypasses the L1.
Some years back, AMD and Intel variants of x86 had different preferences
on this matter. Timings indicated that one or the other of them (I
cannot recall which) would get the cacheline shared, then have to get
it exclusive, while the other would get it exclusive to begin with.
I honestly do not know what the preferences of current Power hardware
might be.
Thanx, Paul
--