Message-Id: <20180407233741.GM3948@linux.vnet.ibm.com>
Date: Sat, 7 Apr 2018 16:37:42 -0700
From: "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Waiman Long <longman@...hat.com>,
Will Deacon <will.deacon@....com>,
linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
mingo@...nel.org, boqun.feng@...il.com, catalin.marinas@....com
Subject: Re: [PATCH 02/10] locking/qspinlock: Remove unbounded cmpxchg loop
from locking slowpath
On Sat, Apr 07, 2018 at 10:47:32AM +0200, Peter Zijlstra wrote:
> On Fri, Apr 06, 2018 at 02:09:53PM -0700, Paul E. McKenney wrote:
> > It would indeed be good to not be in the position of having to trade off
> > forward-progress guarantees against performance, but that does appear to
> > be where we are at the moment.
>
> It depends, of course, on how unfair cmpxchg is. On x86 we trade one
> cmpxchg loop for another, so the patch doesn't cure anything at all
> there. And our cmpxchg has 'some' hardware fairness to it.
>
> So while the patch is 'good' for platforms that have native fetch-or,
> it doesn't help (or in our case even hurts) those that do not.
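For reference, on an architecture without a native fetch-or instruction,
atomic_fetch_or() is itself built from a cmpxchg loop, along these lines
(a hand-written sketch using C11 atomics, not the kernel's actual
implementation):

#include <stdatomic.h>

/* Emulate fetch-or with a compare-and-swap loop, as architectures
 * lacking a native fetch-or instruction must.  Illustrative only. */
static unsigned int emulated_fetch_or(atomic_uint *v, unsigned int mask)
{
	unsigned int old = atomic_load_explicit(v, memory_order_relaxed);

	/* Retry until no other CPU modifies *v between the load and
	 * the compare-and-swap; this loop is unbounded in theory. */
	while (!atomic_compare_exchange_weak_explicit(v, &old, old | mask,
						      memory_order_acquire,
						      memory_order_relaxed))
		;	/* 'old' is reloaded by the failed CAS */

	return old;
}

Which is why, on x86, the patch just trades the qspinlock cmpxchg loop
for the one hidden inside atomic_fetch_or().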
Might need different implementations for different architectures, then.
Or take advantage of the fact that x86 can do a native fetch-or to the
topmost bit, if that helps.
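For the single-bit case, a sketch of what I have in mind: x86's LOCK BTS
atomically sets a bit and hands back its previous value in the carry
flag, so no retry loop is needed.  Illustrative GCC inline asm (the
"=@ccc" flag-output constraint needs GCC 6 or later), not the actual
arch code:

/* Atomically set bit 31 of *v and return its previous value.
 * LOCK BTS leaves the old bit in CF; "=@ccc" captures CF directly. */
static inline int fetch_or_topmost_bit(unsigned int *v)
{
	int oldbit;

	asm volatile("lock btsl %2, %1"
		     : "=@ccc" (oldbit), "+m" (*v)
		     : "Ir" (31)
		     : "memory");
	return oldbit;
}

If the pending bit were the topmost bit, the slowpath could use
something like this in place of the cmpxchg loop on x86 while other
architectures kept their native fetch-or.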
Thanx, Paul