Message-ID: <20140226092210.GH18404@twins.programming.kicks-ass.net>
Date: Wed, 26 Feb 2014 10:22:10 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Jason Low <jason.low2@...com>
Cc: linux-kernel@...r.kernel.org, Waiman Long <waiman.long@...com>,
mingo@...nel.org, paulmck@...ux.vnet.ibm.com,
torvalds@...ux-foundation.org, tglx@...utronix.de, riel@...hat.com,
akpm@...ux-foundation.org, davidlohr@...com, hpa@...or.com,
andi@...stfloor.org, aswin@...com, scott.norton@...com,
chegu_vinod@...com
Subject: Re: [PATCH 5/8] locking, mutex: Cancelable MCS lock for adaptive
spinning
On Tue, Feb 25, 2014 at 11:56:19AM -0800, Jason Low wrote:
> On Mon, 2014-02-10 at 20:58 +0100, Peter Zijlstra wrote:
>
> > +unqueue:
> > + /*
> > + * Step - A -- stabilize @prev
> > + *
> > + * Undo our @prev->next assignment; this will make @prev's
> > + * unlock()/unqueue() wait for a next pointer since @lock points to us
> > + * (or later).
> > + */
> > +
> > + for (;;) {
> > + if (prev->next == node &&
> > + cmpxchg(&prev->next, node, NULL) == node)
> > + break;
> > +
> > +		/*
> > +		 * We can only fail the cmpxchg() racing against an unlock(),
> > +		 * in which case we should observe @node->locked becoming
> > +		 * true.
> > +		 */
> > + if (smp_load_acquire(&node->locked))
> > + return true;
I've stuck one in right about here, so that we don't unduly delay the
cmpxchg() after the load of prev.
> > +
> > + /*
> > + * Or we race against a concurrent unqueue()'s step-B, in which
> > + * case its step-C will write us a new @node->prev pointer.
> > + */
> > + prev = ACCESS_ONCE(node->prev);
>
> Should we also add an arch_mutex_cpu_relax() to this loop?
>
>