Message-ID: <1393358179.7727.34.camel@j-VirtualBox>
Date: Tue, 25 Feb 2014 11:56:19 -0800
From: Jason Low <jason.low2@...com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: linux-kernel@...r.kernel.org, Waiman Long <waiman.long@...com>,
mingo@...nel.org, paulmck@...ux.vnet.ibm.com,
torvalds@...ux-foundation.org, tglx@...utronix.de, riel@...hat.com,
akpm@...ux-foundation.org, davidlohr@...com, hpa@...or.com,
andi@...stfloor.org, aswin@...com, scott.norton@...com,
chegu_vinod@...com
Subject: Re: [PATCH 5/8] locking, mutex: Cancelable MCS lock for adaptive
spinning

On Mon, 2014-02-10 at 20:58 +0100, Peter Zijlstra wrote:
> +unqueue:
> + /*
> + * Step - A -- stabilize @prev
> + *
> + * Undo our @prev->next assignment; this will make @prev's
> + * unlock()/unqueue() wait for a next pointer since @lock points to us
> + * (or later).
> + */
> +
> + for (;;) {
> + if (prev->next == node &&
> + cmpxchg(&prev->next, node, NULL) == node)
> + break;
> +
> + /*
> + * We can only fail the cmpxchg() racing against an unlock(),
> + * in which case we should observe @node->locked becoming
> + * true.
> + */
> + if (smp_load_acquire(&node->locked))
> + return true;
> +
> + /*
> + * Or we race against a concurrent unqueue()'s step-B, in which
> + * case its step-C will write us a new @node->prev pointer.
> + */
> + prev = ACCESS_ONCE(node->prev);
Should we also add an arch_mutex_cpu_relax() to this loop?
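
For reference, a sketch of what that could look like (just an
illustration of the suggestion, based on the quoted hunk;
arch_mutex_cpu_relax() is the existing per-arch relax hook the mutex
spinning code already uses, and where exactly it belongs in the loop is
open):

	for (;;) {
		if (prev->next == node &&
		    cmpxchg(&prev->next, node, NULL) == node)
			break;

		/*
		 * Failing the cmpxchg() means we raced against an
		 * unlock(), in which case @node->locked will become true.
		 */
		if (smp_load_acquire(&node->locked))
			return true;

		/* Proposed addition: relax while the race resolves. */
		arch_mutex_cpu_relax();

		/*
		 * Or we raced against a concurrent unqueue()'s step-B;
		 * its step-C will write us a new @node->prev pointer.
		 */
		prev = ACCESS_ONCE(node->prev);
	}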