Message-ID: <20140211093805.GA28048@gmail.com>
Date: Tue, 11 Feb 2014 10:38:06 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Jason Low <jason.low2@...com>, linux-kernel@...r.kernel.org,
Waiman Long <waiman.long@...com>, paulmck@...ux.vnet.ibm.com,
torvalds@...ux-foundation.org, tglx@...utronix.de, riel@...hat.com,
akpm@...ux-foundation.org, davidlohr@...com, hpa@...or.com,
andi@...stfloor.org, aswin@...com, scott.norton@...com,
chegu_vinod@...com
Subject: Re: [PATCH 5/8] locking, mutex: Cancelable MCS lock for adaptive
spinning

* Peter Zijlstra <peterz@...radead.org> wrote:
> On Mon, Feb 10, 2014 at 02:04:22PM -0800, Jason Low wrote:
> > On Mon, 2014-02-10 at 22:32 +0100, Peter Zijlstra wrote:
> > > Is adding that really much faster than the relatively straight path
> > > osq_wait_next() would walk to hit the same exit?
> > >
> > > The only reason I pulled out the above cmpxchg() is because it's the
> > > uncontended fast path, which seems like a special enough case.
> >
> > So it would avoid 2 extra checks (*lock == node) and (node->next) in the
> > osq_wait_next() path, which aren't necessary when node->next != NULL.
> >
> > And I think node->next != NULL can be considered a special enough case
> > after cmpxchg() fails, because in the contended case we expect
> > node->next to be pointing at something. The only times node->next is
> > NULL after cmpxchg() fails are during a very small race window with
> > osq_lock(), and when the next node is unqueuing due to need_resched,
> > which is also a very small window.
>
> True all; now if only we had a useful benchmark so we could test if
> it makes a difference or not :-)
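
[Editor's note: for reference, here is the shape of the path being
debated, as a self-contained userspace C11 sketch. The
osq_unlock()/osq_wait_next()/node naming follows the thread; everything
else is an illustrative assumption, not the patch itself.]

#include <stdatomic.h>
#include <stddef.h>

struct osq_node {
	struct osq_node *_Atomic next;
	atomic_int locked;
};

/*
 * Slow path: spin until we either clear the tail (no successor will
 * ever show up) or a successor appears.  These are the two extra
 * checks (*lock == node, node->next) discussed above.
 */
static struct osq_node *osq_wait_next(struct osq_node *_Atomic *lock,
				      struct osq_node *node)
{
	for (;;) {
		struct osq_node *expected = node;

		if (atomic_load(lock) == node &&
		    atomic_compare_exchange_strong(lock, &expected, NULL))
			return NULL;		/* we were the last node */

		struct osq_node *next = atomic_exchange(&node->next, NULL);
		if (next)
			return next;		/* successor appeared */
	}
}

void osq_unlock(struct osq_node *_Atomic *lock, struct osq_node *node)
{
	struct osq_node *expected = node;
	struct osq_node *next;

	/* Uncontended fast path: we are the only queued node. */
	if (atomic_compare_exchange_strong(lock, &expected, NULL))
		return;

	/*
	 * The check under discussion: after a failed cmpxchg() a
	 * successor is usually linked in already, so hand off the lock
	 * without entering osq_wait_next() at all.
	 */
	next = atomic_exchange(&node->next, NULL);
	if (next) {
		atomic_store(&next->locked, 1);
		return;
	}

	/* Small race windows (concurrent enqueue/unqueue): slow path. */
	next = osq_wait_next(lock, node);
	if (next)
		atomic_store(&next->locked, 1);
}

The open question above is whether that xchg() short-cut measurably
beats simply falling into osq_wait_next(), which ends up performing the
same two checks.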

Having useful 'perf bench lock' sub-test(s) that mimic the AIM7
workload (and other workloads that exercise locking) would address
that concern to a large degree.
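
[Editor's note: no such sub-test existed at the time. Purely as an
illustration, here is a minimal pthread contention loop of the kind
such a sub-test could wrap; NTHREADS and NITERS are made-up
parameters, and a userspace mutex contends on the futex path rather
than the in-kernel mutex that AIM7 hammers through syscall-heavy
workloads.]

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8			/* illustrative */
#define NITERS   1000000		/* illustrative */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long counter;

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < NITERS; i++) {
		pthread_mutex_lock(&lock);
		counter++;		/* deliberately tiny critical section */
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("counter = %lu (expected %lu)\n",
	       counter, (unsigned long)NTHREADS * NITERS);
	return 0;
}

Build with gcc -O2 -pthread and time it across thread counts; a real
'perf bench lock' sub-test would want to drive kernel-side locking the
way AIM7 does, rather than a userspace futex.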

Thanks,

	Ingo